People do not feel guilty about exploiting machines
de Melo, C., Marsella, S., & Gratch, J., ACM Transactions on Computer-Human Interaction, 23, 2017
Guilt and envy play an important role in social interaction. Guilt occurs when individuals cause harm to others or break social norms. Envy occurs when individuals compare themselves unfavorably to others and desire to benefit from the others' advantage. In both cases, these emotions motivate people to act and change the status quo: following guilt, people try to make amends for the perceived transgression and, following envy, people try to harm envied others. In this paper, we present two experiments that study participants' experience of guilt and envy when engaging in social decision making with machines and humans. The results showed that, though experiencing the same level of envy, people felt considerably less guilt with machines than with humans. These effects occurred with both subjective and behavioral measures of guilt and envy, and in three different economic games: the public goods, ultimatum, and dictator games. This poses an important challenge for human-computer interaction because, as shown here, it leads people to systematically exploit machines more than humans. We discuss theoretical and practical implications for the design of human-machine interaction systems that hope to achieve the kind of efficiency – cooperation, fairness, reciprocity, etc. – we see in human-human interaction.
Social decisions and fairness change when people's interests are represented by autonomous agents
de Melo, C., Marsella, S., & Gratch, J., Journal of Autonomous Agents and Multiagent Systems, 2017
In the realms of AI and science fiction, agents are fully autonomous systems that can be perceived as acting of their own volition to achieve their own goals. But in the real world, the term “agent” more commonly refers to a person who serves as a representative for a human client and works to achieve this client's goals (e.g., lawyers and real estate agents). Yet, until the day that computers become fully autonomous, agents in the first sense are really agents in the second sense as well: computer agents that serve the interests of the human user or corporation they represent. In a series of experiments, we show that human decision-making and fairness are significantly altered when agent representatives are inserted into common social decisions such as the ultimatum game. Similar to how they behave with human representatives, people show less regard for other people (e.g., exhibit more self-interest and less fairness) when the other is represented by an agent. However, in contrast to the human literature, people show more regard for others and increased fairness when “programming” an agent to represent their own interests. This finding confirms the conjecture by some in the autonomous agent community that the very act of programming an agent changes how people make decisions. Our findings provide insight into the cognitive mechanisms that underlie these effects, and we discuss the implications for the design of autonomous agents that represent the interests of humans.
Reading people's minds from emotion expressions in interdependent decision making.
de Melo, C., Carnevale, P., Read, S., & Gratch, J., Journal of Personality and Social Psychology, 106(1), 73-88, 2014
How do people make inferences about other people's minds from their emotion displays? The ability to infer others' beliefs, desires, and intentions from their facial expressions should be especially important in interdependent decision making, where people make decisions from beliefs about the others' intention to cooperate. Five experiments tested the general proposition that people follow principles of appraisal when making inferences from emotion displays, in context. Experiment 1 found that the same emotion display produced opposite effects depending on context: when the other was competitive, a smile on the other's face evoked a more negative response than when the other was cooperative. Experiment 2 found that the essential information from emotion displays was derived from appraisals (e.g., is the current state of affairs conducive to my goals? Who is to blame for it?): facial displays of emotion had the same impact on people's decision making as textual expressions of the corresponding appraisals. Experiments 3, 4, and 5 used multiple mediation analyses and a causal-chain design: the results supported the proposition that beliefs about others' appraisals mediate the effects of emotion displays on expectations about others' intentions. We suggest a model based on appraisal theories of emotion that posits an inferential mechanism whereby people retrieve, from emotion expressions, information about others' appraisals, which then leads to inferences about others' mental states. This work has implications for the design of algorithms that drive agent behavior in human-agent strategic interaction, an emerging domain at the interface of computer science and social psychology.
Humans vs. computers: Impact of emotion expressions on people's decision making.
de Melo, C., Carnevale, P., & Gratch, J., IEEE Transactions on Affective Computing, 6(2), 127-136, 2014
Recent research in perception and theory of mind reveals that people show different behavior and lower activation of brain regions associated with mentalizing (i.e., the inference of others' mental states) when engaged in decision making with computers, when compared to humans. These findings are important for affective computing because they suggest people's decisions might be influenced differently according to whether they believe emotional expressions shown in computers are being generated by algorithms or humans. To test this, we had people engage in a social dilemma (Experiment 1) or negotiation (Experiment 2) with virtual humans that were either perceived to be agents (i.e., controlled by computers) or avatars (i.e., controlled by humans). The results showed that such perceptions have a deep impact on people's decisions: in Experiment 1, people cooperated more with virtual humans that showed cooperative facial displays (e.g., joy after mutual cooperation) than competitive displays (e.g., joy when the participant was exploited), but the effect was stronger with avatars (d = .601) than with agents (d = .360); in Experiment 2, people conceded more to angry than neutral virtual humans, but, again, the effect was much stronger with avatars (d = 1.162) than with agents (d = .066). Participants also showed less anger towards avatars and formed more positive impressions of avatars when compared to agents.
Emotion in games.
de Melo, C., Paiva, A., & Gratch, J., M. Angelides, H. Agius (Eds.), The Handbook of Digital Games, 575-592, 2014
Growing interest in the study of emotion in the behavioral sciences has led to the development of several psychological theories of human emotion. These theories, in turn, inspired computer scientists to propose computational models that synthesize, express, recognize, and interpret emotion. This cross-disciplinary research on emotion introduces new possibilities for digital games. Complementing techniques from the arts for drama and storytelling, these models can be used to drive believable non-player characters that experience properly motivated emotions and express them appropriately at the right time; these theories can also help interpret the emotions the human player is experiencing and suggest adequate reactions in the game. This chapter reviews relevant psychological theories of emotion as well as computational models of emotion and discusses implications for games. We give special emphasis to appraisal theories of emotion, undeniably one of the most influential theoretical perspectives within computational research. In appraisal theories, emotions arise from cognitive appraisal of events (e.g., is this event conducive to my goals? Who is responsible for this event? Can I cope with this event?). According to the pattern of appraisals that occurs, different emotions are experienced and expressed. Appraisal theories can, therefore, be used to synthesize emotions in games, which are then expressed in different ways. Complementarily, reverse appraisal has recently been proposed as a theory for the interpretation of emotion: people are argued to retrieve, from emotion displays, information about how others are appraising the ongoing interaction, which then leads to inferences about the others' intentions. Reverse appraisal can, thus, be used to infer, from human players' emotion displays, how they are appraising the game experience and, from this information, what their intentions in the game are.
This information can then be used to adjust game parameters or have non-player characters react to the player's intentions and, thus, help improve the player's overall experience.
Modeling gesticulation expression in virtual humans.
de Melo, C., & Paiva, A., N. Magnenat-Thalmann, L. Jain, & N. Ichalkaranje (Eds.), New Advances in Virtual Humans, 133-151, 2008
Gesticulation is the kind of unconscious, idiosyncratic, and unconventional gesture humans make in conversation or narration. This chapter reviews efforts made to harness the expressiveness of gesticulation in virtual humans and proposes one such model. First, psycholinguistics research is overviewed so as to understand how gesticulation occurs in humans. Then, relevant computer graphics and computational psycholinguistics systems are reviewed. Finally, a model for virtual human gesticulation expression is presented which supports: (a) real-time gesticulation animation described as sequences of constraints on static (Portuguese Sign Language hand shapes, orientation palm axis, orientation angle and handedness) and dynamic features; (b) synchronization between gesticulation and synthesized speech; (c) automatic reproduction of annotations in GestuRA, a gesticulation transcription algorithm; (d) expression control through an abstract integrated synchronized language – the Expression Markup Language (EML). Two studies, conducted to evaluate the model in a storytelling context, are also described.
Increasing fairness by delegating decisions to autonomous agents
de Melo, C., Marsella, S., & Gratch, J., Proceedings of Autonomous Agents and Multiagent Systems (AAMAS 17), 2017
There has been growing interest in autonomous agents that act on our behalf, or represent us, across various domains such as negotiation, transportation, health, finance, and defense. As these agent representatives become immersed in society, it is critical we understand whether and, if so, how they disrupt the traditional patterns of interaction with others. In this paper we study how programming agents to represent us shapes our decisions in social settings. Here we show that, when acting through agent representatives, people are considerably less likely to accept unfair offers from others, when compared to direct interaction with others. This result, thus, demonstrates that agent representatives have the potential to promote fairer outcomes. Moreover, we show that this effect can also occur when people are asked to “program” human representatives, thus revealing that the effect is caused by the act of programming itself. We argue this happens because programming requires the programmer to deliberate on all possible situations that might arise and, thus, promotes consideration of social norms – such as fairness – when making decisions. These results have important theoretical, practical, and ethical implications for the design of agent representatives and for the nature of people's decision making when they act through agents on their behalf.
Using virtual confederates to research intergroup bias and conflict.
de Melo, C., Carnevale, P., & Gratch, J., Best Paper Proceedings of the Annual Meeting of the Academy of Management (AOM 14), 2014
Virtual confederates (i.e., three-dimensional virtual characters that look and act like humans) have been gaining popularity as a research method in the social and medical sciences. Interest in this research method stems from the potential for increased experimental control, ease of replication, facilitated access to broader samples, and lower costs. We argue that virtual confederates are also a promising research tool for the study of intergroup behavior. To support this claim we replicate and extend key findings in the literature with virtual confederates. In Experiment 1 we demonstrate that people apply racial stereotypes to virtual confederates, and show a corresponding bias in terms of money offered in the dictator game. In Experiment 2 we show that people also exhibit an in-group bias when group membership is artificially created and based on interdependence through shared payoffs in a nested social dilemma. Our results further demonstrate that social categorization and bias can occur not only when people believe confederates are controlled by humans (i.e., they are avatars), but also when confederates are believed to be controlled by computer algorithms (i.e., they are agents). The results, nevertheless, show a basic bias in favor of avatars (the in-group in the “human category”) over agents (the out-group). Finally, our results (Experiments 2 and 3) establish that people can combine, in additive fashion, the effects of these social categories; a mechanism that, accordingly, can be used to reduce intergroup bias. We discuss implications for research in social categorization, intergroup bias, and conflict.
The effect of expression of anger and happiness in computer agents on negotiations with humans.
de Melo, C., Carnevale, P., & Gratch, J., Proceedings of Autonomous Agents and Multiagent Systems (AAMAS 11), 2011
There is now considerable evidence in social psychology, economics, and related disciplines that emotion plays an important role in negotiation. For example, humans make greater concessions in negotiation to an opposing human who expresses anger, and they make fewer concessions to an opponent who expresses happiness, compared to a no-emotion-expression control. However, in AI, despite the wide interest in negotiation as a means to resolve differences between agents and humans, emotion has been largely ignored. This paper explores whether expression of anger or happiness by computer agents, in a multi-issue negotiation task, can produce effects that resemble effects seen in human-human negotiation. The paper presents an experiment where participants play with agents that express emotions (anger vs. happiness vs. control) through different modalities (text vs. facial displays). An important distinction in our experiment is that participants are aware that they negotiate with computer agents. The data indicate that the emotion effects observed in past work with humans also occur in agent-human negotiation, and occur independently of modality of expression. The implications of these results are discussed for the fields of automated negotiation, intelligent virtual agents and artificial intelligence.
Expression of emotions using wrinkles, blushing, sweating and tears.
de Melo, C., & Gratch, J., Proceedings of the Intelligent Virtual Agents (IVA 09), 2009
Wrinkles, blushing, sweating, and tears are physiological manifestations of emotions in humans. Therefore, the simulation of these phenomena is important for the goal of building believable virtual humans that interact naturally and effectively with humans. This paper describes a real-time model for the simulation of wrinkles, blushing, sweating, and tears. A study is also conducted to assess the influence of the model on the perception of surprise, sadness, anger, shame, pride, and fear. The study follows a repeated-measures design in which subjects compare how well each emotion is expressed by virtual humans with or without these phenomena. The results reveal a significant positive effect on the perception of surprise, sadness, anger, shame, and fear. The relevance of these results is discussed for the fields of virtual humans and expression of emotions.