1. Incorporating physics into data-driven computer vision
Kadambi, A., de Melo, C., Hsieh, C.-J., Srivastava, M., & Soatto, S., Nature Machine Intelligence, 2023
Many computer vision techniques infer properties of our physical world from images. While images are formed through the physics of light and mechanics, computer vision techniques are typically data-driven. This trend is mostly driven by performance: classical techniques from physics-based vision often do not score as highly on standard metrics as modern deep learning. However, recent research, covered in this Perspective, has shown that physical models can be included as constraints in data-driven pipelines. In doing so, one can combine the performance benefits of a data-driven method with the advantages offered by a physics-based method, such as interpretability, falsifiability, and generalizability. The aim of this Perspective is to provide an overview of specific approaches by which physical models can be integrated into artificial intelligence (AI) pipelines, referred to as physics-based machine learning. We discuss technical approaches that range from modifications to the dataset, network design, loss functions, optimization, and regularization schemes.
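One of the integration strategies the Perspective surveys, adding a physics-based term to the loss function, can be illustrated with a minimal sketch. The function names, the toy physical relation (y = x²), and the weighting value are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def data_loss(pred, target):
    """Standard supervised term: mean squared error against labels."""
    return np.mean((pred - target) ** 2)

def physics_residual(pred, x):
    """Soft penalty for violating a known physical relation.
    Toy constraint for illustration: predictions should satisfy y = x**2."""
    return np.mean((pred - x ** 2) ** 2)

def total_loss(pred, target, x, lam=0.1):
    """Combined objective: data fit plus weighted physics constraint.
    lam trades off fidelity to labels against fidelity to the physical model."""
    return data_loss(pred, target) + lam * physics_residual(pred, x)

x = np.linspace(0.0, 1.0, 5)
target = x ** 2          # ground truth that obeys the physical law
pred = x ** 2 + 0.01     # slightly-off predictions incur both penalties
loss = total_loss(pred, target, x)
```

A prediction that both matches the labels and satisfies the physical relation drives the combined loss to zero; violations of either term raise it.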
2. Social functions of machine emotional expressions
de Melo, C., Gratch, J., Marsella, S., & Pelachaud, C., Proceedings of the IEEE, 2023
Virtual humans and social robots frequently generate behaviors that human observers naturally see as expressing emotion. In this review article, we highlight that these expressions can have important benefits for human-machine interaction. We first summarize the psychological findings on how emotional expressions achieve important social functions in human relationships and highlight that artificial emotional expressions can serve analogous functions in human-machine interaction. We then review computational methods for determining what expressions make sense to generate within the context of an interaction and how to realize those expressions across multiple modalities such as facial expressions, voice, language, and touch. The use of synthetic expressions raises a number of ethical concerns, and we conclude with a discussion of principles to achieve the benefits of machine emotion in ethical ways.
3. Emotion expression and cooperation under collective risks
de Melo, C., Santos, F. C., & Terada, K., iScience, 2023
The difficulties associated with solving Humanity's major global challenges have increasingly led world leaders and everyday citizens to publicly adopt strong emotional responses, with either mixed or unknown impacts on others' actions. Here, we present two experiments showing that non-verbal emotional expressions in group interactions play a critical role in determining how individuals behave when contributing to public goods entailing future and uncertain returns. Participants' investments were not only shaped by emotional expressions but also enhanced by anger when compared with joy. Our results suggest that global coordination may benefit from interaction in which emotion expressions can be paramount.
4. Next-generation deep learning based on simulators and synthetic data
de Melo, C., Torralba, A., Guibas, L., DiCarlo, J., Chellappa, R., & Hodgins, J., Trends in Cognitive Sciences, 2021
Deep learning (DL) is being successfully applied across multiple domains, yet these models learn in a most artificial way: they require large quantities of labeled data to grasp even simple concepts. Thus, the main bottleneck is often access to supervised data. Here, we highlight a trend in a potential solution to this challenge: synthetic data. Synthetic data are becoming accessible due to progress in rendering pipelines, generative adversarial models, and fusion models. Moreover, advancements in domain adaptation techniques help close the statistical gap between synthetic and real data. Paradoxically, this artificial solution is also likely to enable more natural learning, as seen in biological systems, including continual, multimodal, and embodied learning. Complementary to this, simulators and deep neural networks (DNNs) will also have a critical role in providing insight into the cognitive and neural functioning of biological systems. We also review the strengths of, and opportunities and novel challenges associated with, synthetic data.
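One common recipe for making synthetic data transfer to the real world is domain randomization: scene parameters are sampled so widely that the real domain falls inside the synthetic distribution. The sketch below is a hedged illustration of that idea only; the `render` function is a hypothetical stand-in for a real rendering pipeline, and all parameter names are assumptions:

```python
import random

def render(params):
    """Hypothetical stand-in for a renderer: returns a (sample, label) pair
    describing the generated scene rather than an actual image."""
    sample = {"lighting": params["lighting"], "texture": params["texture"]}
    return sample, params["object_class"]

def randomized_dataset(n, classes=("car", "person")):
    """Generate n labeled synthetic samples with widely randomized
    scene parameters (lighting intensity, surface texture, object class)."""
    data = []
    for _ in range(n):
        params = {
            "lighting": random.uniform(0.1, 1.0),
            "texture": random.choice(["noise", "checker", "photo"]),
            "object_class": random.choice(classes),
        }
        data.append(render(params))
    return data

samples = randomized_dataset(100)
```

Because labels come for free from the generating parameters, such a pipeline sidesteps the supervised-data bottleneck the abstract describes; domain adaptation techniques then close the remaining statistical gap.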
5. Emotion expressions shape human social norms and reputations
de Melo, C., Terada, K., & Santos, F., iScience, 2021
The emergence of pro-social behaviors remains a key open challenge across disciplines. In this context, there is growing evidence that expressing emotions may foster human cooperation. However, it remains unclear how emotions shape individual choices and interact with other cooperation mechanisms. Here, we provide a comprehensive experimental analysis of the interplay of emotion expressions with two important mechanisms: direct and indirect reciprocity. We show that cooperation in an iterated prisoner's dilemma emerges from the combination of the opponent's initial reputation, past behaviors, and emotion expressions. Moreover, all factors influenced the social norm adopted when assessing the actions of others (i.e., how their counterparts' reputations are updated), thus reflecting longer-term consequences. We expose a new class of emotion-based social norms, where emotions are used to forgive those that defect but also to punish those that cooperate. These findings emphasize the importance of emotion expressions in fostering, directly and indirectly, cooperation in society.
6. Heuristic thinking and altruism towards machines in people impacted by Covid-19
de Melo, C., Gratch, J., & Krueger, F., iScience, 2021
Autonomous machines are poised to become pervasive, but most people treat machines differently: we are more willing to violate social norms and less likely to display altruism toward machines. Here, we report an unexpected effect that those impacted by Covid-19 (as measured by a post-traumatic stress disorder scale) show a sharp reduction in this difference. Participants engaged in the dictator game with humans and machines and, consistent with prior research on disasters, those impacted by Covid-19 displayed more altruism toward other humans. Unexpectedly, participants impacted by Covid-19 displayed equal altruism toward human and machine partners. A mediation analysis suggests that altruism toward machines was explained by an increase in heuristic thinking (reinforcing prior theory that heuristic thinking encourages people to treat machines like people) and faith in technology (perhaps reflecting longer-term consequences on how we act with machines). These findings give insight, but also raise concerns, for the design of technology.
7. Risk of injury in moral dilemmas with autonomous vehicles
de Melo, C., Marsella, S., & Gratch, J., Frontiers in Robotics and AI, 2021
As autonomous machines, such as automated vehicles (AVs) and robots, become pervasive in society, they will inevitably face moral dilemmas where they must make decisions that risk injuring humans. However, prior research has framed these dilemmas in starkly simple terms, i.e., framing decisions as life and death and neglecting the influence of risk of injury to the involved parties on the outcome. Here, we focus on this gap and present experimental work that systematically studies the effect of risk of injury on the decisions people make in these dilemmas. In four experiments, participants were asked to program their AVs to either save five pedestrians, which we refer to as the utilitarian choice, or save the driver, which we refer to as the nonutilitarian choice. The results indicate that most participants made the utilitarian choice but that this choice was moderated in important ways by perceived risk to the driver and risk to the pedestrians. As a second contribution, we demonstrate the value of formulating AV moral dilemmas in a game-theoretic framework that considers the possible influence of others’ behavior. In the fourth experiment, we show that participants were more (less) likely to make the utilitarian choice, the more utilitarian (nonutilitarian) other drivers behaved; furthermore, unlike the game-theoretic prediction that decision-makers inevitably converge to nonutilitarianism, we found significant evidence of utilitarianism. We discuss theoretical implications for our understanding of human decision-making in moral dilemmas and practical guidelines for the design of autonomous machines that solve these dilemmas while, at the same time, being likely to be adopted in practice.
8. The interplay of emotion expressions and strategy in promoting cooperation in the iterated prisoner's dilemma
de Melo, C., & Terada, K., Scientific Reports, 2020
The iterated prisoner's dilemma has been used to study human cooperation for decades. The recent discovery of extortion and generous strategies renewed interest in the role of strategy in shaping behavior in this dilemma. But what if players could perceive each other's emotional expressions? Despite increasing evidence that emotion signals influence decision making, the effects of emotion in this dilemma have been mostly neglected. Here we show that emotion expressions moderate the effect of generous strategies, increasing or reducing cooperation according to the intention communicated by the signal; in contrast, expressions by extortionists had no effect on participants' behavior, revealing a limitation of highly competitive strategies. We provide evidence that these effects are mediated mostly by inferences about others' intentions made from strategy and emotion. These findings provide insight into the value, as well as the limits, of behavioral strategies and emotion signals for cooperation.
9. Reducing cognitive load and improving warfighter problem solving with intelligent virtual assistants
de Melo, C., Kim, K., Norouzi, N., Bruder, G., & Welch, G., Frontiers in Psychology, 2020
Recent times have seen increasing interest in conversational assistants (e.g., Amazon Alexa) designed to help users in their daily tasks. In military settings, it is critical to design assistants that are, simultaneously, helpful and able to minimize the user’s cognitive load. Here, we show that embodiment plays a key role in achieving that goal. We present an experiment where participants engaged in an augmented reality version of the relatively well-known desert survival task. Participants were paired with a voice assistant, an embodied assistant, or no assistant. The assistants made suggestions verbally throughout the task, whereas the embodied assistant further used gestures and emotion to communicate with the user. Our results indicate that both assistant conditions led to higher performance over the no assistant condition, but the embodied assistant achieved this with less cognitive burden on the decision maker than the voice assistant, which is a novel contribution. We discuss implications for the design of intelligent collaborative systems for the warfighter.
10. Human cooperation when acting through autonomous machines
de Melo, C., Marsella, S., & Gratch, J., Proceedings of the National Academy of Sciences U.S.A., 116, 3482-3487, 2019
Recent times have seen an emergence of intelligent machines that act autonomously on our behalf, such as autonomous vehicles. Despite promises of increased efficiency, it is not clear whether this paradigm shift will change how we decide when our self-interest (e.g., comfort) is pitted against the collective interest (e.g., environment). Here we show that acting through machines changes the way people solve these social dilemmas and we present experimental evidence showing that participants program their autonomous vehicles to act more cooperatively than if they were driving themselves. We show this happens because programming causes selfish short-term rewards to become less salient, leading to considerations of broader societal goals. We also show that the programmed behavior is influenced by past experience. Finally, we report evidence that the effect generalizes beyond the domain of autonomous vehicles. We discuss implications for designing autonomous machines that contribute to a more cooperative society.
11. Cooperation with autonomous machines through culture and emotion
de Melo, C., & Terada, K., PLOS ONE, 2019
As machines that act autonomously on behalf of others (e.g., robots) become integral to society, it is critical we understand the impact on human decision-making. Here we show that people readily engage in social categorization distinguishing humans ("us") from machines ("them"), which leads to reduced cooperation with machines. However, we show that a simple cultural cue (the ethnicity of the machine's virtual face) mitigated this bias for participants from two distinct cultures (Japan and United States). We further show that situational cues of affiliative intent (namely, expressions of emotion) overrode expectations of coalition alliances from social categories: when machines were from a different culture, participants showed the usual bias when competitive emotion was shown (e.g., joy following exploitation); in contrast, participants cooperated just as much with humans as with machines that expressed cooperative emotion (e.g., joy following cooperation). These findings reveal a path for increasing cooperation in society through autonomous machines.
12. Toward a unified theory of learned trust in interpersonal and human-machine interactions
Juvina, I., Collins, M., Larue, O., Kennedy, W., De Visser, E., & de Melo, C., ACM Transactions on Interactive Intelligent Systems, 9, 24-31, 2019
A proposal for a unified theory of learned trust implemented in a cognitive architecture is presented. The theory is instantiated as a computational cognitive model of learned trust that integrates several seemingly unrelated categories of findings from the literature on interpersonal and human-machine interactions and makes unintuitive predictions for future studies. The model relies on a combination of learning mechanisms to explain a variety of phenomena such as trust asymmetry, the higher impact of early trust breaches, the black-hat/white-hat effect, the correlation between trust and cognitive ability, and the higher resilience of interpersonal as compared to human-machine trust. In addition, the model predicts that trust decays in the absence of evidence of trustworthiness or untrustworthiness. The implications of the model for the advancement of the theory on trust are discussed. Specifically, this work suggests two more trust antecedents on the trustor’s side: perceived trust necessity and cognitive ability to detect cues of trustworthiness.
13. People do not feel guilty about exploiting machines
de Melo, C., Marsella, S., & Gratch, J., ACM Transactions on Computer-Human Interaction, 23, 2017
Guilt and envy play an important role in social interaction. Guilt occurs when individuals cause harm to others or break social norms. Envy occurs when individuals compare themselves unfavorably to others and desire to benefit from the others’ advantage. In both cases, these emotions motivate people to act and change the status quo: following guilt, people try to make amends for the perceived transgression and, following envy, people try to harm envied others. In this paper, we present two experiments that study participants' experience of guilt and envy when engaging in social decision making with machines and humans. The results showed that, though experiencing the same level of envy, people felt considerably less guilt with machines than with humans. These effects occurred both with subjective and behavioral measures of guilt and envy, and in three different economic games: public goods, ultimatum, and dictator game. This poses an important challenge for human-computer interaction because, as shown here, it leads people to systematically exploit machines, when compared to humans. We discuss theoretical and practical implications for the design of human-machine interaction systems that hope to achieve the kind of efficiency – cooperation, fairness, reciprocity, etc. – we see in human-human interaction.
14. Social decisions and fairness change when people's interests are represented by autonomous agents
de Melo, C., Marsella, S., & Gratch, J., Autonomous Agents and Multi-Agent Systems, 2017
In the realms of AI and science fiction, agents are fully-autonomous systems that can be perceived as acting of their own volition to achieve their own goals. But in the real world, the term “agent” more commonly refers to a person that serves as a representative for a human client and works to achieve this client’s goals (e.g., lawyers and real estate agents). Yet, until the day that computers become fully autonomous, agents in the first sense are really agents in the second sense as well: computer agents that serve the interests of the human user or corporation they represent. In a series of experiments, we show that human decision-making and fairness is significantly altered when agent representatives are inserted into common social decisions such as the ultimatum game. Similar to how they behave with human representatives, people show less regard for other people (e.g., exhibit more self-interest and less fairness), when the other is represented by an agent. However, in contrast to the human literature, people show more regard for others and increased fairness when “programming” an agent to represent their own interests. This finding confirms the conjecture by some in the autonomous agent community that the very act of programming an agent changes how people make decisions. Our findings provide insight into the cognitive mechanisms that underlie these effects and we discuss the implication for the design of autonomous agents that represent the interests of humans.
15. Physiological evidence for a dual process model of the social effects of emotion in computers
Choi, A., de Melo, C., Khooshabeh, P., Woo, W., & Gratch, J., International Journal of Human-Computer Studies, 74, 41-53, 2015
There has been recent interest in the impact of the emotional expressions of computers on people's decision making. However, despite a growing body of empirical work, the mechanism underlying such effects is still not clearly understood. To address this issue, the paper explores two kinds of processes studied by emotion theorists in human-human interaction: inferential processes, whereby people retrieve information from emotion expressions about others' beliefs, desires, and intentions; and affective processes, whereby emotion expressions evoke emotions in others, which then influence their decisions. To tease apart these two processes as they occur in human-computer interaction, we looked at physiological measures (electrodermal activity and heart rate deceleration). We present two experiments where participants engaged in social dilemmas with embodied agents that expressed emotion. Our results show, first, that people's decisions were influenced by affective and cognitive processes and, according to the prevailing process, people behaved differently and formed contrasting subjective ratings of the agents; second, we show that an individual trait known as electrodermal lability, which measures people's physiological sensitivity, predicted the extent to which affective or inferential processes dominated the interaction. We discuss implications for the design of embodied agents and decision-making systems that use emotion expression to enhance interaction between humans and computers.
16. Reading people's minds from emotion expressions in interdependent decision making
de Melo, C., Carnevale, P., Read, S., & Gratch, J., Journal of Personality and Social Psychology, 106(1), 73-88, 2014
How do people make inferences about other people's minds from their emotion displays? The ability to infer others' beliefs, desires, and intentions from their facial expressions should be especially important in interdependent decision making, when people make decisions from beliefs about the others' intention to cooperate. Five experiments tested the general proposition that people follow principles of appraisal when making inferences from emotion displays, in context. Experiment 1 found that the same emotion display produced opposite effects depending on context: when the other was competitive, a smile on the other's face evoked a more negative response than when the other was cooperative. Experiment 2 found that the essential information from emotion displays was derived from appraisals (e.g., is the current state of affairs conducive to my goals? Who is to blame for it?): facial displays of emotion had the same impact on people's decision making as textual expressions of the corresponding appraisals. Experiments 3, 4, and 5 used multiple mediation analyses and a causal-chain design: results supported the proposition that beliefs about others' appraisals mediate the effects of emotion displays on expectations about others' intentions. We suggest a model based on appraisal theories of emotion that posits an inferential mechanism whereby people retrieve, from emotion expressions, information about others' appraisals, which then leads to inferences about others' mental states. This work has implications for the design of algorithms that drive agent behavior in human-agent strategic interaction, an emerging domain at the interface of computer science and social psychology.
17. Humans vs. computers: Impact of emotion expressions on people's decision making
de Melo, C., Carnevale, P., & Gratch, J., IEEE Transactions on Affective Computing, 6(2), 127-136, 2014
Recent research in perception and theory of mind reveals that people show different behavior and lower activation of brain regions associated with mentalizing (i.e., the inference of others' mental states) when engaged in decision making with computers, compared to humans. These findings are important for affective computing because they suggest people's decisions might be influenced differently according to whether they believe emotional expressions shown by computers are being generated by algorithms or humans. To test this, we had people engage in a social dilemma (Experiment 1) or negotiation (Experiment 2) with virtual humans that were either perceived to be agents (i.e., controlled by computers) or avatars (i.e., controlled by humans). The results showed that such perceptions have a deep impact on people's decisions. In Experiment 1, people cooperated more with virtual humans that showed cooperative facial displays (e.g., joy after mutual cooperation) than competitive displays (e.g., joy when the participant was exploited), but the effect was stronger with avatars (d = .601) than with agents (d = .360). In Experiment 2, people conceded more to angry than neutral virtual humans but, again, the effect was much stronger with avatars (d = 1.162) than with agents (d = .066). Participants also showed less anger towards avatars and formed more positive impressions of avatars when compared to agents.
18. The impact of emotion displays in embodied agents on emergence of cooperation with people
de Melo, C., Carnevale, P., & Gratch, J., Presence: Teleoperators and Virtual Environments Journal, 20(5), 449-465, 2012
Acknowledging the social functions of emotion in people, there has been growing interest in the interpersonal effect of emotion on cooperation in social dilemmas. This article explores whether and how facial displays of emotion in embodied agents impact cooperation with human users. The article describes an experiment where participants play the iterated prisoner's dilemma against two different agents that play the same strategy (tit-for-tat), but communicate different goal orientations (cooperative vs. individualistic) through their patterns of facial displays. The results show that participants are sensitive to differences in the emotion displays and cooperate significantly more with the cooperative agent. The results also reveal that cooperation rates are only significantly different when people play first with the individualistic agent. This is in line with the well-known black-hat/white-hat effect from the negotiation literature. However, this study emphasizes that people can discern a cooperator (white-hat) from a non-cooperator (black-hat) based only on emotion displays. We propose that people are able to identify the cooperator by inferring the agent's goals from the emotion displays. We refer to this as reverse appraisal, as it reverses the usual process in which appraising relevant events with respect to one's goals leads to specific emotion displays. We discuss implications for designing human-computer interfaces and understanding human-human interaction.
19. Affective engagement to emotional facial expressions of embodied social agents in a decision-making game
Choi, A., de Melo, C., Woo, W., & Gratch, J., Computer Animation and Virtual Worlds, 23(3-4), 331-342, 2012
Previous research illustrates that people can be influenced by the emotional displays of computer-generated agents. What is less clear is whether these influences arise from cognitive or affective processes (i.e., do people use agent displays as information, or do the displays provoke user emotions?). To unpack these processes, we examine the decisions and physiological reactions of participants (heart rate and electrodermal activity) when engaged in a decision task (prisoner's dilemma game) with emotionally expressive agents. Our results replicate findings that people's decisions are influenced by such emotional displays, but these influences differ depending on the extent to which these displays provoke an affective response. Specifically, we show that an individual difference known as electrodermal lability predicts the extent to which people will engage affectively or strategically with such agents, thereby better predicting their decisions. We discuss implications for designing agent facial expressions to enhance social interaction between humans and agents.
20. The influence of autonomic signals on perception of emotions in embodied agents
de Melo, C., Kenny, P., & Gratch, J., Applied Artificial Intelligence, 24(6), 494-509, 2010
Specific patterns of autonomic activity have been reported when people experience emotions. Typical autonomic signals that change with emotion are wrinkles, blushing, sweating, tearing, and respiration. This article explores whether these signals can also influence the perception of emotion in embodied agents. The article first reviews the literature on specific autonomic signal patterns associated with certain affective states. Next, it proceeds to describe a real-time model for wrinkles, blushing, sweating, tearing, and respiration that is capable of implementing those patterns. Two studies are then described. In the first, subjects compare surprise, sadness, anger, shame, pride, and fear expressed in an agent with or without blushing, wrinkles, sweating, or tears. In the second, subjects compare excitement, relaxation, focus, pain, relief, boredom, anger, fear, panic, disgust, surprise, startle, sadness, and joy expressed in an agent with or without typical respiration patterns. The first study shows a statistically significant positive effect on perception of surprise, sadness, anger, shame, and fear. The second study shows a statistically significant positive effect on perception of excitement, pain, relief, boredom, anger, fear, panic, disgust, and startle. The relevance of these results to artificial intelligence and intelligent virtual agents is discussed.
21. Real-time expression of affect through respiration
de Melo, C., Kenny, P., & Gratch, J., Computer Animation and Virtual Worlds, 21(3-4), 225-234, 2010
Affect has been shown to influence respiration in people. This paper takes this insight and proposes a real-time model to express affect through respiration in virtual humans. Fourteen affective states are explored: excitement, relaxation, focus, pain, relief, boredom, anger, fear, panic, disgust, surprise, startle, sadness, and joy. Specific respiratory patterns are described from the literature for each of these affective states. Then, a real-time model of respiration is proposed that uses morphing to animate breathing and provides parameters to control respiration rate, respiration depth and the respiration cycle curve. These parameters are used to implement the respiratory patterns. Finally, a within-subjects study is described where subjects are asked to classify videos of the virtual human expressing each affective state with or without the specific respiratory patterns. The study was presented to 41 subjects and the results show that the model improved perception of excitement, pain, relief, boredom, anger, fear, panic, disgust, and startle.
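The abstract's parameterization (respiration rate, respiration depth, and the respiration cycle curve) can be sketched as a simple periodic function. This is an illustrative assumption about the parameterization only, not the paper's actual animation model; the curve-shaping exponent in particular is a stand-in for the paper's cycle-curve control:

```python
import math

def respiration(t, rate_hz=0.25, depth=1.0, shape=1.0):
    """Chest displacement at time t (seconds).
    rate_hz: breaths per second (rate); depth: amplitude (depth);
    shape: skews the cycle curve (shape > 1 shifts the peak later,
    a crude stand-in for the paper's cycle-curve parameter)."""
    phase = (t * rate_hz) % 1.0              # position within one breath cycle
    curved = phase ** shape                  # reshape the cycle curve
    return depth * 0.5 * (1.0 - math.cos(2.0 * math.pi * curved))

# An affective pattern is then just a parameter setting, e.g. a hypothetical
# "panic" pattern of fast, shallow breathing:
panic = [respiration(t * 0.1, rate_hz=0.8, depth=0.4) for t in range(40)]
```

Displacement runs from 0 (fully exhaled) to `depth` (peak inhalation), so each affective state in the paper would map to its own (rate, depth, curve) triple.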
22. Multimodal expression in virtual humans
de Melo, C., & Paiva, A., Computer Animation and Virtual Worlds, 17(3-4), 1-10, 2006
This work proposes a real-time virtual human multimodal expression model. Five modalities explore the affordances of the body: deterministic, non-deterministic, gesticulation, facial, and vocal expression. Deterministic expression is keyframe body animation. Non-deterministic expression is robotics-based procedural body animation. Vocal expression is voice synthesis, through Festival, and parameterization, through SABLE. Facial expression is lip-synch and emotion expression through a parametric muscle-based face model. Inspired by psycholinguistics, gesticulation expression is unconventional, idiosyncratic, and unconscious hand gesture animation described as sequences of Portuguese Sign Language hand shapes, positions, and orientations. Inspired by the arts, one modality goes beyond the body to explore the affordances of the environment and express emotions through camera, lights, and music. To control multimodal expression, this work proposes a high-level integrated synchronized markup language, the expressive markup language. Finally, three studies, involving a total of 197 subjects, evaluated the model in storytelling contexts and produced promising results.