Celso M. de Melo

Computer Scientist
Associate Editor, IEEE Transactions on Affective Computing
Contact: celso.miguel.de.melo@gmail.com

Proceedings of the National Academy of Sciences U.S.A.
1. Human cooperation when acting through autonomous machines
Autonomous machines that act on our behalf are bound to face situations where individual interest conflicts with collective interest, and we must understand whether people will cooperate when acting through them. We show, in the increasingly popular domain of autonomous vehicles, that people program machines to cooperate more than they would when acting directly with others.
Scientific Reports
2. Joint Effect of Emotion Expressions and Strategy on Cooperation
This research provides novel insight into the combined effects of strategy and emotion expressions on cooperation. It has important practical applications for the design of autonomous systems, suggesting that a proper combination of actions and emotion displays can maximize cooperation from humans.
3. Cooperation with autonomous machines through culture and emotion
This paper shows that people readily engage in social categorization distinguishing humans ("us") from machines ("them"), which leads to reduced cooperation with machines. However, we show that this bias can be mitigated through appropriate cues of in-group membership, such as culture, or situational cues of affiliative intent, such as emotion expressions.
Journal of Autonomous Agents and Multi-Agent Systems
4. People Are Fairer When Acting Via Agents
Increasingly, autonomous agents act on our behalf in health, finance, driving, defense, and other domains. This research suggests that people tend to adopt a broader, higher-level perspective when programming these agents and, thus, act more fairly than they do in direct interaction.
Journal of Personality and Social Psychology
5. Reading People's Minds From Emotion Expressions
Emotion expressions can be windows into other people's minds. This research shows that people make inferences from emotion expressions about how others are appraising the ongoing interaction and, from this information, about others' beliefs, desires, and intentions.

I am interested in creating machines that show the kind of intelligence we see in humans. For over 15 years, I have studied (a) human behavior with autonomous machines (e.g., human-machine cooperation); (b) AI models for complex human behavior (e.g., emotion); (c) multimodal expression in machines through face, voice, and body; and (d) new media that push the boundaries of human-machine interaction (e.g., augmented/virtual reality). This work has implications in commercial, industrial, medical, military, and entertainment domains.

I am a computer scientist and my research focuses on human-machine interaction, artificial intelligence, and virtual/augmented reality.

Previously, I completed a postdoc at the USC Marshall School of Business with Peter Carnevale. This research was funded by an NSF grant and focused on:

  • The interpersonal effects of emotion expressions on people's decision making, and the corresponding implications for the design of intelligent human-computer interaction systems;
  • Virtual humans, or three-dimensional characters that look and act like humans, as a computational interface for the future and a basic research tool for the behavioral sciences;
  • How perceptions of cognitive and affective ability in others influence decision making, and the consequences for human-computer and computer-mediated decision making.

I earned my Ph.D. in Computer Science at the University of Southern California, where I created cognitive computational models of emotion and decision making using various artificial intelligence techniques (e.g., machine learning). This work was done at the Institute for Creative Technologies with Jonathan Gratch.

Before that, I received an M.Sc. in Computer Science at the Technical University of Lisbon (IST) with Ana Paiva at the Synthetic Characters and Intelligent Agents Group (GAIPS). There, I began developing my virtual humans framework, which supports multimodal expression through face, gesture, and voice.

I was born in beautiful Mozambique and also am proud to be Portuguese.


Last updated: September 11th, 2020