I am a computer scientist with a research focus on artificial intelligence and human-machine interaction. I am interested in creating machines that show the kind of intelligence we see in humans, with a particular focus on embodied intelligence. My research and applied work have focused on (a) synthetic data for AI/ML; (b) simulators to enable the next generation of deep learning (e.g., embodied learning); (c) human behavior with autonomous machines (e.g., human-machine cooperation); (d) AI models for complex human behavior (e.g., emotion); (e) multimodal expression in machines through face, voice, and body; and (f) new media that push the boundaries of human-machine interaction (e.g., augmented/virtual reality). This work has implications in commercial, industrial, medical, military, and entertainment domains.
Previously, I completed a postdoc at the USC Marshall School of Business with Peter Carnevale. This research was funded by an NSF grant and focused on (a) the interpersonal effects of emotion expression on people's decision making, and the corresponding implications for the design of intelligent human-computer interaction systems; (b) virtual humans, i.e., three-dimensional characters that look and act like humans, as a computational interface for the future and as a basic research tool for the behavioral sciences; and (c) how perceptions of cognitive and affective ability in others influence decision making, and the consequences for human-computer and computer-mediated decision making.
I earned my Ph.D. in Computer Science at the University of Southern California. For my dissertation, I created cognitive computational models of emotion and decision making using various artificial intelligence techniques (e.g., machine learning). This work was done at the Institute for Creative Technologies with Jonathan Gratch.
Before that, I received an M.Sc. in Computer Science at the Technical University of Lisbon (IST) with Ana Paiva at the Synthetic Characters and Intelligent Agents Group (GAIPS). There, I began developing my virtual humans framework, which supports multimodal expression through face, gesture, and voice.
Last updated: July 16th, 2021