Transforming Scientific Presentations with Co-Presenter Agents
Thu 02.18.16
Although journal and conference articles are recognized as the most formal and enduring forms of scientific communication, oral presentations are central to science: they are the means by which researchers, practitioners, the media, and the public hear about the latest findings, become engaged and inspired, and by which scientific reputations are made. Yet despite decades of technological advances in computing and communication media, the fundamentals of oral scientific presentations have not advanced since software such as Microsoft's PowerPoint was introduced in the 1980s.
The PI's goal in this project is to revolutionize media-assisted oral presentations in general, and STEM presentations in particular, through the use of an intelligent, autonomous, life-sized, animated co-presenter agent that collaborates with a human presenter in preparing and delivering his or her talk in front of a live audience. The PI's pilot studies have demonstrated that audiences are receptive to this concept, and that the technology is especially effective for individuals who are non-native speakers of English (which may be up to 21% of the population of the United States). Project outcomes will initially be deployed and evaluated in higher education, both as a teaching tool for delivering STEM lectures and as a training tool for students in the sciences to learn how to give more effective oral presentations (which may inspire future generations to pursue careers in the sciences).
This research will be based on a theory of human-agent collaboration, in which the human presenter is monitored using real-time speech and gesture recognition, audience feedback is tracked, and the agent, presentation media, and human presenter (cued via an intelligent wearable teleprompter) are all dynamically choreographed to maximize audience engagement, communication, and persuasion. The project will make fundamental, theoretical contributions to models of real-time human-agent collaboration and communication. It will explore how humans and agents can work together to communicate effectively with a heterogeneous audience using speech, gesture, and a variety of presentation media, amplifying the abilities of scientist-orators who would otherwise be "flying solo."
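To make the choreography loop concrete, the following is a minimal sketch in Python of how such a system might fuse monitored presenter and audience signals and fan them out as coordinated cues to the agent, the presentation media, and the wearable teleprompter. All component and event names here are hypothetical illustrations; the abstract does not specify an architecture or API, and a real system would drive these decisions from the project's collaboration model rather than the simple rules shown.

    from dataclasses import dataclass
    from typing import Iterable

    # Illustrative event types for the two monitored input streams.
    @dataclass
    class PresenterEvent:
        kind: str          # e.g. "utterance_end", "pointing_gesture", "long_pause"
        confidence: float  # recognizer confidence, 0.0..1.0

    @dataclass
    class AudienceEvent:
        engagement: float  # estimated audience engagement, 0.0..1.0

    @dataclass
    class Cue:
        target: str        # "agent", "media", or "teleprompter"
        action: str

    def choreograph(presenter: PresenterEvent, audience: AudienceEvent) -> list[Cue]:
        """Map the latest monitored signals to coordinated cues across
        the three controllable channels (hypothetical decision rules)."""
        cues: list[Cue] = []
        if presenter.kind == "utterance_end" and presenter.confidence > 0.8:
            # Hand the floor to the agent at a recognized turn boundary,
            # and prompt the human with the next talking point.
            cues.append(Cue("agent", "take_turn_and_elaborate"))
            cues.append(Cue("teleprompter", "show_next_talking_point"))
        if presenter.kind == "pointing_gesture":
            cues.append(Cue("media", "highlight_pointed_region"))
        if audience.engagement < 0.4:
            # Low engagement: have the agent interject with an example.
            cues.append(Cue("agent", "interject_example"))
            cues.append(Cue("media", "advance_to_illustration"))
        return cues

    if __name__ == "__main__":
        # Simulated stream of paired presenter/audience observations.
        stream: Iterable[tuple[PresenterEvent, AudienceEvent]] = [
            (PresenterEvent("utterance_end", 0.9), AudienceEvent(0.7)),
            (PresenterEvent("pointing_gesture", 0.95), AudienceEvent(0.3)),
        ]
        for p, a in stream:
            for cue in choreograph(p, a):
                print(f"{cue.target}: {cue.action}")

Even this toy loop makes visible the core coordination problem the project addresses: every decision must respect turn-taking with a live human speaker, so cues to the agent, the media, and the teleprompter have to be issued together rather than by three independent controllers.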
The work will advance both artificial intelligence and computational linguistics by extending dialogue systems to encompass mixed-initiative, multi-party conversations among co-presenters and their audience. It will advance the state of the art in virtual agents by improving the dynamic generation of hand gestures, prosody, and proxemics for effective public speaking and turn-taking. It will also contribute to the field of human-computer interaction by developing new methods for human presenters to interact with autonomous co-presenter agents and their presentation media, including approaches to cueing human presenters effectively through wearable user interfaces.