The success of ChatGPT showcased the potential of large language corpora. However, ChatGPT remains a faceless speaker, without gaze, mouth movements, or gestures. Is linguistic information alone sufficient for creating cognitively plausible AI? To address this question, we need to understand how humans process language. This event will explore multimodal language processing from an interdisciplinary perspective.
Time: 18:00, Wednesday, April 26, Venue: Library Salon 501
Speaker: Ye Zhang, Host: Yue Qi (Institute of Humanities)
Dr. Ye Zhang is a post-doctoral researcher in the Department of Experimental Psychology at UCL, where she also received her PhD in Cognitive Neuroscience. Her research interests include multimodal language comprehension and its underlying brain mechanisms.