Recent advances in large language models (LLMs) mark a significant milestone in natural language processing. Yet, as we venture into the diverse and intricate terrain of the open world, spanning varied topics, domains, and modalities, these models face formidable challenges, chief among them hallucination, grounded reasoning across modalities, and high computational demands. My research aims to tackle these challenges, focusing on three broad themes: (1) knowledge acquisition and understanding in the dynamic open world; (2) generalizable and efficient intelligence; and (3) enhancing the human experience of interacting with these technologies. In this talk, I will focus on the pursuit of generalizable and efficient intelligence. I will first introduce our latest advances in endowing models with a broader cognitive scope, evolving from answering simple questions to complex ones, and from understanding text alone to multiple modalities. I will also highlight our recent research on task interference, a crucial but often overlooked issue in parameter-efficient tuning. The talk will conclude with a discussion of our future directions under these three themes.
Lifu Huang is an Assistant Professor in the Department of Computer Science at Virginia Tech. He obtained his PhD in Computer Science from the University of Illinois at Urbana-Champaign in 2020. His research interests span natural language processing, multimodal learning, and machine learning. His work has been recognized with an Outstanding Paper Award at ACL 2023 and a Best Paper Award Honorable Mention at SIGIR 2023. He received an NSF CAREER Award in 2023 and an Amazon Research Award in 2021.