Explainability helps people understand and interact with the systems that make decisions and inferences about them. Explanations should go beyond the moment of a decision: explainability works best when information about AI is incorporated into the entire user journey and AI literacy is built continuously throughout a person's life. We share resources that AI practitioners in both industry and academia can use to think more broadly about what explanations can look like across products, and about ways to give people a solid foundation for understanding AI systems and decisions.
Allison Woodruff is a human-computer interaction researcher who focuses on societal issues such as privacy, AI literacy, algorithmic fairness, and sustainability. She has more than 20 years of experience as a corporate researcher and a strong record of translating her interdisciplinary research into significant product and organizational change. Allison is currently a user experience researcher at Google; before joining Google, she worked at the Palo Alto Research Center (PARC) and Intel Labs Berkeley. She is a co-inventor on over 30 issued patents and has co-authored over 70 papers, and she has conducted research in a wide range of settings, such as green homes, low-income neighborhoods, religious environments, museums, amusement parks, traditional work environments, and street sweeper maintenance yards. Allison is a member of the SIGCHI Academy and has served on program committees for AIES, AVI, CHI, CSCW, DIS, SOUPS, UbiComp, UIST, WWW, and more. She received her PhD in Computer Science from the University of California, Berkeley.