Date: Thursday 01/12/2022, 3:30 pm (GMT+2/Warsaw time)
Join online: zoom link
Join live: Room 3099, Wydział Geologii UW (Faculty of Geology, University of Warsaw), ul. Żwirki i Wigury 93
Prof. Katharina Rohlfing is a renowned researcher in the domain of language development and human-robot interaction. She received her MA in Linguistics, Philosophy, and Media Studies from the University of Paderborn and her PhD in Linguistics from Bielefeld University, Germany, and worked as a DAAD and DFG Fellow at San Diego State University, the University of Chicago, and Northwestern University. From 2008 to 2015, she headed the Emergentist Semantics Group within the Center of Excellence Cognitive Interaction Technology at Bielefeld University. She is currently Professor of Psycholinguistics at Paderborn University, where she heads the SprachSpielLabor and serves as Spokesperson and Project Leader of the Transregional Collaborative Research Centre 318 "Constructing Explainability".
Abstract
Technological advancements in machine learning that affect people's lives on the one hand, and regulatory initiatives fostering transparency in algorithmic decision making on the other, are driving a recent surge of interest in explainable AI (XAI). Explainability is discussed as a solution to sociotechnical challenges such as intelligent software producing incomprehensible decisions, or big data enabling fast learning while becoming too complex for its results to be fully comprehended and judged. Explainable AI is expected to provide more insight into the functions, decisions, and usefulness of algorithms.
If an explanation is successful, it results in understanding. Current XAI research centers on one-way interaction, from which solutions for achieving understanding are derived. In this presentation, I will point to an important resource for achieving understanding that has been overlooked so far: interaction with the addressee. The A05 project of TRR 318 offers insights into how cognitive processes should be considered in designing interaction with the addressee.
To facilitate discussion, you can read the following paper: Explanation as a Social Practice: Toward a Conceptual Framework for the Social Design of AI Systems.
This talk is part of the Traincrease Lecture Series (D4.2).
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 952324.
