PhD Defense: Designing for the Human in the Loop: Transparency and Control in Interactive Machine Learning

Talk
Alison Renner
Time: 12.19.2019, 09:00 to 11:00
Location: IRB 4105

Interactive machine learning techniques inject domain expertise to improve or adapt models. However, a focus on how best to adapt underlying algorithms to optimize system performance comes at the expense of user experience. This dissertation advances our understanding of how to design for human-machine collaboration, where the goals are improving both user experience and system performance, by studying end users' experiences, perceptions, and behaviors with interactive machine learning systems. In particular, we focus on two critical aspects of interactive machine learning: how systems expose or explain themselves to users (transparency) and how users provide feedback to or guide systems (control).

We conducted four studies to explore the effects of transparency, control, and the interaction between the two on user experience and system performance. We first explored control and transparency in supervised machine learning, where most prior research has focused, evaluating the effects of specific feedback and explanation mechanisms on user experience and feedback quality with a simple text classifier. Explanations and feedback together resulted in the highest user satisfaction and system performance, and users' satisfaction decreased when they were given explanations without a means of providing feedback.

We then shifted focus to unsupervised machine learning, in particular topic models, to explore transparency and control in the context of more complex models and subjective tasks. First, we developed a novel visualization technique for topic transparency and compared it against common topic representations for interpretability. While a simple word-list visualization supported users in quickly understanding topics, our visualization exposed phrases that other representations obscured.

Next, we developed a novel "human-centered" interactive topic modeling system that was both transparent and controllable, based on the optimal topic representations and users' desired control mechanisms identified in our prior work. A formative study of user experience with this system identified two aspects of control exposed by transparency: adherence, or whether models incorporate user feedback as expected, and stability, or whether other, unexpected model updates occur.

Finally, we further studied adherence and stability by comparing user experience with our interactive topic modeling system across three algorithm variants. These variants differed in how user input was incorporated into the model, which resulted in differences in how closely the model adhered to user input, the model's stability between updates, update latency, and the coherence of the generated topics. Participants disliked slow updates most, followed by lack of adherence, but across modeling approaches participants differed only in whether they noticed adherence. In both the formative and comparative interactive topic modeling studies, participants were polarized by instability: some liked it when it resulted in model improvements, whereas others preferred more control. This dissertation contributes to our understanding of how end users comprehend and interact with machine learning models and provides guidelines for designing systems for the "human in the loop."
Examining Committee:

Chair: Dr. Jordan Boyd-Graber
Co-Chair: Dr. Leah Findlater
Dean's Representative: Dr. Naomi Feldman
Members: Dr. Mihai Pop, Dr. Hernisa Kacorri