Implications of the Crowd in Collaborative Visual Analytics: Social, Cognitive, and Cultural Influences

August 5, 2011
11am - 11:50am
196 Boston Ave, Room 4014

Abstract

Social visualization systems like IBM's ManyEyes have emerged to support collective-intelligence-driven analysis of a growing influx of open data. By supporting the sharing of knowledge and skills, such sites pave the way for interested "citizen analysts" to collectively generate insights into socially relevant data. Yet much remains unknown about how the act of interpreting a visualization changes in a group setting composed of diverse online users. I will discuss evidence from a large online experiment demonstrating how social proof, or the reliance on information about prior users' interactions with a graph, can bias a current user's interpretation. Combined with natural human biases in graph perception, social proof effects can lead to information cascades in which an initial biased response propagates across the group. I will describe potential design measures for counteracting the negative effects of these dynamics. These include strategies informed by statistical practices for improving the validity of an estimate, such as bootstrapping, as well as work in educational psychology showing how manipulating the difficulty of a graph-based learning task can lead to better understanding.
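As a loose illustration of the bootstrapping idea mentioned above (a sketch I am adding for context, not material from the talk), the short Python snippet below resamples a set of entirely hypothetical crowd readings of a chart value, with replacement, to estimate a confidence interval around the crowd's mean interpretation. All names and data here are illustrative assumptions.

    import random
    import statistics

    # Hypothetical numeric interpretations of a chart value gathered
    # from a crowd of users (illustrative data only).
    crowd_readings = [42.0, 45.5, 41.0, 60.0, 43.5, 44.0, 47.0, 42.5]

    def bootstrap_ci(values, n_resamples=10_000, alpha=0.05):
        """Estimate a (1 - alpha) confidence interval for the mean
        by resampling the observed values with replacement."""
        means = []
        for _ in range(n_resamples):
            resample = [random.choice(values) for _ in values]
            means.append(statistics.mean(resample))
        means.sort()
        lo = means[int((alpha / 2) * n_resamples)]
        hi = means[int((1 - alpha / 2) * n_resamples) - 1]
        return lo, hi

    low, high = bootstrap_ci(crowd_readings)
    print(f"Crowd mean: {statistics.mean(crowd_readings):.1f}")
    print(f"95% bootstrap CI for the mean: [{low:.1f}, {high:.1f}]")

A wide interval from a sketch like this would signal that the apparent collective estimate rests on unstable, possibly cascade-driven responses rather than independent readings.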

Bio: Jessica Hullman is a Ph.D. student at the University of Michigan School of Information. Her research explores the challenges and opportunities presented by online collaborative visual analytics. How does social information about what others have seen in a graph affect subsequent users' interpretations, and how can such information be used to yield the most accurate collective insight into data? How do the diverse individuals who gather on such sites represent unique collections of skills, limitations, and biases in their viewing, and how can system designers support individual needs in light of this diversity? Her objective is to generate insight into how these systems can be designed to 1) prompt more accurate interpretations at the individual level, and/or 2) better extract a high-quality signal (e.g., valuable collective insight) from the potentially noisy signals that use of such systems produces.