Visualizing points of disagreement to help groups reach consensus
Groups are often charged with reaching consensus on important decisions. For complex decisions with multiple criteria, verbal and written discourse alone may impede groups from pinpointing disagreements. To help support consensus building, we introduce ConsensUs, a novel visualization tool that highlights disagreement by asking group members to quantify their subjective opinions across multiple criteria. Our paper was accepted at CSCW 2017 and ACM TSC. I will also present our paper at CHI 2018!
Jun. 2016-Sep. 2016
Design Lab @ UC San Diego
Weichen Liu | Jacob Browne | Ming Yang | Steven Dow
Contextual Inquiry, Focus Group, Cognitive Walkthrough, Survey, Usability Testing
In the literature review, we wanted to find out why people struggle to reach consensus and what existing methods support the decision-making process. Our findings are as follows:
Some members avoid voicing dissent to help maintain internal social relationships in work groups. People may also influence one another so much that they ignore their own concerns and focus only on the arguments others state publicly.
Individual members may be subject to an anchoring effect where people are attached to their initial opinion and unmoved by a group’s opinion. Individuals are also known to exhibit confirmation bias where they resist any disconfirming evidence to their initial preferences.
Failing to surface disagreements may lead group members to overestimate the extent to which their opinions align with others.
Assuming conflicts do surface, groups may still choose “win-lose” solutions that fail to address the concerns of all members.
Anchoring in group decision processes can be reduced if a group's preferences are revealed only after all group members have articulated individual preferences. One paper proposed a consensus-building model: a successful consensus-building process establishes criteria to compare alternatives, externalizes agreement and disagreement to help scaffold the discussion, ensures equal participation, and maintains a holistic view of the group's opinion.
In the competitive analysis, we found that numerous technologies have emerged to support multi-criteria comparisons, and that a number of tools attempt to create a structured decision process in which visualization plays a prominent role.
We conducted individual interviews and focus groups in which we asked participants to accomplish tasks with different visualization designs. We also asked a visualization expert to evaluate our prototype.
We created an even chance for each candidate to be chosen as the best, in order to examine how people are influenced by others' opinions. We created three fictitious candidate profiles. In the individual voting stage, we wanted users' choices of the best candidate to be evenly distributed among the three candidates, so that any change of opinion about the best candidate could be attributed to others' opinions rather than to the profiles themselves. We asked users to rate the candidates in individual and group interviews, and we also ran a survey to check the distribution of best-candidate choices.
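One way to verify that pilot choices are evenly distributed is a chi-square goodness-of-fit check against a uniform distribution. The sketch below is a hypothetical illustration of that check; the vote counts are made up, not the study's actual data.

```python
def chi_square_uniform(counts):
    """Chi-square goodness-of-fit statistic against a uniform distribution."""
    total = sum(counts)
    expected = total / len(counts)
    return sum((c - expected) ** 2 / expected for c in counts)

# Illustrative "best candidate" votes for the three fictitious profiles.
counts = [29, 31, 27]
stat = chi_square_uniform(counts)

# The critical value for df = 2 at alpha = 0.05 is 5.991; a statistic below
# it means we cannot reject the hypothesis that choices are evenly spread.
print(stat < 5.991)  # True for these illustrative counts
```

If the statistic exceeded the critical value, the profiles themselves would be biasing the vote and would need rebalancing before the group stage.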
We embedded hidden information in the profiles to test information sharing in group discussion. In the real world, even when people look at the same material, they focus on different things and may omit some facts while emphasizing others. To create more contrast for testing how people share information in the group discussion phase, we handed each participant a slightly different profile of the three candidates. We then noted whether they mentioned the hidden information during the group discussion, and assessed whether it influenced their decision by checking if they changed their opinion after the discussion and whether they cited the corresponding hidden information as the rationale for changing their rating.
We decided to conduct the individual experiment online for better control. To stimulate the exchange of opinions in the group discussion, we gathered voting results and rationales from Mechanical Turk workers as confederates and used them as representatives of group members.
Our evaluation focused on the relative benefits of disagreement highlighting as a supplement to written arguments. We conducted a between-subjects experiment with 87 participants taking part in a mock admissions committee for an engineering program.
We asked eight multiple-choice questions, all with objective answers that could be calculated from the ratings of the participant and the confederate committee (see Fig. 2). After answering the objective questions, participants answered five self-assessment questions on their ability to reason about the committee and to identify disagreements (see Fig. 1).
We measured how participants changed their ratings from before to after viewing the group's opinions (see Fig. 3).
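The rating-change measure can be sketched as a per-participant comparison of criterion ratings before and after exposure to the group's opinions. The criterion names and values below are illustrative assumptions, not the study's actual data or coding scheme.

```python
def mean_abs_change(before, after):
    """Mean absolute rating change across criteria for one participant."""
    assert before.keys() == after.keys()
    return sum(abs(after[c] - before[c]) for c in before) / len(before)

# Illustrative per-criterion ratings (1-5 scale) for one participant.
before = {"GPA": 4, "essay": 2, "experience": 5}
after = {"GPA": 3, "essay": 4, "experience": 5}
print(mean_abs_change(before, after))  # 1.0
```

Averaging this quantity per condition gives one way to compare how strongly each interface nudged participants toward the group's opinion.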
After the group stage, participants were asked an open-ended question to articulate the reasons for their decision. We analyzed the percentage of words devoted to concrete rationale versus statements about their process or strategy for making a decision. On average, 51% of the words in each participant's rationale in the Arguments-only condition could be categorized as concrete rationale, compared to 28% in the Visualization-only condition and 29% in the Both condition.