Online Workshop: Formal Models of Deliberation and Polarization

VU Amsterdam, 6-7 April 2021, on Zoom

In recent years, the public sphere seems to have become increasingly polarized. The political climate in the US and in European countries such as the UK grows ever more divided and extreme, with parties from opposite sides of the spectrum apparently less and less capable of constructive debate and compromise. Other cases of polarization include public opinion on climate change and vaccination. Social media is thought to play an important role in accelerating this polarization by creating epistemic bubbles in which people are confronted only with information that is in line with their existing beliefs.

These examples cast doubt on whether deliberation brings about epistemic benefits such as consensus or correctness, and have led some to suggest that deliberation should be qualified or even abandoned. A growing literature addresses these issues using formal models of deliberation. The workshop will focus on models of argumentation and deliberation that can help identify the conditions under which deliberation yields epistemic benefits, and those under which it fails to do so.

The workshop is open to all. To register and obtain Zoom details, please send an email to h.w.a.duijf [at] vu.nl with the subject ‘Workshop Registration’.

Speakers

  1. Gregor Betz (Karlsruhe Institute of Technology, Germany)

  2. Hein Duijf (VU Amsterdam)

  3. Catarina Dutilh Novaes (VU Amsterdam)

  4. Davide Grossi (University of Groningen, The Netherlands)

  5. Ulrike Hahn (Birkbeck, University of London, United Kingdom)

  6. Leah Henderson (University of Groningen, The Netherlands)

  7. Dominik Klein (Utrecht University, The Netherlands)

  8. Erik Olsson (Lund University, Sweden)

  9. Carlo Proietti (Institute for Computational Linguistics, National Research Council of Italy (CNR-ILC))

  10. Dunja Šešelja (Eindhoven University of Technology, The Netherlands)

  11. Emily Sullivan (Eindhoven University of Technology, The Netherlands)

  12. Alice Toniolo (University of St Andrews, UK) (TBC)

Abstracts

Gregor Betz: Beyond Formal Simulations of (Argumentative) Belief Dynamics

 

I revisit different formal models of argumentative belief dynamics (balancing reason accounts, structured argumentation) and discuss the prospect of natural language debate simulations in view of recent advances in computational linguistics.

 


Hein Duijf: Does majority voting favour the majority?


It would be surprising if majority voting in a community failed to select policies that are in the interest of the majority of that community. Yet people are often influenced by others when reaching their voting decisions. To explore the impact of social influence on majority voting, I use agent-based models in which agents are situated on an influence network. First, I compare segregated with random influence networks and demonstrate that in these cases majority voting is equally likely to select policies that are in the interest of the majority. Second, and surprisingly, some factors play only a minor role in determining the outcome of the majority vote: the relative sizes of the majority and minority, the total influence of the majority and the minority, and the density of the network. In contrast, other factors play a major role: the competences of the minority and the majority, and the proportional influence of the minority versus the majority. The moral is that social influence and deliberation can have unexpected (and perhaps even undesirable) consequences if certain opinions are amplified disproportionately more than others.

 


Catarina Dutilh Novaes: Argumentation, polarization, and a three-tiered model of epistemic exchange

 

Argumentation is often contrasted with testimony: in cases of testimony, an epistemic agent (presumably) primarily evaluates the trustworthiness of the source of information (the informant), whereas in argumentation there is (presumably) primarily engagement with the content communicated. I have argued, however (Dutilh Novaes 2020), that trust and trustworthiness in fact play an important role in argumentation too. From this analysis emerged a three-tiered model of epistemic exchange, inspired by the framework of social exchange theory (an influential framework in sociology and social psychology). According to this model, an instance of epistemic exchange takes place in three stages: (1) a relation of attention is established between the parties; (2) a relation of sufficient trust is established between the parties; (3) the parties can finally engage in fruitful epistemic exchange. In this talk, I present the model in detail and discuss some of its applications, in particular to the phenomenon of polarization.

 


Davide Grossi: Deliberative Consensus

In this talk I will step a bit outside the standard formal argumentation models to address the topic of democratic deliberation. The talk focuses on a setting in which a community wishes to identify a strongly supported proposal from a large space of alternatives, in order to change the status quo. I will describe a deliberation process in which agents dynamically form coalitions around proposals that they prefer over the status quo. Using this model I will show how the properties of the underlying abstract space of proposals and the ways in which agents can form coalitions affect the success of deliberation in creating consensus. We show that, as the complexity of the proposal space increases, more complex forms of coalition formation are required in order to guarantee success. Intuitively, this seems to suggest that complex deliberative spaces require more sophisticated coalition formation abilities on the side of the agents. The model provides theoretical foundations for the analysis of deliberative processes in systems for democratic deliberation support, such as Polis or LiquidFeedback.


This is joint work with Edith Elkind (University of Oxford), Ehud Shapiro (Weizmann Institute of Science) and Nimrod Talmon (Ben-Gurion University).

Ulrike Hahn: What fuels polarization in social networks?


Polarization has increasingly been viewed as a fundamental problem for societies across the world. On the one hand, the phenomenon seems comparatively well understood, in the sense that it has been a significant topic of research within the social sciences since the 1960s, including experimental evidence for a range of drivers of polarization. At the same time, mapping such mechanisms onto real-world developments remains difficult. Computational modelling provides a further, important source of evidence here. The talk outlines recent results demonstrating that a number of key factors play a causal role in generating polarization in artificial societies of rational agents, and discusses implications for real-world developments.

 


Leah Henderson: The role of source reliability in belief polarisation


Psychological studies show that the beliefs of two agents in a hypothesis can diverge even if both agents receive the same evidence. This phenomenon of belief polarisation is often explained by invoking biased assimilation of evidence, where the agents' prior views about the hypothesis affect the way they process the evidence. We suggest, using a Bayesian model, that even if such influence is excluded, belief polarisation can still arise by another mechanism. This alternative mechanism involves differential weighting of the evidence arising when agents have different initial views about the reliability of their sources of evidence. We provide a systematic exploration of the conditions for belief polarisation in Bayesian models which incorporate opinions about source reliability, and we discuss some implications of our findings for the psychological literature.

(Joint work with Alex Gebharter)

 


Dominik Klein: On the epistemic quality of democratic and autocratic decision-making procedures

 

There is ample evidence that democratic systems of government outperform their autocratic counterparts in terms of public goods provision. Classically, this observation is explained by variations in the incentive structures between democratic and autocratic regime types (cf. Acemoglu & Robinson 2005; de Mesquita et al. 2005; Olson 2000). This paper proposes a complementary explanation for these performance differences between government types. Building on the debate on the epistemic justification of democratic decision-making procedures (cf. Estlund 2000, Landemore 2013, Goodin and Spiekermann 2018), we analyze whether democratic regimes have an institutionally determined epistemic advantage in assessing the optimal level of public goods supply.

We address this question using a simulation model in which actors seek to determine the optimal level of public goods supply and deliberate about it. Building on these individual assessments, we compare two aggregative mechanisms, corresponding to democratic and autocratic decision-making, with respect to the accuracy of the ensuing collective estimate. We present three main findings. First, democratic decision-making outperforms its autocratic counterpart in terms of judgement adequacy. Second, democratic decision-making processes fare best when individual citizens are not impartial but employ mildly biased estimators that slightly overemphasize their own needs in assessing and communicating the optimal level of public goods supply. Finally, in various settings, restrictions on deliberation time can have a positive impact on the epistemic accuracy of the collective decision.


(Joint work with Johannes Marx)

 


Erik J. Olsson: Why Bayesian Agents Polarize


A number of studies have concluded that polarization may be rational at the individual level, in the sense that even ideal Bayesian agents can end up seriously divided on an issue given exactly the same evidence. In this spirit, Pallavicini, Hallsson and Kappel (2018) demonstrate that group polarization is a very robust phenomenon in the so-called Laputa model of Bayesian social network deliberation. However, in their view polarization arises because Laputa fails to take higher-order information into account in a particular way, making the model incapable of capturing full rationality. I show that taking higher-order information into account in the way proposed by Pallavicini et al. fails to block polarization. Rather, what drives polarization is expectation-based updating in combination with a modelling of trust that recognizes the possibility that a source is systematically biased. Finally, I show that polarization may be rational in a further sense, even at the group level: group deliberations that lead to polarization can be, and often are, associated with increased epistemic value at the group level. The upshot is a strengthened case for the rationality of polarization.


Carlo Proietti: Arguments, epistemic attitudes and opinion change


Social dynamics of opinion change in groups are fuelled by informational influence among individuals, most often in argumentative form. A long tradition of experiments in social psychology provides evidence that group polarization emerges due to the circulation of novel and persuasive arguments on the debated topic. Interestingly, informational influence by exchange of arguments has two sides, for it crucially depends on the attitude of the “sender” (e.g. more or less strategic) and of the “receiver” (more or less vigilant). Assessing the impact of epistemic attitudes on opinion change is a challenging task, for which formal methods are desirable. Combining recent work in abstract argumentation and in dynamic epistemic logic allows us to model goals, beliefs and the effect of different policies of information exchange in a joint framework, and is a crucial step in this direction.

 

(Joint work with Antonio Yuste-Ginel)

 


Dunja Šešelja: Modeling bias and deception with an argumentation-based model of scientific inquiry

 

The problem of bias and deception in science has increasingly gained the attention of scholars employing agent-based models (ABMs) to study mechanisms that produce, or mitigate the risk of, biased or deceptive behavior. In this paper we study the impact of biased and deceptive agents on the efficiency of scientific inquiry by employing a model structurally different from those previously used to this end, namely the argumentation-based ABM (ArgABM). Our study focuses on the question of whether certain factors underlying scientific inquiry (such as the communication structure and the procedures via which scientists choose which theories to pursue) serve as mitigating strategies that reduce the harmful influence of bias and deception. Our results suggest that highly connected communities tend to perform better than less connected ones, and that some types of theory-choice procedures make the community more robust to the harmful influence of bias and deception than others.


(Based on joint work with AnneMarie Borg, Daniel Frey and Christian Straßer).

 


Emily Sullivan: The evolution of vaccine discourse on Twitter during the first six months of COVID-19

Delft University of Technology, The Netherlands

Trust in vaccination is high in many parts of the world, though it seems to be eroding in countries on multiple continents. When people lose trust in medical experts and career government officials tasked with protecting public health, they tend to turn to other sources, including family, friends, media, internet search and recommender systems, and social media. To shed light on the evolution of social media discourse about vaccines in the context of the pandemic, we conducted an observational study of approximately six months of Twitter discourse: 75 days prior to the World Health Organization’s 11 March 2020 pandemic declaration through 75 days after the declaration. We find increased polarization and a decentralization of trust post-pandemic.


(Joint work with Ignacio Ojea Quintana, Colin Klein, Marc Cheong, Ritsaart Reimann & Mark Alfano)

Alice Toniolo: Natural and computational models of deliberation dialogue


Computational models of deliberation are fundamental for developing autonomous systems that support human practical reasoning and give a user, or an agent within a system, the ability to pose questions and reply to arguments for a proposed plan of action. Deliberation systems are also beneficial for analysing features of natural deliberation. In this work, we analyse key features of agent deliberation, such as how the initial issue is posed, how it is revised during the dialogue, and the effect of information sharing on the selection of a course of action. Criteria for the success of a deliberation dialogue will be examined, highlighting that a deliberative dialogue can be successful in determining what to do, but also in revealing the arguments and positions of different sides without reaching a decision on how to act. There are, however, various aspects yet to be introduced into computational deliberation to reflect the complexity and richness of different types of natural deliberation, and a deliberation typology will be presented for discussion.

Program

April 6th

13.00–13.40  Catarina Dutilh Novaes
13.45–14.25  Ulrike Hahn
14.30–15.10  Emily Sullivan
Break (30 min)
15.40–16.20  Gregor Betz
16.25–17.05  Carlo Proietti
17.10–17.50  Alice Toniolo

April 7th

13.00–13.40  Erik Olsson
13.45–14.25  Leah Henderson
14.30–15.10  Dunja Šešelja
Break (30 min)
15.40–16.20  Dominik Klein
16.25–17.05  Davide Grossi
17.10–17.50  Hein Duijf

© 2020 Social Epistemology of Argumentation.
