Public participation and algorithmic policy tools

By Antonio Ballesteros

The past couple of months have increased the need for accurate and transparent tools that allow policymakers to track and forecast the behaviour of the pandemic we are going through. For instance, different groups of researchers in the UK have used machine learning (ML) algorithms to forecast the type of treatment a person should receive based on the first days of infection [see: 1, 2, 3]. In a broad sense, ML forecasting tools refer to a set of algorithms designed to process data and find patterns. Part of the authority vested in quantitative tools (QTs), such as forecasts, rests on the idea that, as a form of transparency, they can be replicated. However, replicability, understood as obtaining the same results as the original experiment, might not be achievable. From a social perspective, a lack of replicability would limit the ability of those affected by a forecast to challenge it. In the case of QTs, at least three elements can stand in the way of replicability:

  • The mundane, everyday processes that shape the production of these tools (see the sketch after this list);
  • The amount of tacit knowledge among researchers that is never transferred; and,
  • Infrastructure inequalities that mean not everyone has access to the required technology.
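To make the first point concrete, here is a minimal sketch, assuming a generic scikit-learn workflow (this is not ViEWS code; the data, labels, and function names are hypothetical), of how two runs of the same published training script can disagree when something as mundane as a random seed goes unrecorded:

```python
# Minimal sketch: replication can fail over mundane details such as an
# unrecorded random seed. Illustrative only; not ViEWS code, data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                        # synthetic features
y = (X[:, 0] + rng.normal(size=500) > 0).astype(int)  # synthetic binary label

def train_and_forecast():
    # The "published" instructions: fit a forest, return probability forecasts.
    # No random_state is fixed, as often happens in practice.
    model = RandomForestClassifier(n_estimators=50)
    model.fit(X, y)
    return model.predict_proba(X[:5])[:, 1]

print(train_and_forecast())   # run A
print(train_and_forecast())   # run B: same instructions, slightly different numbers
```

Fixing the seed removes this particular gap, but it does nothing about the tacit choices (library versions, preprocessing order, hardware) that the second and third points describe.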

My research explores these issues through participant observation during the construction of the Violence Early Warning System (ViEWS). This tool, produced at the Uppsala University Department of Peace and Conflict Research (PCR), aims to forecast the probability of political conflict within the next 36 months, and of climate-related conflict within the next 100 years. Its producers, drawing on broader discussions including SDG 16 and the IPCC, expect to provide an accurate tool for interventions. The expected users of ViEWS include the EU Commission, Western governments and the World Bank. Aware that its output could impact people’s lives, ViEWS researchers claim to provide a replicable tool as a matter of transparency.

For some at ViEWS, a QT is understood as replicable when others can intuitively understand the way its algorithms deal with data. This implies that it is not enough to provide the instructions; there also needs to be a conscious understanding of the way the tool operates. However, these same researchers acknowledge that the degree of tacit knowledge among individuals is so high that, even within the team, the sudden absence of particular members would make it impossible to rebuild the project from scratch. For instance, a ViEWS researcher told me how, even for them as a team, it can be impossible to replicate the tool:

“I don’t think that anyone would be able to replicate it from scratch. Like in the variables, they would run like copy-paste but trying to figure out what each part does, that might be difficult.”

As for the infrastructure required to produce ViEWS, the researchers acknowledge that most countries do not possess it. Therefore, most of the countries used as case studies to prove a methodology might not be capable of challenging a measurement. This imbalance between who can measure and who can challenge perpetuates power relations, now reinforced through unreplicable algorithms. Another ViEWS researcher told me that most countries do not possess the supercomputers the team needs to construct ViEWS; only rich countries could challenge the results by inspecting the algorithms behind a policy tool.

In sum, there is an urgent need to start discussing the social consequences that the limits of replicability in ML and algorithmic tools could have. While those producing QTs claim that any wrong forecast would be discarded through an existing ensemble of other tools used by policymakers (see the sketch below), this does not increase the ability of local communities to challenge the algorithms. Acknowledging that these gaps might be impossible to close in the short term, producers of QTs need to involve local communities [4] during the production of their tools. For instance, there is a need to translate the production of algorithms into non-technical terms.
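As a minimal sketch of that ensemble claim (the numbers and model names here are hypothetical, not the actual tools policymakers use), averaging several forecasts can dampen one bad forecast without making any individual model easier to inspect:

```python
# Minimal sketch of the ensemble argument: averaging dilutes one bad
# forecast, but no underlying model becomes more inspectable.
# All numbers and names are hypothetical.
forecasts = {
    "model_a": 0.20,   # P(conflict) from three independent tools
    "model_b": 0.25,
    "model_c": 0.90,   # an outlying, possibly wrong, forecast
}

ensemble = sum(forecasts.values()) / len(forecasts)
print(f"ensemble forecast: {ensemble:.2f}")   # 0.45: the outlier is dampened

# Challenging the outlier still requires access to model_c's code, data,
# and infrastructure, which is precisely what affected communities lack.
```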

Antonio Ballesteros is a fourth-year PhD candidate in Science and Technology Studies (STS). His research focuses on the role that everyday, mundane events play during the construction of quantitative tools for environmental policy (rankings and machine learning).

 

  1. Knight, S.R., et al., Risk stratification of patients admitted to hospital with covid-19 using the ISARIC WHO Clinical Characterisation Protocol: development and validation of the 4C Mortality Score. BMJ, 2020. 370: p. m3339.
  2. Menni, C., et al., Real-time tracking of self-reported symptoms to predict potential COVID-19. Nat Med, 2020. 26(7): p. 1037-1040.
  3. The Guardian, Is it possible to predict how sick someone could get from Covid-19?, in Science Weekly, N. Davis, Editor. 2020, The Guardian.
  4. Visvanathan, S., Knowledge, justice and democracy, in Science and Citizens, M. Leach, I. Scoones, and B. Wynne, Editors. 2005, Zed Books: London.

 

