The net migration target may have failed, but it has shifted the way we debate immigration


Figures released by the ONS today suggest that net migration to the UK stands at an all-time high of 336,000. The UK government’s pledge to reduce net migration ‘from the hundreds of thousands to the tens of thousands’ seems further than ever from being achieved.

So why hasn’t the government killed off this compromising target? One obvious answer is the political risk of abandoning a target once it has been set. Cameron and Theresa May are fully aware that the target is in many ways a liability. But they have calculated that dropping the target at this stage will send out an even more damaging political message: that their commitment to reducing immigration has been diluted.

But while the target may have been a failure, it has had significant effects on the way we talk about immigration. By setting a single, high-profile, quantitative target, the government has irreversibly shifted how we define and assess policy on immigration.

In our ESRC project on the Politics of Monitoring, we distinguish two main effects of this type of quantitative performance measurement. The first concerns how we categorise and count immigrants. The exercise of counting net migration implies suppressing important distinctions between groups – whether students, intra-company transferees, European Economic Area nationals, family migrants or refugees – and counting them as equivalent units. This glosses over important differences in the reasons behind migration, for example whether it is economic, family-related, linked to study, or fleeing persecution. And it overlooks the varied impacts of immigrants on the UK economy and society, as well as differences across regions of the UK.

Importantly, this re-classification of which immigrants ‘count’ has shifted political attention, bringing previously unproblematic – or even unobserved – groups of immigrants into the political spotlight. Few people were anxious about foreign students or high-skilled labour migrants before the target was introduced. Now they are all part of a problem to be reduced. What’s more, these new classifications are now embedded in the way statistics are produced and disseminated, and the way in which the public expects to appraise the government and hold it to account.

The second effect concerns the role of measurement. It has now become normal to frame immigration issues using statistics, and especially in terms of overall flows. The use of numbers provides a particularly clear and authoritative way of expressing goals, one that appears more precise and objective than qualitative descriptions. It promises an especially rigorous way of holding government to account.

Opposition parties have played into this, frequently pegging their critique to the government’s failure to meet its own target. This constant invocation of the target – even by its detractors – only reinforces the idea that such targets are valid and appropriate ways of framing policy. Meanwhile, Labour has struggled to articulate a clear and compelling message on immigration, partly because of its (justifiable) refusal to adopt a clear and simple target. The fact of the matter is, targets work very well as political messages.

We may dislike targets. We may find them simplifying, distorting and in many cases unrealistic. But once policies are framed in terms of precise quantitative goals, it is very difficult to undo these effects. In UK immigration debates, it has become difficult to resist assessing policy in terms of overall inflows of migrants, or articulating goals in terms of numbers. The net migration target may have failed, but it has profoundly influenced the way we deliberate on immigration policy.

Christina Boswell

 


On evidence tools for public health policy


A range of techniques and methods exist to assemble and present research findings in a way that will be ‘useful’ to policymakers. In public health, three of the most popular are Health Impact Assessments, systematic reviews, and economic decision-making tools (including cost-benefit analysis and scenario modelling). Despite their broadly shared goals, these methodologies have developed distinct, and often parallel, ‘epistemic cultures’ (Knorr-Cetina), through mailing lists, training courses, journals and conferences devoted to each one.

In a recent article we conceptualised all three as examples of ‘evidence tools’, arguing that despite their differences they all assemble, assess and present evidence in an effort to influence decision-making processes. Paradoxically, we found that despite this explicit aim, very little attention had been paid to how policymakers experienced these tools. Katherine’s interviews with public health policymakers suggest that, in policy practice, evidence tools are perceived as useful when they:

  • save time, especially where the work has been carried out by others
  • can be adapted to different contexts
  • convey credibility to external audiences
  • offer clear, quantified answers and/or predictions of likely policy outcomes

Scenario modelling, which is widely perceived to have been a critical factor in the introduction of minimum unit pricing for alcohol in Scotland, was described as particularly appealing because it predicted a very specific, quantified benefit (for example, potential lives saved). This was described as ‘gold dust’ in the political process. However, most research users were frank in admitting that they had little understanding of how modelling produced this figure. Far from being a drawback, we argue (in contrast to researchers who have found that policymakers value transparency in their evidence tools) that in public health policy, at least in this particular example, the ‘black magic’ of modelling actually appeared to enhance its appeal.

The practical advice often offered to researchers on making their findings more useful, or ‘impactful’, presents failures of evidence-based policy as a supply-side issue: research findings are not relevant enough, too wordy, or buried in obscure academic journals. In contrast, examining how policy actors describe using tailor-made ‘evidence tools’ highlights the complicated role evidence plays within the inevitably political and democratic process of policymaking.

 

Ellen Stewart and Katherine Smith


SKAPE is 1 year old!


We look back on a busy year, and talk about next steps for SKAPE

SKAPE is celebrating its first anniversary today. We launched the Centre last June with a symposium on “Open Science, Open Society”, with guests Jill Rutter and Albert Weale. Other highlights of the year have included keynote lectures from Sheila Jasanoff, Jenny Ozga and Brian Wynne. Last June we launched our new Palgrave Macmillan book series on Knowledge and Governance (edited by Richard Freeman and Kat Smith), using the occasion to organise a workshop bringing together some of the best new scholarship in this field. Thanks to a grant from Edinburgh University’s Institute for Advanced Studies in the Humanities (IASH), we have hosted three further workshops: one mapping the emerging field of knowledge and policy studies; a second on science and democratization; and a third, co-hosted with the Graduate Institute Geneva, on the production of strategic ignorance in global governance.

We’ve also been building international networks. October saw the launch of SKAPE-Net (also IASH-financed), a consortium of leading scholars working at the interface of policy and science & technology studies. The network includes colleagues from Harvard, Cornell, Technical University of Berlin, ARENA Oslo, Graduate Institute and Nijmegen University. In April we launched a new Research Network within the European Consortium for Political Research, and we’re convening a section at this year’s annual conference in Montreal.

SKAPE members have continued to be successful in securing grants, including from the ESRC, the Belmont Forum/NERC, the Leverhulme Trust, the Wellcome Trust, and the Swedish Research Council. Two of our members were awarded prestigious prizes: Katherine Smith was awarded a Philip Leverhulme prize, and Martyn Pickersgill the Henry Duncan Medal from the Royal Society of Edinburgh. You can read more about our research projects here. We end the year with a SKAPE retreat (or “e-SKAPE”), at which we’ll be examining the role and effects of quantification in public life.

So what’s on the programme for the coming year? We’re going to continue our research focus on two key themes:

  • Knowledge democratization. We will be developing our research on citizen science, through Eugénia Rodrigues’s new Citizen Science and Crowdsourcing network. And we will be producing a special issue on ‘Science and Democracy in Practice’, which explores the logic of attempts to democratize science and expertise.
  • Monitoring. We’ll be publishing more of the findings from our ESRC project on the Politics of Monitoring, and organising a series of dissemination events with the Institute for Public Policy Research in London. We also plan to publish a special issue on strategic ignorance in global governance. And we will be developing collaborative research on quantification and public life, and on ignorance and political rationality.

We also plan to expand our engagement in knowledge exchange. We will be reflecting on the impact of the ‘impact’ agenda, through a series of events at the University of Edinburgh.

If you’d like to be kept posted on events and receive our twice-yearly newsletter, please contact ada.munns@ed.ac.uk.



Targeting brains, producing responsibilities: The use of neuroscience within British social policy


In a range of areas, the neurosciences have been described as influential – changing, it seems, policies, ideas on mental health, and our notions of selfhood more generally. In a Leverhulme Trust-funded project we are looking at the way the neurosciences are (and are not) adopted in policy, the media, and family life. In a recent paper in Social Science & Medicine, we report on the first part of this study. We analysed whether and how a range of policy documents engaged with the neurosciences. The documents were focused on one of three different stages in the life course: the early years, adolescence, and older adulthood.

Responsibility came up as a key theme in our analysis. Drawing on the work of Michel Foucault and more extensively on that of Nikolas Rose and Peter Miller, we studied how the neurosciences can be and are employed in order to stimulate certain types of responsibilities for citizens. We present the results in terms of three (overlapping) discursive themes relating to responsibility.

The first is that of optimisation, by which we mean a focus on the practicalities of maximising a broadly-understood human ‘potential’. Especially in documents regarding the early years, an implicit or explicit imperative is presented for individuals to meet their potential and to help others achieve theirs. This potential is frequently described in terms of optimal brain development of young children. At the same time, some of the policy reports we analysed are more critical of the optimisation discourse or of the neurobiological idiom underlying the discourse.

The second theme is that of self-governance. Here, individuals are urged to take care of themselves, or others are urged to facilitate self-governance. Where young infants are not seen as able to govern themselves, their caregivers are urged to self-govern in ways that will ensure the development of an (often cerebral) platform from which children will eventually learn this skill. Moreover, people are seen as primarily responsible for self-governance in such a way as to reduce the chances of dementia and ‘ensure’ cognitive vitality.

The last theme is that of vulnerability. Perhaps surprisingly, teenagers and their brains are more likely to be explicitly framed as vulnerable than infants. Vulnerability is related to the ‘risky’ behaviour of teenagers, but also to the idea that the risks they take can have more impact on their still-developing brains.

Thus, reports discussing policy across the life course ascribe specific social problems to the functioning of brains, yet the solution they plead for is often a relational one, where parents have a more loving relationship with their children and understand their teenagers better, and where people care for and understand the behaviour of those with dementia. Our analysis, moreover, indicates the import of neuroscience to UK social policies, whilst simultaneously suggesting the importance of being mindful of the limits to the deployment of a neurobiological idiom in policy settings (and elsewhere). We also show how critical discourses may proliferate even within the terrain where the terms and concepts of the neurosciences occupy space.

Tineke Broer

Centre for Population Health Sciences/Centre for Research on Families and Relationships, University of Edinburgh

Tineke.Broer@ed.ac.uk, @tineke_broer

Martyn Pickersgill

Centre for Population Health Sciences/SKAPE, University of Edinburgh

Martyn.Pickersgill@ed.ac.uk, @PickersgillM

 

Paper: Broer, T. and Pickersgill, M. (2015) ‘Targeting brains, producing responsibilities: the use of neuroscience within British social policy’, Social Science & Medicine, 132, 54-61.


Targets, quantification and moral deliberation


Much has been written about the ways in which quantified targets and performance indicators distort and compress the social dynamics they seek to represent. And scholars of science and technology studies have convincingly shown how such representations are not just descriptive but also performative, shaping our beliefs and norms about policy problems and appropriate responses.

But less has been said about how such compressions affect deliberation on questions of moral duties. How do the sorts of compression and simplification implied by quantification affect how we reason and debate questions of distributive justice, rights, or duties? This is not simply an academic question. The use of quantified indicators and targets is becoming mainstream in a number of policy areas which touch on issues of distributive justice. Such instruments are widely used to compare and assess trends on global poverty, human rights, development, and democratisation. In the area of immigration, national policies are increasingly compared and benchmarked through ‘indexes’, and the UK has been at the forefront of rolling out targets to codify policy goals in immigration and asylum. So how do these forms of quantification affect how we think about moral duties, and especially duties to non-nationals?

Predominant liberal theories of justice and rights suggest that moral reasoning involves abstracting from particular, personal and emotive considerations, and adopting an impartial perspective. On this account, moral duty is revealed – and motivated – by rational deliberation. That would imply that quantification might abet such processes of reasoning, by providing clear, comparable data stripped of the sort of emotional and partial baggage that could bias deliberation. This echoes more optimistic ideas of quantification as having an equalising, or ‘flattening’, effect on questions of distributive justice, abstracting from morally arbitrary characteristics and counting each person equally.

Yet there is another view of moral deliberation and motivation, which sees it as grounded in affect, rather than reason: we are moved to recognise and respond to moral imperatives through our ability to empathise, to be affected by the plight of others. This way of thinking about morality has its roots in Scottish Enlightenment thought (notably David Hume and Adam Smith) and has been developed in feminist and psycho-analytic thought. It is also supported by cognitive psychology experiments on the role of affect in motivating ‘prosocial’ behaviour – indeed, studies have shown that identifiable victims are much more likely to trigger altruistic responses than the provision of statistics (as charity campaigners have long realised!). Of course, empathy alone is not a reliable route for ensuring commitment to norms of universal rights or justice. We need to exercise rational deliberation to infer more general duties from particular cases, or to channel or find a ‘fit’ for our affective inclinations in prevailing social norms. (John Charvet offers a good account of the relationship between sentiment and reason. I explored some of these issues in my inaugural lecture).

Now if we accept that affect plays at least some role in motivating duties to others, then quantification can only undermine the types of affective response or deliberation required. Quantification effectively brackets off, or ‘black-boxes’, the resources needed to underpin affective responses. It compresses the type of rich description required to motivate moral duties.  We are required to abstract from those features of our fellow human beings which might trigger concern, distress, empathy, or sympathetic identification.

The upshot is that quantified targets may have a far stronger performative role than generally acknowledged. By sterilising our representation of refugees, immigrants, victims of violence or poverty, they are suppressing the imaginative and affective resources we need to motivate moral duties.

Christina Boswell


Think-Tanks and the Governance of Science


Think-tanks play a key role in policy today. Yet, for scholars who are concerned with the dynamics within and between law and science, the place and impact of such organisations are often overlooked. To begin to remedy this, we held an event titled ‘Regulating Bioscience: Between the Ivory Tower and the Policy Room’ on 6 October 2014 at the Wellcome Trust Conference Centre. With participants ranging across life science, social science and the humanities – and including members of policy organisations and think-tanks – we looked at some of the issues around think-tanks and the governance of science. The event was convened as part of our AHRC Technoscience, Law and Society Network, and held in partnership with the BBSRC.

 

In order to give an overview of how a funding agency engaged with think-tanks and considered issues of science governance, we invited Patrick Middleton, Head of Engagement at the BBSRC, to give some opening comments. Patrick discussed how the BBSRC decided its funding policy through formal, structured consultations, as well as informal conversations with a range of people – including representatives of think-tanks. The reports of bodies like Demos and the Nuffield Council on Bioethics were also described as important for thinking about future funding and governance trajectories for the BBSRC. One of the most significant ways the BBSRC directed research was through apportioning funds to particular large calls. Further, the steering of science was achieved by talking about it: by noting what kinds of research the BBSRC feels are important in blogs, press releases, and on Twitter. Patrick described how the scientists he worked with often saw law as a help to research: it removed uncertainty about what could be done (and what shouldn’t be), setting the boundaries of acceptability.

 

The next speaker was Jack Stilgoe, Lecturer in Science and Technology Studies at UCL (and formerly a member of Demos, and of the policy group at the Royal Society of London). Jack discussed his perspectives on technology governance, drawing on the work of Langdon Winner and Roger Pielke, Jr. He argued that science and technology influence our lives in profound ways that are often unaccountable. Accordingly, innovation should be the subject of substantive governance, involving public debate and participation. Jack spoke favourably of the degree to which the EPSRC recognised this, including their commissioning of work (with which Jack was involved) that underscored how innovation should entail anticipation, inclusion, reflexivity and responsiveness. These themes relate to the wider project of ‘responsible research and innovation’ (RRI), which has been adopted by a range of funders internationally. Yet the issue of ‘responsiveness’ has been less frequently considered. Think-tanks and other organisations may have a key role to play in helping scientists and their sponsors think through this, not least through the opportunities they provide to foster dialogue and collaboration between diverse actors and institutions.

 

Joanna Chataway, from RAND Europe and Professor of Biotechnology and Development at the Open University, picked up on some of Jack’s themes of interdisciplinarity, discussing the importance for think-tanks of having members from a range of disciplinary backgrounds. Joanna situated her talk against the backdrop of a contemporary governance style within which, since the 1990s, the UK government have increasingly stated that they consider ‘evidence’ important in policymaking (e.g. around the regulation of science and technology). Think-tanks are one means through which such evidence can be produced, and they are especially good at working fast, and also with a wide range of forms of ‘messy’ (qualitative and quantitative) data. As Joanna pointed out, though, the expertise and commitment required to collate and analyse such data is expensive, raising questions regarding who pays for the work of think-tanks and what agendas are implicit within particular projects. Further, and echoing Patrick, Joanna described how getting evidence into policy is not a straightforward or linear process: it requires the production and circulation of reports, but also the movement of, and engagement between, different people and the ideas they are working with.

 

Our last speaker was Hugh Whittall, Director of the Nuffield Council on Bioethics, who gave reflections on the place of publics in governing science and technology – and the role of think-tanks in helping to enable their participation. Like Joanna, Hugh emphasised the lack of linearity of the ‘evidence to policy’ journey, and described how this presented opportunities for a range of actors to influence the decisions of policymakers. In particular, think-tanks (as Jack also described) can present opportunities for different ‘stakeholders’ to meet and discuss their hopes and concerns, including through structured public participation events that seek to address democratic deficits in science policy. However, there are questions here about how particular publics are constituted and assembled through engagement events: which interests come to be represented, and which are excluded? Hugh also cautioned against assuming the neutrality of think-tanks; instead, it should be remembered that these organisations carry with them particular assumptions about society and technology. This does not make particular organisations ‘good’ or ‘bad’, but it does invite careful reflection from publics (including academics) regarding which think-tanks they engage with, and how they go about doing so.

 

Finally, Jane Calvert, Reader in Science, Technology and Innovation Studies at the University of Edinburgh, gave closing reflections on the talks. In particular, she picked up some of the threads from the presentations around the normative implications of the work of think-tanks. Why, for instance, do think-tanks pay attention to some things and not others? Certain issues appear intrinsically more controversial than others, but what work do think-tanks do in constructing that controversy? Likewise, the governance of ‘emerging technologies’ is frequently a matter of concern – yet the ontology of ‘emergence’ remains both opaque and socially negotiated. Who, then, sets the agenda for think-tanks, and what are the bounds and limits of what they can work on? What are the things think-tanks should be paying attention to, but which are currently annexed out of their purview? Such issues speak to the broader concerns of the AHRC Technoscience, Law and Society Network, and will be elaborated more fully in our next Network event.


Martyn Pickersgill, Associate Director of SKAPE, University of Edinburgh

Emilie Cloatre, Senior Lecturer in Law, University of Kent


The power and politics of international assessments in Europe


International education assessments have become the lifeblood of education governance in Europe and globally. However, what do we really know about how education systems are measured against one another and the effects this measuring produces? Operating as a new form of global education governance, international assessments create a powerful comparative spectacle focused on the performance and apparent ‘effectiveness’ of education systems around the world; this spectacle now takes in not only the global rich but also those countries which are often pejoratively described as ‘developing’. Yet despite the dominance of international assessments and their ever-deeper reach into the logic and planning of education, there are still many areas of critique and complexity: the ways these studies are organised and delivered; the impacts they have through decontextualizing education and quantifying some aspects of it (but not others); the effects they have on what is considered worthy of teaching and knowing; and, most importantly, the interlinkages that are silently yet powerfully made by commensurating education with similar policy instruments that measure the economy, the labour market, even health, migration and international development; the list could go on.

Much attention has so far been given to the OECD Programme for International Student Assessment (PISA). But why and how has PISA become such a powerful force in education policy-making? To use a metaphor from the medical sciences, PISA took an apparently rapidly worsening patient (according to the diagnosis of the OECD) – education in Europe – and supplied it with a life-saving, and life-changing, transplant. All the essential parts were already there: an education industry; numerous national experts and statisticians; the believers in linking education with the labour market, as well as its critics; and the indicators that the OECD had already been preparing since the 1970s, as well as other international studies that had prepared the field: the IEA’s Progress in International Reading Literacy Study (PIRLS), Trends in International Mathematics and Science Study (TIMSS) and the previous OECD International Adult Literacy Survey (IALS) and Adult Literacy and Life Skills Survey (ALL) studies. In addition, from a more European point of view, a soft governing tool (with a hard agenda!), the Open Method of Coordination, was also ready to be launched and change the European education policy landscape for good. PISA became the heart that breathed life into this previously disparate body. This heart was beating the beat of comparison and competition, connecting the parts into a single entity, itself represented by the OECD rating and ranking tables. The PISA charts became the totemic representations of the new governing regime, excluding caveats or any awkward knowledge in order to offer policy makers what they are often after – fast-selling policy solutions.

This is the beginning of a story that has been eloquently described and analysed by a number of academics in the field. The Laboratory of International Assessments was set up to investigate ‘chapter 2’ of this story and ask: now that international assessments are with us (and seem to be here to stay), what are their long-term effects on education governance in Europe and globally? What do they mean for the knowledge and policy relationship, and what do they suggest about the changing politics of education policy in the 21st century? How do policy makers use them (if they do)? Can participation in their organisation and management be more open and democratic, or does their statistical complexity render them legible only to the very few? These and many other questions are what we intend to discuss over the next couple of years in the Economic and Social Research Council (ESRC) seminar series on ‘The Potentials, Politics and Practices of International Education Assessments’. The first seminar, on ‘Education Governance and International Assessments’, will take place at the University of Edinburgh on 11 and 12 December – it is already oversubscribed, a fact which shows the increasing interest in and attention to the phenomenon among the scholarly, policy and testing agency communities. For more commentaries and focused analysis, watch this space – we are only just starting!

Sotiria Grek


What is the patient experience?


As evidenced in documents and reviews such as High Quality Care for All, Equity and Excellence: Liberating the NHS, and the NHS Institute for Innovation and Improvement’s Patient Experience Book, references to ‘the patient experience’ have become increasingly pervasive in healthcare policy in the UK. While a concern with how people experience health and illness has long been a topic of interest in Medical Sociology and Anthropology, the emergence of the patient experience alongside quality and safety as a key measure of healthcare services is a more recent phenomenon. Yet despite its increasing prominence, what counts as a patient experience and indeed how these experiences can and should be counted remains up for debate. This is reflected in the first section of the aforementioned Patient Experience Book, which asks ‘What is Experience?’

The answer: ‘Patient experience is what the process of receiving care feels like for your patients. Understanding patient experience can be achieved through a range of activities that capture direct feedback from patients, service users, carers and wider communities.’ Two words – ‘feels’ and ‘capture’ – are crucial for understanding the work being done by the notion of the patient experience in healthcare policy. First, it is about recognising that feelings, subjective experiences, emotions and responses are a crucial part of healthcare. Second, it is about trying to find ways to capture and measure these experiences and perceptions in order to assess and ideally improve the quality of health services. Given that people’s experiences of healthcare have traditionally been overlooked in health policy in favour of more ‘objective’ measures such as clinically defined health outcomes, it is hard to imagine anyone objecting to this long overdue recognition of their importance. The second half of the patient experience equation is, however, more problematic. For if the patient experience is a feeling, an emotion, something nebulous and hard to define, how can we capture, measure and quantify it?

Traditionally in Medical Sociology and Anthropology the response to this has been that we can find ways to articulate, to give ‘voice’ to, the patient experience, usually through narrative or other qualitative methods, but that it is by its very nature something that defies quantification. As such, the patient experience has typically been treated as a ‘subjective’ counterpart to ‘objective’ biomedical knowledge. In contrast, within health services and related research, a range of methods, techniques and devices, such as Patient Experience Trackers (PET), patient-reported outcome measures (PROMs), customer satisfaction surveys, and web-based patient feedback, have been developed to inform patient choice, health policy and clinical practice. The internet and other information technologies are playing an increasingly important role in this, enabling the large-scale collection and aggregation of different forms of experiential data. As methods and technologies for turning ‘subjective’ experiences, emotions, sensations, and thoughts into portable forms of knowledge are becoming ubiquitous in healthcare and other domains, it becomes increasingly important for social scientists to explore their history, assumptions and practical implications. However, while deconstructing ‘objectivity’ has a respected pedigree in social studies of science and related fields, far less attention has been paid to what Steve Shapin refers to as the ‘sciences of subjectivity’: how supposedly ‘subjective’ experiences are being turned into particular forms of knowledge and evidence.

The patient experience in healthcare research, policy and practice will be a key topic of discussion at the Experience as Evidence? conference that will take place in Oxford on 13-14 October 2014. This event is co-organised by members of SKAPE (Fadhila Mazanderani) and SKAPE-Net (Malte Ziewitz) along with colleagues at the University of Oxford (Angela Martin, John Powell, Louise Locock, Steve Woolgar, Sue Ziebland). The event is made possible by the generous support of the Foundation for the Sociology of Health and Illness and the Wellcome Trust, and by the National Institute for Health Research (NIHR) under its Programme Grants for Applied Research funding scheme (RP-PG-0608-10147).

Fadhila Mazanderani


The Use of Expertise in the Scottish Referendum Debate: Build Them Up to Knock Them Down?


In a wonderfully perceptive article from 1999, German sociologist Peter Weingart identifies two paradoxes surrounding the use of science in political debate (and we can apply this to expertise more generally). First, late modern societies show an unprecedented dependence on expert knowledge to assess the risks and consequences of political action. Politics becomes ‘scientised’. But at the same time, science has also become politicised, thus undermining the authority of scientific claims in public debate. The second paradox is that, rather than this politicisation leading to the marginalisation of expertise in political debate, political actors continue to rely on it to bolster their claims. They may be sceptical about the validity of research findings, but they are nonetheless committed to the (often ritualistic) deployment of knowledge claims. Science is still considered necessary to underpin rational debate and decision-making.

The first paradox is certainly manifest in the debate on Scottish independence. From the outset, the media have been emphasising the importance of impartial, independent, expert advice to guide voting decisions. And political parties have been keen to substantiate their positions with evidence and expertise. Indeed, expert knowledge has been attributed far more weight than is the case in most political campaigns. Throughout much of the campaign, public debate has taken a largely technocratic form, with constant appeals to academics and experts to weigh in with assessments about different post-referendum scenarios. The apparent deference to experts can be partly explained by the high degree of uncertainty in predicting the outcome of a yes vote. And since most of the contention has revolved around what would happen if Scotland were independent, it’s not surprising that expertise is considered especially relevant. Standard elections revolve largely around assessments of the record of incumbents – claims which may be contested, but at least there are multiple and fairly reliable sources of knowledge for making such assessments. By contrast, when predicting the future, the rationalist impulse is to look to more abstract forms of modelling, or extrapolation from relevantly similar cases. And such forms of reasoning are of course the trademark of academics.

But as we reach the final stages of the campaign, such contributions are turning out to have limited traction or credibility. Lo and behold, the media find that experts don’t agree in their assessments. The ‘science’ must be flawed. Rival protagonists are exposed as partisan, or at least their arguments are being mobilised to substantiate partisan views. Either way, the science becomes politicised. Expertise is exposed as yet another weapon in the arsenal of politicians, and loses its authority.

The interesting feature of this debate, though, is that Weingart’s second paradox is not in evidence, or at least not in this final stage of the campaign. Rather than sustaining the ritual of technocratic contestation, the debate appears to have been increasingly stripped back to its raw, identity-driven essentials. And it begins finally to resemble an authentic debate about self-determination or unity. Of course, rival claims about the economy, or health, or pensions are still being asserted, and may still influence voting. But what might be seen as the charade of technocratic decision-making has been exposed. Long live the visceral politics of identity and belonging?

Christina Boswell


Can We Democratise Decisions on Complex Issues?


Professor Albert Weale FBA (UCL) writes about the challenges of knowledge democratisation

Issues like the funding of highly expensive pharmaceutical interventions, new forms of animal breeding, dispersed chemicals in the environment, the genetic modification of plants or the choice among different forms of energy production make for hard public policy decisions. They are highly technical, involving evidence about complex chains of causality, relative costs and benefits, the assessment of statistical and other models and a judgement as to how far unforeseeable circumstances will change the picture. Yet they all inevitably involve consideration of social values: respect for life, justice among potential beneficiaries, prudence in the setting of standards, the responsible stewardship of nature and the obligations that this generation owes to future generations.

As if this were not enough, they all attract high levels of capital funding, often in the form of venture capital, where there are large rewards to be secured from the widespread uptake of the technology, so calling into play the reputations and careers of research scientists who are themselves often reliant on private capital. They can all be the focus of campaigns, not always scrupulous ones, by social movements and non-governmental groups. They are the subject of international regulation and control. Finally, they all require some policy response – even if that response is laissez-faire – in a context in which there is intense public concern. So, inevitably, they all raise the issue of the public acceptability of technologies and policies.

Over the last fifteen years or so, the issue of public acceptability has been approached through the use of techniques involving minipublics: selected groups of citizens invited to give their opinion on policy questions after exposure to evidence and argument. Such minipublics have included citizens’ juries, deliberative polls, permanent groups like NICE’s Citizens Council, or sometimes just focus groups. These participatory techniques are important, but they are not a panacea. Deliberation in minipublics does not always produce consensus, and, even when it does, there is the unresolved question of the status of the minipublic’s deliberations. What force, after all, might their conclusions have for the wider public?

In this context, there is a powerful case for looking again at how public accountability in such matters can be improved within representative democracies. Too often, for example, we neglect the role of parliamentary committees, including committees like the House of Lords Science and Technology Committee, which are in a unique position to scrutinise policy. They can call witnesses, draw upon expert support, question decision premises and produce reports that have a real effect on how governments frame their policies.

Secondly, a well-functioning democracy needs effective systems of public consultation on proposals for policy. The House of Lords Committee on Secondary Legislation was rightly critical of the coalition government’s proposal to move away from a standard presumption of thirteen weeks’ consultation in its July 2012 statement of Principles of Public Consultation, a document that noted that 12 weeks might be needed for a consultation on nuclear energy!

Will the present commitment to open policy-making help? The intention to open up the policy-making process to new voices is to be welcomed. Sometimes crowd-sourcing is the right way to make policy: think, for example, about the identification of cycling accident hotspots. But the wider the range of inputs into policy – not just of actors but also of the types of evidence and information they supply – the harder becomes the task of combining those inputs into a meaningful chain of reasoning. Here again the norms of due process – recording the evidence, providing traceability of argument and making sure that there is no undue influence – become central.

Transparency is a hard discipline on matters of public policy involving science. Transparency and accountability are not enough for true democratic decision making. But in a representative democracy they are essential.