The travelling inspector? Education policy and the making of Europe

The travelling inspector is a new phenomenon: although education in Europe has always ‘travelled’, inspectors were firmly rooted and derived their influence from their local and authoritative standing as education ‘connoisseurs’. However, the creation of SICI, the Standing International Conference of Inspectorates, twenty years ago, and its increasing influence in bringing school inspectors together across Europe since the early 2000s, present us with an interesting case of a professional community on the move.
In order to understand why European inspectors are leaving their local ‘knowns’ and are now voluntarily and actively looking into new ‘un-knowns’, it is worth examining the case of Education Scotland, an agency created in 2011 to foster the creation of a learning education system; its remit is no less than to support the formation of professional peer learning communities by adopting the role of ‘the knowledge brokers, and knowledge managers, and knowledge transfer agents’. Indeed, Education Scotland has been exceptionally active in spreading the ‘self-evaluation’ paradigm across Europe; the result, after more than a decade of Scottish inspectors being on the move, is that the Scottish inspectorate is regarded ‘as one of the leading if not THE leading inspectorate in Europe’.
Thinking about the Scottish case is particularly useful for the study of international policy communities, their formation and their particular workings, as it signals a new level of ‘political work’: that of exporting, internationalising and then importing afresh one’s local/national knowledge, once it has successfully passed the international ‘test’ and is therefore shown to be still relevant and future-proof. This is exemplified well by the role of these actors who, rather than being Brussels-based Europeans, assume a European identity according to its exchange value – given the current political situation in Scotland and the Scottish National Party (SNP) government’s aspiration for independence, that exchange value is high for Scottish actors.
But what does all this mean for the study of policy learning in Europe, and indeed for the building of Europe itself? Through previous work on the Europeanising and converging effects of quality assurance and evaluation processes in the field of education, I have constantly been confronted by actors who deny that these effects exist, even though their actions and practices emphatically and repeatedly confirm the opposite. Meanwhile, the number of travelling inspectors around Europe keeps growing, as does their acknowledgement of the benefits and the mutual learning of ‘best’ practice that this travelling produces. What, then, is different about the Scottish inspectorate? What is distinctive about inspectorates in Europe in general, now that they have become so mobile and receptive to lessons from abroad? Why do they advertise and pursue these exchanges when others stubbornly do not? The case of the ‘travelling inspector’ confirms the view of education as a valuable policy area for the understanding of Europeanisation: it illuminates the significance of learning not only as a resource for economic and social cohesion, but crucially as a governing mechanism for the travelling and exchange of policy at the international level. The ‘answer’ lies precisely in what the head of Education Scotland said – ‘we need to live the talk’ (Scottish Learning Festival, 2011). Talk of self-evaluation and the creation of peer learning communities at the level of the school needs to be reflected in similar work at the very top, and this is precisely what this inspectorate has been pursuing internationally over the last decade.
First, the case of the Scottish travelling inspectors shows how ‘Europe’, rather than existing as a separate and democratically deficient political entity, is in fact continuously fabricated and capitalised on in the political scene at home. In other words, and using the usually problematic language of ‘levels’, it is the ‘national’ which, far from diminishing in its role and power, makes Europe happen. It is in the examination of national policy spaces that one finds the most useful and enlightening examples of Europeanisation in action.
Second, and for the reasons just mentioned, the Scottish case signals a need to shift the analysis of Europeanisation away from the well-trodden corridors of the Brussels European quarter towards more local and apparently peripheral spaces. A sociological examination of the interactions of international actors who come together in such policy and physical spaces could move the European studies agenda from the more top-down, relatively obvious and by now rather stale examination of ‘formal’ European processes to other arenas which now take advantage of their knowledge and learning potential – or which, at least, we only now acknowledge as such. Given all the above, and paraphrasing Monnet, if we were to begin the study of Europe all over again, why would one not start from education?

Sotiria Grek


Open Policy Making: Procedural or Instrumental?

Jill Rutter of the Institute for Government writes about the UK Government’s approach to ‘open policy making’.

One of the questions at the SKAPE launch on Thursday was whether the UK government was pursuing open policy making for procedural reasons (increasing involvement and democratic engagement in the policy making process) or for instrumental reasons (getting better results).

Part of the problem with open policy making is that the phrase suggests the former, while the emerging practice makes it clear that it is intended to achieve the latter. The UK Cabinet Office’s own description of open policy making is, in fact, a description of what the “more open” policy maker does.

Many of the initiatives taken under the open policy making banner underline that this is about drawing on wider sources of expertise, applying new techniques, improving the evidence base and doing policy differently:

• The Contestable Policy Fund, which allows the Cabinet Office to match-fund departmental bids to get policy advice from outside the civil service – 16 bids have been funded so far;
• The establishment of the Policy Lab – now on its first project – to apply MINDLAB-style ethnographic and design techniques that incorporate a user perspective better into service design;
• The work of the – now spun out – Behavioural Insights Team to promote both the application of those insights and a more rigorous approach to experimentation in government, as exemplified by the title of their publication, Test, Learn, Adapt;
• The establishment of new What Works Centres, building on the long-established role of NICE and the more recent work of the Education Endowment Foundation, to give both policy makers and commissioners a better handle on the evidence for effective intervention.

All these are potentially useful additions that could improve the way policy is made. But at the moment they are either at the proof-of-concept stage, just being established, or dealing only with quite technocratic niche issues. The question remains whether ministers will be willing to apply this new way of doing things to some of their more cherished commitments – and whether we will see significant changes to some of the most secretive policy making processes. The fact that the Cabinet Secretary has said that open policy making is less risky policy making did not stop the Chancellor revelling in the fact that he could wrong-foot the opposition with a giant pensions rabbit in the Budget.
And even if these changes do make for better outcomes, they do not necessarily improve public engagement in the policy process. Indeed, as Albert Weale pointed out on Thursday, at the same time as the civil service reform plan was “making open policy making the default”, the government was rewriting earlier guidance to reduce the obligation to consult.


Why real policy impact is so difficult to evidence

Many of us recently went through the painful experience of trying to evidence the impact of research on policy, as part of the REF 2014 process. One of the problems with this endeavour is that policy-makers are likely to be reticent about the influence of research precisely in cases where it has affected policy. Yes, I know that sounds counter-intuitive. Let me elaborate.

Most studies about the uses of research in policy-making focus on how far research is used to adjust policy. Indeed, this instrumental, or problem-solving, model dominates thinking about evidence-based policy, as well as the impact agenda. Back in the 1970s, Carol Weiss famously challenged this idea, suggesting that research often has a more subtle and gradual impact, through its ‘enlightenment’ function. But while the notion of enlightenment seemed to capture the influence of knowledge in many cases, it still stuck to the basic assumption that the value of research for policy lies in its capacity to improve government decision-making or performance.

This instrumental view overlooks the more symbolic ways in which knowledge can be a valuable resource for politicians. In previous work (Boswell 2009), I distinguished two such uses: legitimising and substantiating. Legitimising knowledge use is where policy-makers value research as a means of bolstering their credibility in taking sound, rational decisions. They can point to the fact that they commissioned research, or host a research unit, or carry out data analyses of policy problems. Substantiating knowledge use refers to the deployment of research to back up particular claims or preferences. Policy-makers can invoke – ideally independent – research findings to add weight to their claims.

My study of the political uses of research in the field of immigration policy suggested that much – probably most – research used by policy-makers in the UK, Germany and the European Commission was valued for its substantiating or legitimising functions. That may not apply as widely in more technical policy areas, or in those less prone to symbolic policy-making. But I suspect that much of the ‘impact’ laid claim to in REF case studies would fall into this symbolic category.

And now comes the paradox. How can we tell what function research is playing in policy-making? How can we distinguish between instrumental, legitimising and substantiating uses? I developed three indicators that might help us gauge the function played by knowledge. One of these was the extent to which governments publicised or drew attention to the research they cited, commissioned or carried out. Where research is valued for its legitimising function, we would expect policy-makers to be keen to publicise the existence of the study/research unit, highlighting the authority and independence of its authors. They would be less concerned about content; the point is to signal the credibility of their knowledge base. Where research is valued for its substantiating function, we would expect policy-makers to focus on substantive findings that support their claims. We might also expect them to be keen to demonstrate its robustness, especially in the face of scepticism from their opponents.

But when research is used instrumentally to adjust policy, policy-makers will be at best neutral about publicising it. The point of instrumental research use is that the research in question is considered a resource for improving policy outputs or performance. The political benefits accruing from these adjustments are related to how they affect the target of intervention – not to what they signal about government competence or credibility. So policy-makers may see little point in publishing or referencing the underpinning research. Some organisations may even be reluctant to credit pieces of research that had an influence on their thinking.

Moreover, if we accept Weiss’s point about enlightenment, governments may hardly be aware of how concepts or insights from research have gradually shifted their thinking. Yet it is often those sorts of processes of gradual diffusion that are the most likely to bring about radical shifts in framing policy problems.

The upshot is that real impact – as defined by REF – is going to be far more difficult to track than more symbolic forms of research utilisation. Governments will be keen to broadcast research that supports their arguments or bolsters their credibility. They will be far more reticent about findings and ideas that truly had an impact on policy.

Christina Boswell


Freedom and Reason as Rival Modes of Governance?

For the last two centuries or so the countries that now make up the core of the Western democratic and industrialized world have – on their good days – sought to honour two political and constitutional principles: to allow freedom of thought and belief, and to accept the role of reason in political, legal and policy-making life.

These principles tend to have contrasting “logics” in the sense that freedom of belief is liable to promote the idea that one has the “right” to believe whatever one is convinced by, while the acceptance of reason seems to suggest that people should subordinate their views to those who know better.

In one sense, the history of the last two hundred years has been about these societies working out which areas of life should be governed by the first principle and which by the second. In national politics we adopt the first principle; each of us – ideally – chooses for whom to cast our vote. In most areas of science and technology, we adopt the second. Seriously to maintain that the Earth is flat is not now seen as the earnest expression of a valid personal belief, but more likely as the sign of psychiatric disorder. Similarly, the second principle is enshrined in certain of our prized institutions: the whole notion of “expert witnesses” in court is based on the idea that some people know better than others and should be given certain privileges when it comes to offering testimony.

For most of the twentieth century it appeared that this big question was close to resolution and that we were getting better and better at working out which logic to apply where. But in the last few decades this agreement has become rather threadbare and suspect. The core idea of “Wikis”, for example, is that the knowledge of the crowd may be superior to that of the credentialed expert. Aided by developments in on-line opinion-making where everyone seems free to maintain that they are an expert, shaken by creationism and climate denialism, whose vocal communities apply enormous funds and energy to evading and undermining the second principle, and ruffled by deconstructionist critiques of science’s exceptionalism, the links between reason and opinion – between science, policy and knowledge – have acquired a renewed and troubling complexity. All of which makes this a most fitting time to launch SKAPE.

Steve Yearley


Targets in Public Policy: Disciplining or Signaling?

Targets have become a popular tool for galvanising improvements to public services across OECD countries. But targets also have an important signaling function: they can be adopted to signal commitment to, or underscore achievement of, a range of political or organizational goals. In a new paper prepared as part of the Politics of Monitoring project, I explore this dual role, looking at the use of targets in UK immigration and asylum policy. The paper, which was presented at the UK Political Studies Association conference in April, focuses on the case of targets on immigration and asylum adopted between 2000 and 2010 as part of the government’s Public Service Agreements (PSA). The paper argues that:

1. The initial process-based PSA targets on asylum largely failed to function as effective political signals – with the result that senior political figures instead created new, more publicly digestible, targets outside of the PSA system. This seems to reflect a wider problem with attempts to signal performance through technocratic tools of measurement. Especially in policy areas characterised by more populist narratives, such as immigration and asylum, anecdote and focusing events may be more powerful in constructing policy problems and government performance than dry data and figures.

2. Where political leaders did set more high-profile targets, this created a number of political risks. Ambitious “stretch” targets in particular exposed them to the danger of being seen to fail. Even where the government was able to meet targets, it found that it was not politically rewarded: there was no “air time” for broadcasting news about government achievement of targets. Arguably, this asymmetry in the political capital accruing from public targets was one of the reasons Labour retreated from the use of targets as a signalling device in the late 2000s (though of course their successors embraced a new immigration target with zeal!).

3. The attempt to conjoin signalling and disciplining functions created a number of organizational problems. This was especially the case where (non-PSA) politically driven targets were set in a top-down manner, without due regard to organizational capacity. While such top-down interventions certainly galvanised action, the changes they effected were arguably short-term, highly localised, and tended to produce a number of distortions and forms of gaming.

You can find out more about the Politics of Monitoring project here.

Christina Boswell