
Social knowledge and governance.

The promises of evaluation*


Bernard Perret

Brussels, 27 March 1996.


The aim of this paper is not to take stock of the French system of interministerial evaluation of public policies, but to draw on the experience gained there - and, in particular, on the methodological approach of the Scientific Evaluation Council - to highlight some of the most promising implications of the concept of evaluation, in line with the hypotheses on the proceduralisation of collective action put forward by the leaders of this seminar. After decades of evolution of the concept, evaluation has come to be seen as a new way of approaching the rationalisation of collective action, broadening the traditional view of the relationship between the social sciences and political/administrative practice.

2. Evolution of ideas on the method and social uses of evaluation

Programme evaluation emerged in the United States before the Second World War and really took off in North America and certain European countries from the 1960s onwards. It was originally seen as a technique for rationalising public decision-making, based almost exclusively on quantitative methods. This positivist, instrumental approach gradually turned out to be inadequate both in theory and in practice, and, while it has not been completely rejected, it has now broadly given way to other approaches, which vary greatly but share the characteristic of taking seriously the many types of knowledge and the many types of interaction between knowledge and action.

The paradigm of medical "treatment"

From an epistemological point of view, evaluation, particularly the evaluation of social programmes, was originally based on the model of the protocols used to measure the effectiveness of experimental medical treatments. This approach reflected the behaviourism which pervaded the social sciences in the United States, particularly in the fields of psychology and education.(1) American writers on evaluation often use the term "treatment" to refer to all the measures to which the social "targets" of a policy are subject: "In the 1960s, ... the key evaluation issue was the black-box task of generating unbiased, precise estimates of the causal consequences of programmes or their major constituent parts. The preferred analytic designs for doing this were experimental, and the preferred analytic techniques were quantitative".(2)

The rational decision-maker model

Similarly, the advocates of evaluation saw it as a way of applying rational scientific thought to public decision-making: "Twenty years ago, many evaluators naively anticipated that their results would be routinely used as the central input into policy decisions. The advocacy of experimentation at that time fed into this naiveté because the decision logic underlying experimentation seems to mirror the rational actor model from public policy. In this model, a problem or need is first clearly defined (in experiments, the analog involves specifying outcome criteria); alternatives for solving the problem are determined and implemented (the various treatments are put in place); the outcome criteria are then monitored (in experiments, data are collected); and finally a decision is made from the data about which alternative is best for solving the problem (a statistical test is conducted to assess which treatments are more effective)."(3)

This ideology culminated in Donald Campbell’s work on the "experimenting society", a virtuous utopia of a society in which the quest for truth through experimentation would be made central to socio-political regulation.

Towards a more complex vision of the method and purposes of evaluation

Since the 1970s, evaluation has faced a dual challenge, on both the epistemological and political fronts. In terms of epistemology, a critique has emerged of the positivist presuppositions which inspired experimental protocols and modelling. In most real social situations, it is very difficult to establish rigorously the existence of causal links, and even more difficult to measure their extent. Moreover, the re-examination of the social sciences’ claims to objectivity in the fields to which evaluation applies is not confined to questions of extent and causality: researchers today are more aware of the impossibility of adopting a totally objective standpoint, independent of the subjective perception of those affected, to describe the result of actions whose purpose is to serve the interests or change the living conditions of certain categories of the population. One of the crucial points, as Chen notes, is that there are many ways of defining the result of a policy: "The outcome criteria finally selected in an evaluation are only a limited set of a large pool of potential outcomes that might be affected by the programme".(4)

On the political front, evaluators have had to face the fact that little direct use is made of the results of evaluations. This is disturbing but not surprising. It is not enough for information to be intrinsically relevant to a problem for it to be used, because, as Jean Leca notes, "multiple, changing interests introduce new standards for action without warning: from these the ‘political decision-maker’ has a much better and more effective knowledge than the knowledge specialists".(5) Faced with these practical interests (the political influence of a lobby, the practical impossibility of undertaking reform immediately) or ideological interests, the abstract concern for truth has little weight, particularly since the decision-makers cannot spare much time for gathering information: in a world where attention is among the rarest of major resources, information may be a costly luxury because it may deflect our attention from what is important to what is not. We cannot afford to process an item of information simply because it is there.(6)

This crisis has prompted two developments. First, it has led to a reassertion of "qualitative" methods, based on the use of verbal material or texts, on the "naturalistic" observation of social reality (monographs, ethnosociology), or on the development of group work techniques (groups of experts or actors) - methods which, it emerged, were often likely to provide information more directly useful for action than quantitative methods: "Qualitative methods are very useful for making explicit the theory behind a program; for understanding the context in which a program operates; for describing what is actually implemented in a program; for assessing the correspondence between what the program theory promised and what is actually implemented; for helping elucidate the processes that might have brought about program effects; for identifying some likely unintended consequences of the program; for learning how to get the program results used; or for synthesising the wisdom learned about a program or a set of programs with similar characteristics"; or even for "answer[ing] the same questions about generalisation and descriptive causal relationships to which quantitative methods are primarily addressed".(7) However, there is no denying the objectivising power of figures, which have the merit, among others, of lending themselves better to comparison (in time and space) and to aggregation (putting partial findings together). It is now broadly accepted that evaluation requires the combined use of the two types of information.

Having adopted a more qualitative orientation, evaluation has also become more participatory, since cooperation by the different stakeholders in the policy evaluated, and, in particular, actors on the ground, is a precondition for the mobilisation of the practical knowledge that they possess.

The second development is that a more complex view of the social impact of evaluation has gradually taken hold. Without denying the practical use of evaluation, it became apparent that its main use in practice was to enlighten the decision-making context. Furthermore, it became clear that decision-makers are not the only users of evaluation: knowledge of the way in which a public action is implemented and the results it achieves is a useful resource for all actors. Hence the stress laid on the "educational" dimension of evaluation (evaluation as a process of training and increasing involvement), and, more recently, on the fact that evaluation may serve to increase the autonomy and capacity for action of a group of people (what D. Fetterman calls empowerment evaluation).

3. The "doctrine" of the Scientific Evaluation Council: from the "tool method" to the "process method"(8)

A pluralistic view of the purposes of evaluation

This coexistence of purposes and increasingly diversified methodological references has, inevitably, led to complex disputes about theory. In its annual reports on the development of evaluation practices, the Scientific Evaluation Council has always challenged the entrenched opposition between "managerial" and "democratic" evaluation. The development of evaluation has responded to a set of closely interwoven problems: budgetary difficulties, crisis of legitimacy of public action, complexity of policies and interpenetration of levels of government, dysfunction in public services, etc. Evaluation may be seen as having a variety of purposes, which may vary in importance, depending on the case, but which are in no way incompatible:

"(a) an 'ethical' purpose: to give an account to politicians and the public of the way in which a policy has been implemented and the results it has obtained. This dimension covers the improvement in the accountability of systems of public action, the informative purpose and the 'democratic' purpose of the evaluation.

(b) an educational purpose: to contribute to training and to increasing the involvement of public officials and their partners by helping them to understand the process in which they are participating and to adopt its objectives.

(c) a management purpose: to distribute human and financial resources more rationally between different actions and improve the management of the services responsible for implementing them.

(d) a decision-making purpose: to prepare the decisions involved in running, adopting or adjusting a policy."(9)

It should be added that, in the current context of collective action, one of the main purposes served by evaluation is to catalyse cooperation between autonomous public actors involved in the same action (it has been called the "language of partnership").

Mobilising all the relevant information and the contributions of all the different disciplines

The Scientific Council has likewise stressed the multi-disciplinary nature of evaluation, and the complementarity of the quantitative and qualitative methods. Taking the utility criterion into account requires the mobilisation of all the relevant cognitive resources, without taking account of divisions between disciplines: "unlike scientific research carried out within the framework of a specific discipline, which gives priority to a limited number of arguments, evaluation tries to use all 'heuristics' and to adapt to the real conditions in which the deliberation and decision take place. Like any discussion or reflection carried out for practical purposes, evaluation does not automatically exclude any element of information concerning its subject, whatever its nature (quantitative or qualitative) and origin, as long as it appears to be relevant".(10)

In practice, this means that evaluation uses a wide variety of information sources, either exploiting existing data or documents (previous studies, administrative data, legal texts, press cuttings), or carrying out ad hoc surveys or investigations to gather new data (statistical surveys by questionnaire, monitoring a panel of beneficiaries of a measure, in-depth interviews, monographs, hearings, groups of actors or experts).

New methodological issues

This increasing complexity of the concept of evaluation can be seen in the emergence of new methodological issues. Originally, questions of method posed by evaluation were no different from those habitually encountered in the collection, processing and interpretation of information in the various social sciences (observer neutrality, validity of experimental protocols, problems of modelling and statistical inference).

Once the diversity of models integrating knowledge into the operation of systems of action and decision-making is taken into account, the methodological and ethical principles applicable to the political and organisational management of evaluation take on greater importance. In addition to the social science "toolbox", evaluation must therefore form its own corpus of methodological principles, based on both epistemological and socio-organisational - even political - principles. Of course this does not mean abandoning the ideal of a reliable, objective knowledge of social reality, but this is not enough to define the aims and requirements of evaluation. In practice, the method responds to several challenges.

First of all, the variety of methods must be organised. In order to come to clear conclusions, it is not enough to simply juxtapose information of different kinds: on the contrary, there is a risk of confusion. The conditions in which heterogeneous data and arguments are compared and integrated therefore constitute a new field of methodological development.

Secondly, the extension of the debate on the social use of evaluation has brought to light the question of the conditions in which the results are usable: it is not enough for an item of information to be scientifically accurate for it to be considered credible, relevant and useful by its users. In practice, therefore, great importance is attached to (i) the quest for consensus (or, more precisely, the working out of a politically and socially legitimate standpoint) on the definition of the subject and the wording of the questions which the evaluation must answer, (ii) the credibility of the information used and (iii) the legitimacy of the interpretations and value judgements which underlie the conclusions, recommendations and proposals.

Solving these problems is not a matter of proven technical expertise: it requires a specific methodological construct for each evaluation operation and makes the quality and productivity of the relations established between the different stakeholders crucial. At most, agreement can be reached on the list and the order of the questions which must be addressed during the preparatory work before a study is launched (cf. Annex, the main stages in drawing up an evaluation project).

Institutionalisation, an alternative to self-regulation of a professional milieu?

One of the difficulties of formulating the regulatory principles of the "process method" in a uniform fashion is the diversity of the social situations to which evaluation applies. Furthermore, the degree of institutionalisation of the evaluation procedures is a key variable. Basically, a North American tendency to make the professional evaluator the guarantor of a specific evaluation ethic can be contrasted with a more European tendency to institutionalise the procedures. In the American context, the evolution of the concept of evaluation has been interpreted in disciplinary and professional terms: evaluators increasingly consider themselves to be methodologists, "facilitators", even "midwives" or analysts ("fourth generation" evaluators, in Guba and Lincoln's terms). It is in this sense that T. D. Cook’s remark must be understood that "the interest in method is one source of potential unity in a field where evaluators work in different substantive areas and have been trained in many different disciplines".

It is particularly significant that handbooks for evaluators specifically address ethical questions: "Because of the acknowledged political nature of the evaluation process and the political climate in which it is conducted and used, it is imperative that you as the evaluator examine the circumstances of every evaluation situation and decide whether conforming to the press of the political context will violate your own ethics".(11) Likewise, the definition of quality criteria specific to evaluation is understood as an internal matter for the evaluator’s profession.

In the European context, the development of evaluation often has a more institutional character. There is more talk about evaluation of "public policies" (not just programmes), and initiatives by the public authorities are playing a decisive role in the emergence of a range of evaluation practices distinct from both auditing and other study and research practices. The distinction between evaluation research on the one hand and institutional evaluation that is integrated into the operation of political/administrative systems on the other is more marked in Europe than in the United States. At the recent world conference on evaluation in Vancouver, it was also noted that European researchers gave more attention to political influence on the process and uses of evaluation than their American colleagues. The situation in France, with the role of institutionalised referee conferred on the Scientific Evaluation Council, is typical of this approach.

The Scientific Council, institutional guarantor of the autonomy of the process and the suitability of the methods for evaluation’s social purposes

In view of the powers conferred on it,(12) the Council considered that its role was to be the guarantor of the political usefulness, the scientific rigour and the ethics of evaluation. In the words of Jean Leca, president of the Scientific Evaluation Council, the evaluation of a policy takes place within that policy (it constitutes, in a way, an extension or enlargement of the system of action which that policy sets up), but this does not mean that it may be used or manipulated by the policy. The paradox of evaluation is that it is a useful policy resource only if it is accepted that it is partially detached from the policy, so that it can have its own credibility. The concept of an "evaluation autonomy zone"(13) is particularly important: it means that, by agreeing to play the game of evaluation, politicians (and any other actors that may be involved in the decision to evaluate) take the risk of putting their practical knowledge, their ideological conceptions and their "theories of action" to the test, in a collective process to which they are party but which they do not totally control. In other words, they must pay the price for the credible information and shared references which the evaluation aims to put together. In practical terms, this idea of an "autonomy zone" is implemented by negotiating a draft evaluation which formalises the agreement of the stakeholders in the evaluation on a subject, an approach and a cognitive strategy, and by setting up, for each evaluation, an evaluation committee, which is a steering committee with broadened and formalised functions (see annex).

The aim of the two opinions given by the Scientific Evaluation Council mentioned in the annex is to ensure that this conception of evaluation is adhered to. At the stage of the first opinion (quality of the draft evaluation), the purpose is:

- to encourage the political sponsors of the evaluation to explain their concerns, their "preliminary diagnosis" of the policy to be evaluated and what they expect from the evaluation;

- to help them to translate this general issue into questions which make sense for the social sciences;

- to orient the evaluation committee towards a choice of methods which are realistic and suited to these questions, in order to produce cogent arguments for the readers of the evaluation report (this means, in practice, that they should be based on several complementary types of argument: descriptive arguments, logical or theoretical arguments, opinion-based arguments);

- to ensure that the composition of the evaluation committee reflects the main relevant points of view, allows effective steering of the study programme and does not produce any malfunctions which could block the process (which, in practice, is not that simple).

The second opinion, on the quality of the evaluation report, has the dual purpose of:

- validating the results of the evaluation. There is no question of the Scientific Council expressing an opinion on the conclusions of the evaluation, let alone the relevance of the recommendations made by the evaluation committee; it should merely "assess both the degree of coherence between the evaluation report and the various studies carried out to this end, and the logical link between the recommendations and the findings and analyses of the report as a whole. The primary aim is to provide decision-makers with an external guarantee of the scientific value of the arguments put forward by the committee".(14)

- identifying the methodological lessons applicable to other evaluations.

These opinions are based on a set of quality criteria similar to those suggested by Chen in his book Theory-Driven Evaluations (utility-relevance, reliability, objectivity, possibility of generalising the results, transparency).

To ensure "transparency" in the evaluations, the Scientific Council lays great importance on the formal rigour of the arguments put forward in the evaluation reports: "an effort should be made in evaluation reports to rigorously articulate the judgements and the facts: ideally, all normative statements should be backed up by reasoned arguments, themselves based on duly documented observations .... Evaluation reports should comprise methodological sections, descriptions, reminders of the conclusions of previous reports, analyses based on new information, and, finally, interpretations by the evaluation committee. These various types of text must be distinguished as far as possible. In particular, imputations of cause and effect (X public action produces Y individual effect) and normative judgements must be clearly identified as such and backed up with reference to the studies carried out as part of the evaluation. The most commonly found flaw is the inclusion of unsupported value judgements in a descriptive exposition".(15)

This requirement of formal rigour is designed to warn readers of an evaluation report against an erroneous interpretation of its conclusions by drawing their attention to the gaps and uncertainties which limit its scope. It also attempts to satisfy didactic and even rhetorical concerns: the evaluation report must be a rigorous, readable and cogent communication tool.

4. Evaluation and procedural rationality: challenging the two rationalities of action and social knowledge

Although the Scientific Council has never conceptualised its doctrine in these terms, it can be described as "procedural", in the sense that it attempts to organise and systematise in the form of procedures the complex processes of reciprocal adjustment between the work of social science researchers and the practical knowledge of actors and decision-makers.

In the traditional view, there is an absolute dichotomy between scientific rationality and the specific rationality of the way in which systems of collective action work: the constitution of a scientific knowledge of social reality and the instrumental action within social systems are governed by fundamentally differing practices, with no possibility of interaction. Evaluation, in contrast, has to deal with the paradoxical proximity of these two systems of logic in the sense that it tries to organise the interaction between them: researchers and actors share the duty to constantly carry out more or less arbitrary tasks of describing reality, making judgements, constructing representations, generalising, etc. Any scientific activity can be regarded as a normative activity, and any complex social activity as an activity which includes theoretical aspects. The result is a double deconstruction of the rationalities of public action and of social knowledge.

A critical unveiling of the theoretical approaches and normative sets of criteria which underlie public policy

Any public policy or action is based on a "theory of action", a set of representations and ideas (often implicit) on which its initiators and/or actors predicate its operating mechanisms and the cause-and-effect relations between the measures taken and their expected social impact. One of the advantages of evaluation is that it requires the objectivisation of this theory (since it involves formalising its objectives and setting out an initial schema of its operating mechanisms) and therefore enables it to be put to the test. One of the leitmotifs running through the evaluation reports is that key decision-makers’ theories of action are over-simplistic and need to be refined and reformulated in the light of real social processes. Apart from the fact that they are almost always contradictory (it is not in politicians’ interest to make their choices between different objectives too clear), theories of action generally fail to recognise the capacity of the different protagonists of the "system of action" to superimpose their own rationality on that of the "official" objectives of a policy. If carried out properly, evaluation provides political and administrative leaders with a more realistic vision of the "co-production" of public policies, and leads them to give greater attention to the conditions in which they are implemented (including, primarily, informing and training the actors). In other words, evaluation contests the "ballistic" vision of the way that public decisions impact on society, and highlights the constant temptation for the key actor to underestimate the autonomy of the other actors and the various unintended effects of his or her action on society.

It should be noted that the aim of this deconstruction is not to delegitimate the actors’ rationality, let alone replace it with an all-embracing substantive rationality, such as economic rationality, which is likely to relativise the specific objectives of the various public policies. Unlike the public economy, which attempts to translate the value of public action into monetary terms (the concept of value for money), for example by simulating the existence of a market in fields where it should not normally play any role,(16) the evaluation of public policies implicitly endorses the heterogeneity and the vagueness of this value. The criteria against which the observed results of the policy are compared are still built on the goals which have been democratically set for it, even if these almost always need to be interpreted and updated in line with current priorities. It is true that these results must be weighed against the cost of the policy, but no conclusive consequence can automatically be deduced from this comparison. The question of value thus shifts from measurement to value judgement: in general, no single answer can be given in the absolute, but one can be given with reference to a given social and political context, in which legitimate players have asserted their interests and expressed preferences and aims regarding the costs and effects of the policy. The evaluation assessment must not, however, be confused with a political assessment, which belongs to the voters; in a way it constitutes a sui generis type of social construction of the value of a public action, based on both science and common sense.

A procedural approach to the regulation of scientific work

For its part, any social knowledge that is supposedly useful for the action can be suspected of being based on the arbitrary choices and conventions needed to identify a line of enquiry, determine the objectives of research, build information systems and describe observations. The academic structuring of different scientific disciplines imposes a de facto regulation on this normative activity by researchers by instituting scientifically legitimate modes of questioning. But this form of regulation of scientific activity has the disadvantage of making interdisciplinarity difficult (think how difficult it would be to construct a dialogue between the economic, sociological, historical and anthropological approaches to the problem of employment). Evaluation could, in theory, have the effect of replacing this type of regulation by discipline with an institutional regulation of how research topics are established, perhaps giving the multidisciplinary approach to certain complex problems a better chance: "In evaluation, the position of researchers is not the same as it is in the usual research context. It is important that they should be able to find the right balance between the interpretation grids specific to their discipline and the interpretations most likely to be discussed by the evaluation committee within the framework of its own subject area".(17)

Similarly, evaluation shifts and enriches the debate about the validation of knowledge. The rigorous approach cannot hope to produce a single truth about social reality, but "merely" a legitimate, credible and useful representation of this reality. The "constructivist" conception of objectivity developed in certain texts by the Scientific Evaluation Council is particularly significant in this respect: "In the context of experimentation, the word ‘protocol’ refers to the aim of constructing a ‘social experiment’ analogous to a scientific experiment, i.e. by giving themselves the means of rigorously controlling the influence of exogenous factors on the effects of the policy or programme evaluated. By transposing the concept, we can talk of ‘protocol’ in the more general sense of organising the conditions in which the information supporting the evaluation assessment is produced and interpreted. Just as in the case of the experimental protocol, the conditions of validity of the knowledge must be monitored. The objectivity in question is not the same thing as scientific objectivity: it could be defined as the fulfilment of what is required to establish a shared belief in a given social context. The pluralism of opinions and skills brought to bear in the work of synthesising and interpreting the information is an essential condition of the construction of objectivity in the sense it is given here".(18)

The risk nonetheless exists that the requirement of scientific rigour will dissolve in the quest for a consensus between the relevant actors. This is why the quotation above must be supplemented and qualified by this other definition of objectivity, given by the Petit guide de l’évaluation: "objectivity is understood as the fact that the results of the evaluation have not been influenced by the personal preferences or institutional positions of those responsible for the evaluation (consultants or members of the committee), or at least that these preferences have been explained or checked to the extent that it can be supposed that another evaluation answering the same question and using the same methods would lead to the same conclusions".

In other words, the Council established a (theoretical?) distinction between the desirable pluralism of opinions and skills on the one hand and the influence of personal and institutional interests, which must be carefully monitored, on the other. In practice, the Council has frequently warned the evaluation committees against their natural tendency to become places of negotiation between vested interests. Discussion of interpretations must not be manipulated for strategic purposes, but rather regulated twice over, first by the values common to the various stakeholders of a policy, and second by the "standards" of validity specific to the various scientific disciplines used. The aim of the Council’s supervision is to check that the summary report is written in line with this ethical approach to the treatment of conflicts of interpretation, and that the methodological eclecticism of evaluation does not lead to the confusion of styles of argument: "unlike an ordinary discussion, carried out without formal method, evaluation endeavours not to mix the different types of argument, but rather to rank them, weight them and link each one to specific conclusions" (Petit guide de l'évaluation). The Council’s approach leads to a conception of the forms of objectivity that is not only relativistic and pluralistic, but also differentiated and prioritised.

5. Final observations

Is the conception that has just been developed at least partially validated by the analysis of evaluation practices? As noted in the introduction, this review, deliberately theoretical and forward-looking, is not a critical assessment of the modus operandi of the institutional system set up in 1990. The experiment has, moreover, come up against difficulties of various kinds which will not be analysed here, and it has remained, if not marginal, at least too limited for definitive conclusions to be drawn from its results, which in any case vary depending on whether the focus is on the quality of the evaluation reports or on their actual impact. In spite of the apparent weakness of the evaluation reports (confined, depending on the case, to a few technical measures or to a partial clarification and reformulation of the problem and objectives of the policy evaluated), the interest and potential usefulness of the practical conclusions that could have been drawn from them should not be underestimated. Without going further into this point, we can at least put forward an assessment of sorts as a provisional hypothesis: evaluation has failed as a technique for renewing the work of government, but it has proved its capacity to give new meaning to officials’ activity.

This tallies with the diagnosis of two researchers who have analysed the implementation of the first interministerial evaluations and the social effects they produced: "The experiment shows a huge gap between the expectations, depending on whether they are political or administrative in origin, the resources mobilised, and the uses made of evaluation results. While, in theory, evaluation of public policies should be a tool to make the effects of public action democratically intelligible, the apparent consensus on this function barely conceals a difference of opinion (or a misunderstanding) on its limits. 
It is clear that while political actors do not see it as a solution to their problems, administrative players have a vague sense that it may be a means of renewal ... the usefulness of evaluation is more tangible and more profound at the administrative level: it is a means of clarifying problems, a broader feedback on atomised public action, a place where isolated rationalities can be compared, even a means of embarking on an oft-called for, but rarely found, cooperation. Most administrators concerned have a strong impression of learning and development that is generally irreversible".(19)

However, the lack of involvement of the political players (in spite of the number of official declarations in favour of developing evaluation), means that evaluation cannot fully play its role in redefining public policies at government level: "evaluation is not currently a factor in reconstructing public policies, i.e. putting the problem back on the public agenda. Currently, other ways of getting issues on the agenda and developing public policies are the rule: social demands, interest groups and pressure groups, crises, etc."(20)

If evaluation is to act as a lever to reform modes of government, it must first become a factor in the balance of powers; in other words, Parliament must make it a means of shedding light on the democratic debate and a routine instrument to help draft legislation. Politicians will get involved in evaluation once it has become the stuff of a power relationship between the legislature and the executive, or, alternatively, a place for developing and testing their action (it goes without saying that the way in which the media use the results of evaluation may play a role in this development). It is clear that, as things currently stand, political players are not ready to play along with a practice which appears to restrict their liberty to act "politically", i.e. on the basis of the traditional mechanisms of aggregating social demand, expressing values and representing interests. Until that changes, they will almost always reduce evaluation to surveys or to covert forms of control. It can still be hoped, however, that politicians' likely absence will not discourage administrative players from making use of evaluation.

The list of policies evaluated (cf. Annex) illustrates the way in which evaluation has been kept on the margins of mainstream politics and, at the same time, been perceived as an instrument allowing new forms of political/administrative regulation. These are complex policies involving a number of actors and simultaneously pursuing many objectives, and there are even "thematic" evaluations looking at a set of heterogeneous public actions concerning a single problem (e.g. combating poverty, protection of wetlands). In purely decision-making terms, it would have been more effective to give priority to developing programme evaluation within each ministerial department or, more ambitiously, to carry out evaluations directly linked to immediate and important political issues (e.g. immigration, monetary policy). On the positive side, evaluation does appear to have been used as a means of clarifying complex but relatively uncontroversial questions, and of getting round the obstacles within interministerial work. This is linked to the fact that evaluation is developing rapidly at regional level within the framework of partnership policies involving several levels of public decision-making (State-Region Planning Contracts, European programmes); this way of using evaluation is probably one of the more promising, in view of the increasing complexity of public action systems.

Some facts on the French system of evaluation of public policy

The 1990 system (to which this paper directly refers) will be presented first, followed by the new system established by the decree of 18 November 1998.

A- The decree of 22 January 1990

Setting up an interministerial system for evaluating public policy was one of the components of the policy for the renewal of government initiated by the Rocard government at the end of the 1980s, the main thrusts of which were: a revamped industrial relations policy; a policy for the development of responsibilities; a requirement to evaluate public policies; and a policy to improve accessibility and service to the public. The decree of 22 January 1990 set up an Interministerial Evaluation Committee (Comité Interministériel de l'Evaluation - CIME), responsible, in the words of the decree, for "coordinating government initiatives on the evaluation of public policies". As such, it had the right to choose evaluation projects which would be eligible for the National Evaluation Development Fund (Fonds National de Développement de l'Evaluation - FNDE), also set up by the decree. Once the results of the evaluations were known, it deliberated on the action to be taken. A Scientific Evaluation Council (Conseil Scientifique de l'Evaluation - CSE) was also set up, made up of 11 members appointed for six years (not renewable) by the President of the Republic, on the basis of their expertise in the field of evaluation or of economics, social science or administration.

The CSE was given the task of "promoting the development of evaluation methods and defining an ethical approach". More specifically, it was made responsible for "ensuring the quality and objectivity of work eligible for the National Evaluation Development Fund (FNDE)" (a budgetary fund created to finance interministerial evaluations decided on by the Interministerial Evaluation Committee). To this end, it drew up two opinions on the evaluations covered by the interministerial procedure:

Policies evaluated under the 1990 system


B- The new system established by the decree of 18 November 1998

The 1998 reform demonstrates the will to relaunch interministerial evaluation, continuing to follow the main guidelines laid down in 1990: the link between evaluation, democracy and the modernisation of government; the requirements of pluralism, transparency and scientific rigour; the involvement of the administrative authorities in the evaluation of policies which concern them. Features which distinguish the 1998 decree from its predecessor are a determination to increase the involvement of local authorities in the evaluation of national policies and a determination to integrate the methodological work and the management of the evaluations to a greater extent. This was probably done at the cost of weakening the role of methodological authority that the Scientific Council had previously held.

The Interministerial Evaluation Committee was abolished and the Scientific Council was replaced by a National Evaluation Council (CNE) made up of 14 members appointed for three years (renewable once) by decree of the Prime Minister. Because of its composition, the CNE has a more political/administrative and less scientific character than the former CSE. Indeed, the CNE’s job is both political and methodological:

The Commission for the National Plan (Commissariat Général du Plan) also had its powers increased, both in leading the development of evaluation in government and in managing the interministerial system. It also provides the secretariat for the CNE.

The first annual evaluation programme drawn up by the CNE was approved by the Prime Minister in July 1999. It applies to the following policies:

C- The main stages in drawing up an evaluation project

D- Structure of the questioning (categorisation of the questions to be examined)

E- Role of the "evaluation body"

(Extract from the Petit guide de l'évaluation)

"Evaluation is neither pure knowledge nor pure political mechanism. An autonomy zone for evaluation, between science and action, must be recognised and organised. When the complexity of the subject warrants it, this may be done by setting up an "evaluation committee", a group charged by the commissioner with supervising the evaluation. The evaluation committee is more than a steering committee: it must enjoy broad responsibility within the framework of a written mandate from the commissioner. Specifically, it has two types of task:

a) To steer the evaluation work, i.e. to oversee the drawing up of the evaluation project, to translate the evaluation questioning into specifications for studies and research, to commission and monitor the various studies, to hold hearings with resource people, administrators, experts or other "witnesses", or even to make group on-site visits.

b) To integrate the evaluation work, i.e. collate the documentation and the studies, validate their results, interpret these results in the light of the other information collected (from hearings or previously available information), answer the questions posed by the evaluation project, formulate some general conclusions and, if necessary, suggestions, and write the final report.

The committee is generally the place where reasonable conclusions are deduced by deliberation from the analysis and interpretation of studies. It should be seen as an arbiter between the different points of view and not a mediator between the different interests which need to be accommodated.

