Institutionalise and professionalise evaluation
The French case
by Bernard Perret
European Evaluation Society Conference, Rome, 29-31 October 1998
In France, the low visibility and weak decisional impact of evaluation is a well-established fact. Evaluation does exist, notably at the regional level, but it struggles to be recognised as a specific activity among other forms of expertise on public action. Public demand for enlightening public decisions rarely takes the specific form of a demand for evaluation. The word 'evaluation', of course, is often uttered, but most often in a vague sense. Several laws passed over the last ten years include evaluation clauses, sometimes providing for their own evaluation at the end of an experimental period. But, except for some notable cases, these announcements result in mere administrative implementation assessments. Evaluation, when it exists, has no assigned function in the decision-making process. Comparison with other countries shows a singular lack of linkage between evaluation and budgetary decisions, and a lack of commitment on the part of Parliament and the Cour des comptes.
The governmental evaluation procedure instituted in 1990, about to be replaced by a new arrangement, has not achieved the objectives set by its promoters. This failure, measurable by the small number of evaluations completed and their weak decisional impact, is mainly due to insufficient government support (1). More generally, it reflects a lack of interest in evaluation and a misunderstanding of its stakes among political staff and other sectors of the French elite, apart from a limited network of specialists and militants. Most disturbing for evaluation's supporters are the feeble public echo produced by evaluations and the inability of the media to distinguish clearly between a soundly designed evaluation and other productions, such as personal expertise or Cour des comptes audit reports. Given this, it is not surprising that experts working in the field of evaluation are generally not interested in being identified as evaluators.
In any case, the question of what can be done to increase the role and visibility of evaluation and evaluators is once again topical in France, because a new institutional arrangement is about to be implemented and, in parallel, the constitution of a French evaluation society is on the agenda. Institutionalisation and professionalisation are, indeed, two possible answers to evaluation's identity problem. At first sight, they seem to be competing answers. But, as I will argue, it is more promising to interpret this duality in terms of synergy.
The three main arguments of this paper are the following: 1) the identity problem of evaluation is not peculiar to France; but 2) the marginality of policy evaluation within the French State is related to old and firmly established national idiosyncrasies; and 3) the institutional character of evaluation must be taken into account as a key element in defining and displaying its specific functions and professional ethos. In this connection, the centrality of method - including procedural requirements - is a federating principle that could give coherence to an evaluation domain and an evaluation 'milieu'.
A recurrent question
Defining evaluation always absorbs a great deal of energy at evaluators' meetings. The importance given to this question may be puzzling for outsiders, leading evaluation specialists to be regarded as vain doctrinaires. Considering the French situation, it is tempting to contrast the weak social impact of evaluation with the recurrent conceptual debates it gives rise to. However, even if this kind of sarcasm is not senseless, evaluation's identity problem is a real and serious one, and its stakes must not be underestimated.
In a recent publication, the OECD underlines that "Evaluation has not been as successful as audit in establishing itself as a regular and well-institutionalised activity, with a clear concept and acknowledged objectives" (2). Indeed, there is a relation between the conceptual uncertainty that characterises the notion (the OECD speaks of a "multiple and controversial identity"), the slow development of practices, and their weak decisional impact. To support this assertion, two types of argument can be put forward:
- specific technical and methodological knowledge has to be capitalised within a professional network unified by shared intellectual references;
- a clarification of evaluation's identity would contribute to a better understanding of its principles and stakes by public opinion, the media and decision makers. A contrario, it is all the easier to ignore evaluation, or even to instrumentalise or pervert it, since it is a malleable notion to which no one feels compelled to give a precise meaning.
In the following discussion, we shall tackle both the question of evaluation's definition and that of evaluators' professional identity. These two questions, though distinct to some extent, are in fact closely linked.
Evaluation is not the science of results
The most usual definitions of evaluation emphasise its cognitive objectives (the substantive knowledge evaluation seeks to produce (3)). As a common core, these definitions state that evaluation seeks to identify and measure the results (or effects, outcomes, impacts...) of a policy or a programme. Although the attention devoted to results is essential in most evaluations, one cannot be satisfied with such definitions, for at least four reasons:
- the first is that, in most situations, the notion of result is highly ambiguous. There are practically always many different ways to define the results of a policy or a programme, and the same applies to effects, outcomes or impacts, however these words are defined;
- the second is that, in practice, most evaluations do not succeed in measuring the real effects of a public action on society (which, nevertheless, is often their stated objective);
- the third is that the assessment of results is not evaluation's only focus. Evaluation questions may potentially concern all aspects of a policy process: implementation, mechanisms of action, relevance of objectives, etc.;
- the fourth is that several established activities not considered as evaluation may also have the purpose of assessing, at least in certain respects, the results of a public action (control, performance audit, social science research).
Similar remarks apply to definitions that put forward the notion of judgement. Not all evaluations aim to support a value judgement on public action, and other activities (including opinion polls) may claim this role. All the same, the notion of judgement is essential, because it underlines the connection between public evaluation and the more general issue of the proceduralisation of expertise. It is not by chance that evaluation is so developed in the United States, a country where law and legal regulations are central to social life. As I have suggested elsewhere, there is much to be learned from the similarity between an evaluation process and a judicial process (4).
All things considered, the best definitions of evaluation are hybrid ones that simultaneously mention its cognitive tasks, social purposes and normative aspects (judgement), such as the following: "Evaluations...are intended to apply social science theory, methods and techniques to judge responsiveness, effectiveness and accountability in governmental and non-governmental organisations, to stimulate collective learning" (5).
The centrality of method
Although this definition is rather satisfactory, it passes over some specific features of the evaluation process: under what conditions can an evaluative judgement be a fair judgement? Besides the list of typical questions addressed by evaluations, and the list of their potential purposes (assistance to decision making and monitoring, accountability, collective learning...), evaluation handbooks generally mention a set of ethical, procedural and methodological requirements (rigour, objectivity, independence, pluralism, transparency). In this connection, the adjective "systematic" is often used to characterise good evaluation practices (6).
All these terms underline the fact that method, in the extended sense defined below, is a key issue for evaluation. In evaluation, method is not only an instrument for producing sound knowledge; it has a value in itself, insofar as it directly contributes to the practical effects of the evaluation process. The value and social impact of evaluation lie not only in the pure knowledge it produces, but also in the way that knowledge is produced.
It must be clear that what is meant here by method is not limited to the technical aspects of evaluation research. The term encompasses the activities that precede and follow evaluation studies properly speaking: delimitation of the object and field, inventory of the questions to be answered, selection of indicators, identification of information sources, planning of evaluation research; elaboration, formulation and communication of conclusions. In brief, method covers all the elements that determine not only the 'validity domain' of an evaluation (7) but also its responsiveness, credibility and legitimacy.
Evaluation, a co-production
Evaluation can be seen as a branch of social science, as a professional activity, as a function or as a set of institutional procedures. One consequence is that evaluation practice cannot be reduced to a specific professional task. Method, in the sense given here, concerns not only evaluators but also the other actors involved in the co-production of evaluation, especially sponsors or commissioning authorities and members of steering committees. This does not mean there is no place for a professional organisation, but it should be conceived more as a network linking up a heterogeneous 'milieu' than as a classical professional structure. Because evaluations require different kinds of roles and different kinds of competence, the question of professionalisation has to be raised in a comprehensive way, as that of the collective capacity of a milieu comprising different professional groups. In France, the obstacles to evaluation's progress lie more on the demand side than on the supply side. Forming evaluation demand is a priority.
A composite professional world
Presumably everywhere, evaluation communities show a great diversity of professional status (academics, consultants, members of inspectorates, civil servants), academic disciplines (sociology, economics, statistics...), technical skills (quantitative and qualitative methods), sectoral expertise (health, education, environment, scientific research, industry, urban development...) and, sometimes, understandings of evaluation's role. It is remarkable that, in the United States, this diversity has not precluded the constitution of a fairly unified professional community (although the American Evaluation Association does not represent all categories of evaluators). The social visibility of American evaluators relates to the fact that, for reasons that will not be developed here, the appeal to science as an external guarantee of the rationality of public decisions appeared very early as a political necessity (8). This situation, which has no equivalent in Europe, has certainly had a decisive influence on the establishment of social science, creating a market for the scientific support and legitimisation of public decisions.
Referring to the above remarks about the key role of method, it is interesting to note that methodological issues have been, and still are, a federating concern for the evaluators' community. As Thomas D. Cook said in his speech at the first world evaluation meeting in Vancouver, "Interest in method issues is a source of potential unity for a profession that works in various application domains and has been trained in various disciplines" (9). While method has remained central, the meaning of the word has changed over time. The beginnings of evaluation were marked by the positivist requirement to provide empirical evidence of programmes' efficacy by means of experimental designs. The methodological debate was dominated by the issue of the validity of quantitative methods. Today, the enlarged notion of method defined above, including ethical and procedural requirements, prevails when professional standards are evoked (10).
The public character of evaluation
However, can we simply rely upon evaluation's 'rules of the game' if evaluators remain dependent on the arbitrary decisions of sponsors or political powers? At first sight, ethical or 'good practice' standards are not enough to prevent eclectic, or even deliberately dishonest, uses of evaluation. The obvious contradiction between the arbitrary power of sponsors and the need for credible assessments (credibility being a condition of their practical and social efficacy) explains why evaluation is often thought of as an institutional practice. As the OECD remarks: "It is generally acknowledged that a certain degree of institutionalisation must be achieved so that evaluations can play their role in public management. A framework is needed in order to support high-quality evaluations, their use, organisational learning and efficient implementation" (11).
Institutionalising evaluation amounts to acknowledging its 'public' nature. This is all the more necessary when evaluation seeks to assess actions with heavy political stakes, or to assume a mediation function. One can observe a tendency to institutionalise evaluation when it is incorporated into the management of joint policies within a partnership framework (for example, the five-year planning contracts between the State and the regions, or the European Structural Funds). If evaluation is not to become a mere instrument of communication, self-promotion or, worse, political manoeuvring, it must be subjected to regulations supporting its fairness and transparency. In practice, institutionalisation is often limited to instructions settling what should be evaluated, when and by whom, how results will be circulated and, most important, how they will be used in the management and design of policies (for example, through the budgetary process). Another way to institutionalise evaluation is to give an extended mandate to a specialised public body, which can commission evaluations, fix their scope and methods, and formulate conclusions. For example, the French Comité National d'Évaluation de la Recherche, the GAO in the United States, and the National Audit Offices in some European countries have such mandates.
Institutionalisation removes evaluation from the simple encounter of supply and demand (a sponsoring authority mandating an evaluator on its own initiative, with no accountability requirement). It becomes a social process in which different points of view must formally be taken into account. Even when it is implemented by an independent body, institutional evaluation includes participative mechanisms. Another usual consequence of institutionalisation is the publication of methodological guidelines. Is this enough to ensure that good evaluation practices will be implemented? A formal commitment to conduct fair and reliable evaluations may not be effectual. An evaluation management framework makes evaluation, in theory, more transparent, and facilitates a critical assessment of its conclusions. However, experience shows that few people have the time and the abilities required to exercise this right of criticism, even among those directly concerned by the questions the evaluation tackles.
If one conceives of institutionalisation as a means to increase the political value of evaluative knowledge, its logical continuation is to evaluate evaluation itself. Such was the initial perspective of the 1990 decree, which put a public body, the Scientific Council for Evaluation (CSE) (12), in charge of methodological regulation, within a procedural framework supposed to foster governmental demand for evaluation. This is not the only way to develop evaluation assessment and meta-evaluation. The example of the Netherlands suggests that it could be the task of national audit bodies. In the longer term, a market for evaluation assessment is a credible prospect.
Not all evaluations are a matter of formal procedures
Whatever its merits, institutionalisation cannot suffice to establish evaluation as a customary activity with acknowledged methods and well-identified functions. One could even say that formal procedures carry the risk of creating a cleavage between formal and informal evaluations. It is obviously neither realistic nor even desirable to prevent the free flow of evaluation studies with no defined status within public administrations. On the other hand, it is a reasonable hope that institutional procedures will help to model good practices that will serve as references, prompting administrations to institute evaluation in a more formal and transparent way, even where it comes under no regulatory framework. In any case, institutionalisation does not dispense with other efforts to organise and qualify a professional evaluation milieu.
A pattern of political use of expertise that leaves little room for evaluation
To explain the slow growth of evaluation, some French analysts ritually evoke a lack of evaluation skills. In my view, such an assumption has no serious basis. The reading of a fair number of evaluation reports has led me to the conclusion that supply deficiencies are not a decisive blocking factor (13). Technical and methodological skills are available (with some weak points, such as limited experience with experimental designs), but they are not identified and valued as a specific field of competence. Paradoxically, the weakness of evaluation reflects less a deficit of expertise than an overflow of it. Evaluation has all the more difficulty in establishing itself because its domain is already occupied by powerful established activities that embody other conceptions of the use of expertise.
a) The inspectorates and the Cour des comptes are sometimes accused of monopolising evaluation. The truth is that, with some notable exceptions, they are not interested in it. The same could be said of other groups, such as statisticians or the Conseil d'Etat. This affected disinterest is not innocent: saying you do not do evaluation is the most efficient attitude if you want to practise it without accounting for your methods. Through an extensive and fuzzy conception of their tasks, control bodies occupy de facto a substantial piece of a domain which, in other countries, is occupied by evaluation. They have an undeniable practical knowledge of the mechanisms of public action (not unrelated to the fact that a great proportion of their members have had or will have direct political responsibilities), and a tradition of independence of judgement that allows them nearly to monopolise legitimate criticism of the administrative workings of the State.
The role of control bodies is part of a broader French characteristic whereby knowledge is considered an attribute of political authority. The French State, through its schools (les grandes écoles), its engineers, its economists and its corps of inspectors, acts as if one of its missions were to know better than society and to teach it. A reading of Tocqueville shows that this trait goes back to the ancien régime, before the French Revolution: "The central government did not content itself with helping countrymen in their poverty: it pretended to show them how to get rich, to help them and to force them if necessary. To this purpose, it distributed from time to time, through its intendants and their subdélégués, small documents on the agricultural art, promised subsidies, and maintained at great expense plantations whose products were distributed" (14).
The influence of these "State sciences" (Jean Leca) still governs the relationship between knowledge and power, in a sense that, in several respects, is opposed to what could be termed the 'spirit of evaluation'.
- State sciences base their authority on the standing of official bodies rather than on the exhibition of their methods and arguments: "causes are pointed out, balance sheets are established, practices are criticised, reforms are recommended, while everyone is left ignorant of how this knowledge-seeking operation was launched, how it was conducted, according to what tempo and with whom, and on which surveys and argumentative design it is based" (15). One can assume that the way political and administrative elites are selected (through bitter scholastic competition) is a major cause of this failing: it induces a latent confusion between knowledge and State authority. Whatever bears a State label enjoys a presumption of credibility.
- They adopt, with insufficient critical distance, the point of view of the State: "State sciences are sciences that see society from the point of view of the State, as a list of 'problems' addressed to it, sliced up by its bureaucracies and treated by norms that ordinary citizens should respect (...) Concrete society becomes a set of 'questions' - for example, the 'social questions' - or a set of sectors or objects of economic, social, urban, judicial 'policies'".
- Again according to Leca, State sciences are "naturally turned towards the internal functioning (norms, practical means, the interplay of actors) of the State machinery", rather than towards the understanding of the social systems that constitute the application domains of public action.
b) In reaction to this ascendancy of State science, academic social science has often adopted a critical, sometimes denunciatory stance, asserting the autonomy of intellectual power and refusing to be compromised with economic and political powers. In this context, empirical evaluation research has never been considered a prestigious activity by social scientists.
c) Among French peculiarities, one of the most detrimental to evaluation is the tendency to attach high value to the personal dimension of expertise. A major evening newspaper recently credited the Prime Minister with having commissioned 50 reports from personnalités since his appointment, and thus having "put 'savants' (scientists) at the service of decision". In many cases, admittedly, the questions treated in these reports do not call for evaluation. But there are numerous examples of policy assessments carried out by personalities mandated as individuals, or by 'comités de sages' (committees of wise men). When it competes with evaluation, personal expertise always enjoys greater media coverage and political impact. In 1996, an interministerial evaluation of employment policy produced a complete and reliable survey of the existing studies on the impact of social charges on employment. The results of this evaluation, although reasonably clear and explicit, had a much more limited impact than the report written three years later, on the same subject, by Edmond Malinvaud, a renowned economist, at the Government's request. According to Le Monde, this report made it possible to "present as an objective solution what could have been considered a purely liberal recipe". The reports commissioned from personalities are of especially flexible use: as Le Monde puts it, they "empower government to define a balance point, a level of acceptance of reforms by social actors" (16).
d) The only professional group really interested in and committed to the progress of evaluation consists of civil servants involved in administrative reform and personally motivated by the project of modernising the administration and its institutions. They can be found within some para-administrative associations, and within the Club de l'évaluation run by the Ministère de la Fonction Publique et des Réformes Administratives (Ministry of the Civil Service and Administrative Reform). In addition, a growing number of regional actors - both political and administrative, working in regional authorities or decentralised State administrations - are now active promoters of evaluation. Evaluation is a strategic issue for the regions, a ground upon which they can build their legitimacy, and a means of asserting their political existence vis-à-vis the central State and the European Commission.
In the light of the above observations, the relative solemnity of the governmental evaluation procedure (in its new version as in the previous one) may be seen as an alibi for leaving evaluation to its marginality. The contrast between the procedural refinements of the framework, on the one hand, and the weakness of the political demand for evaluation, on the other, might easily arouse irony. There is no guarantee that the new Conseil national de l'évaluation, despite its extended power of initiative, will have enough political weight to foster demand for policy evaluation. The key problem is that the legitimising power of method, which is the decisive rationale of formal evaluation procedures, is in bitter competition with other principles of expertise legitimisation, such as the prestige of State-labelled expertise and the personal authority of individual experts. To be more specific, the social efficacy of methodology-building processes is visible within groups of people directly involved in evaluations, as has been observed within pluralist evaluation schemes at the national level (17). Discussions about evaluation priorities, the questions to be asked and the planning of evaluation research lead partners to confront, usefully, their visions of the policy and of the problems to which it applies. On the other hand, it must be said that a sound methodology increases neither the value of evaluative knowledge in public opinion nor its political impact.
All the same, there are good reasons to think that the traditional French pattern of the political use of expertise will prove less and less suited to the needs for enlightenment, regulation and legitimisation generated by the new context of public action. The increased importance of the regions and of European integration is already creating pressure in favour of the institutionalisation and professionalisation of evaluation (and, more generally, the proceduralisation of expertise), as can be seen in the case of the State-Region contractual plans and the European Structural Funds. An important lesson of the American experience is that evaluation develops and becomes institutionalised when it becomes a stake in the relationships between competing public authorities. Evaluation is likely to play an increasing role in facilitating information sharing, co-operation and accountability within partnership contexts.
Three key points for fostering good practices
From this perspective, the 1990 institutional arrangement has had the merit (despite the pessimistic statement formulated at the opening of this presentation) of stimulating debate and reflection about good practices and their procedural conditions. It has influenced the way evaluation is institutionalised in the regions and has helped to fix a conceptual framework commonly used by evaluators in various contexts. One way or another, this task will be carried on within the new national arrangement soon to be implemented. Entrusting a public body with ethical, procedural and methodological regulation is a defensible option in the French context, given the weakness of the evaluation culture, provided that the norms produced in this way influence a larger set of evaluation practices.
Through its opinions and reports, the Scientific Council for Evaluation (CSE) has elaborated a French restatement of the methodological principles acknowledged by the international community of evaluators. One can get an idea of this methodological framework through three elements that shape the 'procedural identity' of evaluation (cf. annexes): the evaluation project, the role of steering committees ('instances d'évaluation'), and the norms concerning evaluation reports.
a) Evaluation design
Evaluation is not the positive science of results. The cognitive objectives of an evaluation are not defined a priori for a given policy or programme. The method has to be constructed in each case, according to the context and the expectations formulated by the political or administrative commissioning authorities. This construction notably includes the delimitation of the evaluation's domain, the formulation of its objectives, preliminary diagnoses and hypotheses, an analytical agenda (a list of questions to be answered), an inventory of the information to be gathered, and a reasoned choice of studies and inquiries and, if necessary, of more sophisticated research techniques. The path followed in elaborating this framework comes under the enlarged notion of method. Consequently, method covers two different aspects (18): "method as process" (the building of the project), and "method as instrument" (the theories and techniques used to collect, process and interpret information in order to answer the questions formulated in the analytical agenda).
b) Evaluation monitoring
Except when it deals with technical and limited questions, evaluation must be removed from the bilateral relationship between an evaluator and his client. Insofar as the evaluative judgement has to be shared and appropriated, it is worth organising a level of mediation, in the form of a special body constituted for each evaluation (usually called a 'steering committee'), in charge of discussing questions and methods and participating in the interpretation of results. The Scientific Council has modelled, under the notion of instance d'évaluation, the composition, functioning and tasks of a pluralist and responsive steering committee (see annex).
c) Evaluation reporting
The form in which evaluation results are presented should assert the methodical character of evaluation. The plan and the writing techniques of an evaluation report should be conceived with this in view. The report should include, notably, an explicit reference to the initial questions and an argued presentation of methodological choices. In terms of writing technique, sound rhetoric is fully part of evaluation methodology: "An evaluation report should be characterised by an important effort to articulate judgements and facts rigorously. Ideally, it is a matter of referring all normative assertions to explicit reasoning, itself supported by duly documented observations". The different types of argument should be clearly distinguished: "In particular, causality assertions (such public action produces such specific effect) and normative judgements should be clearly identified as such and argued with reference to the studies carried out" (19).
These guidelines, admittedly, were formulated in the special context of complex government policies involving several ministries. Nevertheless, it seems that their basic principles can be applied to most evaluation situations in the public domain. The collective appropriation of such a normative framework could facilitate the constitution of a professional milieu including members of the different professional groups involved in the co-production of evaluation. Sharing a common understanding of evaluation's methodological requirements, such a network would promote evaluation more convincingly and contribute more efficiently to the achievement of its social potential.
(1) The rationale of the 1990 decree was that the Prime Minister himself would put pressure on sectoral administrations to incite them to initiate pluralist evaluations of the policies they are in charge of (via the Comité interministériel de l'évaluation). In practice, evaluations were launched only when all the administrations concerned (or at least the most influential among them) found a political interest in them. Within the new framework, the Conseil national de l'évaluation will set each year a list of evaluations to be carried out. This yearly evaluation programme will be submitted for the agreement of the Prime Minister (see Annex).
(2) "Promouvoir l'utilisation de l'évaluation de programme", OCDE, Service de la gestion publique, October 1997, p. 11.
(3) For example: "To evaluate a policy is to identify and measure its own effects" (Commissariat général du Plan, Évaluer les politiques publiques, La Documentation française, 1985, p. 28).
(4) Bernard Perret, "La construction du jugement", in CSE, L'évaluation en développement 1995.
(5) G-M Hellseterne, quoted by C. Pollitt in CSE, L'évaluation en développement 1997, p. 28.
(6) "We concentrate on systematic, or scientific, evaluations, that is evaluations which: are founded on reproducible reflections and assertions; are built on a systematic procedure seeking to capture the most significant effects; seek to gather information that is as objective as possible; and are presented in a way that asserts their methodological character", Accompagner et mettre à profit les évaluations des mesures étatiques, Ed. Georg S.A., Genève, 1995, p. 55.
(7) Sylvie Trosa, "Le rôle de la méthode dans l'évaluation", Politiques et management public, septembre 1992.
(8) See for example Luc Rouban, "L'évaluation, nouvel avatar de la rationalisation administrative, les limites de l'import-export institutionnel", Revue française d'administration publique n°66, avril-juin 1993.
(9) Thomas D. Cook, "Les leçons de 25 années d'évaluation", L’évaluation en développement 1995, p. 95.
(10) Cf. Elliot Stern, "Trois disciplines de l'évaluation : réflexions sur les enjeux contemporains de l'évaluation", CSE, L'évaluation en développement 1994, La Documentation française 1995.
(11) OCDE, "Promouvoir l'utilisation de l'évaluation de programme", Comité de la Gestion publique, octobre 1997.
(12) Now replaced by the National Evaluation Council, with an enlarged mandate (see Annex).
(13) In 1985, the report quoted above stated: "There is no technical shortage for evaluation development in France" (p. 51).
(14) Alexis de Tocqueville, L'ancien régime et la révolution, Gallimard 1991, p. 108
(15) Jean Leca, "Sur le rôle de la connaissance dans la modernisation de l'Etat", Revue française d'administration publique n°66 avril-juin 1993, p. 190
(16) Le Monde, 15 September 1998, in an article devoted to the "50 reports" commissioned by the Prime Minister since his appointment.
(17) Pierre Lascoumes et Michel Setbon, L’évaluation pluraliste des politiques publiques, GAPP and Commissariat général du Plan 1996, p. 12.
(18) Lascoumes and Setbon, Op. cit.
(19) CSE, Petit guide de l’évaluation, La documentation française, p. 40.
1- The Interministerial evaluation procedure
a) The former procedure (1990 decree)
Interministerial Committee for Evaluation
co-ordinates government initiatives concerning evaluation
chooses evaluations to be financed by the Fund for the development of evaluation
deliberates on consequences to be drawn from evaluations
Scientific council for evaluation
assesses evaluation designs
assesses evaluation reports
carries out various initiatives to help advance evaluation methods
Commissariat général du plan (National planning office)
holds the secretariat of the Interministerial committee
publishes evaluation reports (including CSE quality assessments)
drives the development of evaluation within administrations
b) The new procedure
The Interministerial Evaluation Committee is abolished and the Scientific Council is replaced by a National Evaluation Council (Conseil National de l’Evaluation, CNE)
National Evaluation Council:
is composed of 14 members appointed by decree for 3 or 6 years
establishes a yearly evaluation programme, submitted for the agreement of the Prime Minister, specifying organisational conditions and the basic features of evaluation designs. The evaluation programme may include local evaluations
assesses evaluation reports
may be consulted by any public body about methodological questions
Commissariat général du plan
holds the secretariat of the CNE
publishes evaluation reports (including CNE quality assessments and responses to the evaluation from the administrations concerned)
drives the development of evaluation within administrations
2- The sixteen interministerial evaluations completed since 1990
Renovation of social housing projects
Social aid for young people in difficulty
Reception of underprivileged people by the public service
The efficiency benefits of data processing in the bureaucracy
Personal aid for housing
Schedules adjustment experiment in primary schools
Social and cultural programmes in favour of civil servants
Marshes and wetlands protection
Public aid to industry reconversion in disadvantaged areas
Early retirement measures
Social measures to the benefit of very poor people
Quinquennial Law concerning employment
Energy saving policy
Natural disasters prevention
Mountain areas development and protection
Measures against tobacco and alcohol addiction
3- Some CSE methodological guidelines and norms
a) Designing evaluation, step by step
Clarify the institutional context
Define the scope of the evaluation and identify the policy objectives
Make explicit the expectations of the political addressees
List elements of preliminary diagnosis and working hypotheses
Translate general issues into matters of fact
Finalise a list of precisely formulated questions
Make an inventory of available information
Identify missing information
Think ahead about the argumentative style and the format of the evaluation report
Plan specific data collection and processing
b) Questioning fields
c) The steering committee (‘instance d'évaluation’)
The Committee :
specifies the objectives and the questions to be answered
plans evaluation research
monitors evaluation research
organizes hearings and visits
Interpretation and integration of results
receives research reports
validates their results
interprets these results in the light of other information (hearings)
answers the initial questions
writes the final report