
 

 

Policy evaluation within an Inspectorate

 

Modernising Government in Europe

Control and learning in public services: inspection, auditing and evaluation

Stockholm 19-20 February 2004

 

Bernard Perret

 

I am currently in charge of policy evaluation development within the Conseil général des ponts et chaussées (CGPC), which literally means “General Council for bridges and roads”. The CGPC is a very old official body (founded in 1804, two centuries ago), composed of high-level senior civil servants who have held responsibilities in the field of the Equipment, housing and transport ministry (including tourism, urban policies and some aspects of environment policy). In France, such bodies are sometimes colloquially labelled “elephants’ graveyards”. In its present form, the Council results from the merger in 1986 of the old Council proper (an expertise body) and the inspectorate of the ministry of Equipment. This dualism is still partly reflected in the internal structure of the Council. Some members are specialised, but many are not and carry out various missions and activities, including control, expertise, audit, evaluation, staff appraisal, etc. The Council employs approximately 200 senior civil servants (both State engineers and administrators). For all its activities, the Council is placed directly under the minister’s authority.

My profile is a bit atypical in the CGPC: I come from the National Institute for Statistics and Economic Studies (INSEE). I was recruited some years ago specifically to develop evaluation methods within the Council. Before that, from 1990 to 1998, I was general secretary of the Scientific Council for Evaluation. This body, which no longer exists, had been created to develop evaluation at the interministerial level. Apart from that, I have had various other experiences in the field of evaluation (training, etc.).

In France, evaluation culture is rather recent (the institutionalisation of evaluation really began in 1990). Despite real progress since then, the low visibility and weak decisional impact of evaluation are well-established facts. Evaluation does exist, notably at the regional level, but it has great difficulty in being recognised as a specific activity among other forms of expertise on public action. Demand from the public, Parliament and government for informing public decisions rarely results in a call for systematic and impartial evaluation. The word evaluation, of course, is used frequently, but most often in a vague sense. Several laws passed over the last ten years include evaluation clauses, sometimes providing for their own evaluation at the end of an experimental period. But, except in some notable cases, these provisions lead to mere administrative implementation assessments. Evaluation, when it exists, has no assigned function in the decision-making process. A comparison with other countries shows a singular lack of linkage between evaluation and budgetary decisions, and little commitment from Parliament and the Cour des comptes.

 

Policy evaluation within the CGPC

 

I will now focus on the main subject of this presentation: public policy evaluation within an inspectorate body. I will, of course, treat it from my experience of the CGPC. The place of evaluation among the activities of inspectorate bodies is a complex and much-discussed subject: to put it simply, there is an open debate about the degree to which the different types of activity (control, audit, evaluation, expertise…) should be distinguished, give rise to specific processes, and rely upon specific skills. My mission is precisely to implement specific processes for evaluation. But this effort encounters obvious limits in a context where missions are organised mostly in an ad hoc way, often depending on personal profiles and relationships. As a result, the present situation shows a continuum of practices that include evaluative aspects to varying degrees.

Before illustrating this, I have to set out some of my views about evaluation. First, I assume that evaluation may have more practical effects if it is more clearly identified as a specific function and a specific moment in the life of public actions. In other words, it will be more difficult to ignore the results and conclusions of an evaluation if it has been carried out within a formal process and if it has a clear and acknowledged place in the policy cycle. Second, evaluation requires specific methods and skills that can be more readily identified and developed within more formalised evaluation processes.

Given that, the question of the distinctive characteristics of evaluation activity is crucial. To put it simply, I will list some of the characteristics by which evaluation work differs from related activities (control, audit, expertise, etc.):

 

· In terms of objects: evaluation deals with actions (policies, programmes, projects…), not with structures, services or organisations (which are the objects of audit).

· In terms of scope and aims: the ultimate goal of evaluation is to assess the overall impact of an action on the economy and society. In practice, the official objectives of the action must be used as the terms of reference. Of course, many other aspects have to be assessed, including coherence, economy, the implementation process, the relevance of the theory of action, etc.

· In terms of consequences for the persons “evaluated”: the assessment concerns the action itself, considered globally; the aim is not (at least not directly) to assess the performance of individuals or administrative services. In this respect, evaluation differs from control (in its various forms: legality and regularity control, management control).

· In terms of methods: evaluations rely upon various quantitative and qualitative tools, including those used by the social sciences. In practice, however, internal evaluations carried out within the ministry of Equipment use a very limited range of methods (the basic scheme of inquiry rests on semi-structured interviews with actors, stakeholders and beneficiaries of the evaluated policies).

· In terms of process: an evaluation is a collective process. The terms of reference and the questioning are drawn up by a steering group which ideally embodies various points of view (sometimes including those of non-administrative actors). The steering group is also responsible for discussing and validating the evaluation report, including the final conclusions and the reform proposals that follow from them.

 

Policy evaluations are rather solemn and generally one-off operations. About ten evaluations have been carried out according to these principles over the last three years, some at the initiative of the Council itself, some within the framework of a ministerial programme of evaluations approved by the Minister himself. Examples include evaluations of State aid to urban public transport schemes, of the activity of the ministerial services in applying land use regulations (building permits, etc.), and of the plan for modernising the technical assistance activities (public engineering) carried out by the ministerial services. It must be recognised that, in this context, the boundary between evaluation and audit is not clearly drawn. For example, an evaluation of management control practices in the ministry’s local services (directions départementales de l’équipement) could just as well have been termed an audit.

In parallel with these internal evaluations, members of the Council have been assigned to interministerial evaluations, especially of policies under the shared responsibility of the ministry of Equipment and other ministries (for example, road safety policy). At the same time, Council members continue to carry out studies, inspections and expert assessments that encompass evaluative aspects. For example, I was recently asked to take part in an assessment of the implementation of a law passed in 2000 concerning the settlement of nomadic populations. In terms of method, it is not exactly an evaluation (mainly because of time constraints), but the questions addressed are basically evaluative questions.

 

Some questions

 

In conclusion, several questions can be raised:

 

·      Is it useful to clearly separate these different kinds of practice, or is it better to be pragmatic and accept that inspectorates perform mostly “quick and dirty” assessments mixing evaluation, audit and control? To answer this question, it is relevant to take account of the different organisational and political impacts of the different kinds of process. Institutionalised and formalised evaluation may produce deeper effects, by bringing relevant in-depth information into the political and administrative debate and hence producing shared views on complex and disputed policy and/or organisational questions. On the other hand, institutionalised evaluation is more costly and takes more time. Obviously, some questions justify such an investment while others do not.

·      Within bodies such as the CGPC, people play several roles when they inspect services from a position of authority. Is it necessary to establish a more formal distinction between these different roles? In practice, it is not necessarily a problem.

·      The basics of the evaluation process are easily understood and accepted (the need to build an evaluation design, questioning, rigorous methods, collective work…). But the skills and experience which characterise members of the Inspectorate have their limits: for example, the technique of the sociological interview does not come naturally to inspectors; it is very different from the way they carry out inquiries. For this reason, we have experimented with field inquiries conducted by teams mixing inspectors and sociology students.

·      Internal evaluations have obvious limits. It is clear to everyone that some questions cannot be raised. An administration has a basic interest in the continuation and strengthening of its policies. Its role is not to point out the uselessness of some of them.

·     Likewise, the evaluations carried out by the Council and, more generally, administrative evaluations carried out within ministries are rarely “value for money” oriented. However, the new budgetary framework (a law reforming the budgetary procedure was passed in 2001) will necessarily require more value-for-money-oriented audits and evaluations. What role will ministerial inspectorates play in such evaluations? The answer is not clear.

 
