Identifier: 06USUNNEWYORK1264
Created: 2006-06-23 20:21:00
Classification: UNCLASSIFIED
Origin: USUN New York
Cable title: UN OVERSIGHT: EVALUATIONS NEED IMPROVEMENT

Tags:  AORC KUNR UNGA 
VZCZCXYZ0002
PP RUEHWEB

DE RUCNDT #1264/01 1742021
ZNR UUUUU ZZH
P 232021Z JUN 06
FM USMISSION USUN NEW YORK
TO RUEHC/SECSTATE WASHDC PRIORITY 9420
INFO RUEHXX/GENEVA IO MISSIONS COLLECTIVE PRIORITY
UNCLAS USUN NEW YORK 001264 

SIPDIS

E.O. 12958: N/A
TAGS: AORC KUNR UNGA
SUBJECT: UN OVERSIGHT: EVALUATIONS NEED IMPROVEMENT

1. ACTION REQUEST: USUN seeks Department guidance on the OIOS
report summarized in this cable in time for the 46th session
of the Committee for Program and Coordination (CPC), which
begins August 14, 2006.


2. SUMMARY: In its biennial report for 2004-2005 on
strengthening the role of evaluation and the application of
evaluation findings to program design, delivery and policy
directives (A/61/83), the Office of Internal Oversight
Services (OIOS) reaches two conclusions: (a) at the program
level, the Secretariat presents a mixed picture in terms of
evaluation practice; and (b) the Secretariat's central
evaluation capacity is inadequate. To address these
conclusions, OIOS offers two recommendations: a
Secretariat-wide needs assessment at the program level, and
issue-specific guidelines to clarify the rules and
regulations governing evaluations. The report reviews both
internal program self-evaluation and central evaluation
practice and capacity in the Secretariat. It finds that the
methodological design and conduct of evaluations need
strengthening and that the grounding of evaluation
conclusions in cited evidence also needs improvement. At the
program level, OIOS identifies insufficient clarity and
uniformity in defining and conducting self-evaluations. At
the central level, OIOS cites weak staff capacity, which
inhibits the evaluation process and prevents the Secretariat
from fully meeting its mandate: to produce objective
evaluations of the relevance, efficiency and effectiveness
of specific programs and activities, and assessments of
their impact, for use by the Secretariat and Member States.
END SUMMARY.


Evaluation Quality
--------------


3. Over the course of the 2004-2005 biennium, a total of 214
evaluations were reported to have been conducted across the
Secretariat. This figure excludes the mandatory
self-assessments that program managers are required to
conduct. The number has increased from the previous
biennium, in which 134 evaluations were reported, although
inconsistent reporting makes trends difficult to identify.
OIOS noted that "inconsistencies in the interpretation of
what constitutes evaluation activity and in the reporting of
evaluations continue to hamper the collection, accuracy and
analysis of data on evaluation activity. Therefore, the data
collected on types of evaluation must be interpreted with
some caution and the OIOS considers it crucial that
concerted efforts be made to familiarize staff with the
terminology in order to ensure the consistent use of
evaluation terms throughout the Secretariat and the
availability of more precise information on the types of
evaluations in the future." (Paragraph 7)


4. For the 2006-2007 biennium, program managers are planning
239 discretionary self-evaluations and 13 discretionary
external evaluations. OIOS commends the five regional
commissions for their intent to use evaluation as a
management tool. However, given the "inconsistencies in the
interpretation of what constitutes evaluation activity"
(Paragraph 7), translation of these plans into action is not
assured.


5. Overall, a meta-evaluation conducted by an external
consultant, which assessed 23 evaluation reports against
five indicators, ranked more than half the sample reports
"very high" or "high," yet these same reports did not
receive high ratings for "soundness of methodology"
(Paragraph 13). Three quarters of the sample received a
rating of only "average" for "usability/potential impact"
(Paragraph 14). The OIOS report also describes the
incomplete nature of some evaluations, noting that 6 of the
23 evaluation reports lacked an executive summary, an
observation that underscores OIOS concerns regarding
evaluation quality.

Evaluation Capacity in the Secretariat
and Central Evaluation
--------------


6. The OIOS report observes that the effectiveness of
self-evaluations at the program level is compromised by the
lack of clarity in defining evaluation responsibilities, as
well as by the small number of entities within the
Secretariat dedicated to evaluation. Only 5 of 24 programs
have units responsible solely for self-evaluation; in the
rest, evaluation units carry additional responsibilities,
which detracts from effectiveness. On the personnel side,
the "limited number of evaluation staff" indicates the low
priority given to evaluation in the Secretariat (Paragraph
16). OIOS noted that no director-level staff anywhere in the
Secretariat are assigned full-time to program evaluation.



7. OIOS also voiced concern about resource allocations for
program-level self-evaluation. The wide variation in
evaluation practice results from the absence of clear
guidelines on how to assess program evaluation costs.
Several reviews of options for strengthening program
self-evaluation and attempts to establish broad guidelines
have been offered, most recently a JIU proposal for minimum
budget and staffing standards (Paragraph 18). OIOS also
points out that while the Secretary-General's report on the
budget for the Rwanda war crimes tribunal (A/58/269)
reiterates the importance of identifying evaluation
resources in the budget, further guidance is needed to
distinguish between the staff and costs required for
evaluation and those required for other oversight
activities.


8. In the same vein, OIOS notes that Article VII of the
PPBME (the Regulations and Rules Governing Program Planning,
the Program Aspects of the Budget, the Monitoring of
Implementation and the Methods of Evaluation) lacks
precision on how program self-evaluations are to be managed
and budgeted. OIOS suggests that guidance on this matter be
clarified and updated by late 2006/early 2007 so that it can
be of use to program managers as they formulate their
budgets for the 2008-2009 biennium (Paragraph 21).


9. OIOS concludes that the Secretariat's central evaluation
capacity is inadequate and unable to fully meet its mandate.
Currently, the evaluation section of OIOS is able to produce
one in-depth evaluation, one thematic report and two
triennial reviews per year; at that pace, each Secretariat
program would receive an in-depth evaluation, at best, once
every 27 years, a situation OIOS deems inadequate. OIOS
presents two options for strengthening the current
evaluation program:

- More regular, external evaluations of short duration at
the program/subprogram level; and,

- Increased frequency of in-depth evaluations and triennial
reviews.

OIOS ACTIONS (3 Total):
--------------


10. A Secretariat-wide evaluation needs assessment exercise
will be conducted to identify specific evaluation needs,
functions, resources and capacity required at the program and
subprogram level.


11. Current PPBME rules and regulations pertaining to
evaluation will be translated into clear and practical
guidelines.


12. OIOS will incorporate the forthcoming findings of the
independent external evaluation of the UN auditing,
oversight and governance system mandated by the Summit
Outcome, together with evaluations of the performance and
outcomes of Secretariat programs, into the program budget
for 2008-2009.

Future Evaluations
--------------


13. For the August 14 CPC session, OIOS has performed an
in-depth evaluation of subprogram 1 of the political affairs
program. For the next CPC session, in 2007, OIOS will
complete in-depth evaluations of all remaining subprograms
in the political affairs program. Five separate evaluation
reports are expected:

- Subprogram 2: Electoral assistance, implemented by the
Electoral Assistance Division

- Subprogram 3: Security Council Affairs, implemented by
the Security Council Affairs Division

- Subprograms 4 and 5: Decolonization and the question of
Palestine, implemented by the Decolonization Unit and the
Division for Palestinian Rights, respectively

- Special Political Missions: administered and supported
by the Department of Political Affairs

- Overall assessment of the Department of Political
Affairs, including a synthesis of findings from the
subprogram evaluations, and assessment of the remaining
Executive Direction and Management, Policy Planning, and
Executive Office components.


14. Lastly, the OIOS report enumerates programs that have
never been evaluated and ranks these offices for selection
by the CPC for evaluations in 2008 and 2009. They include:
ESCAP, ECE, ECA, NEPAD, the UN Offices in Vienna, Geneva and
Nairobi, Peaceful Uses of Outer Space, UNCTAD, OHRM, ECLAC,
ESCWA, OPPBA and OCSS.


15. COMMENT: First, an important statistical note: the
methodology used in the OIOS report does not support broad
conclusions about program and central evaluations. The
report acknowledges this point in Part C, Paragraph 10,
noting that "the small size of the sample and its non-random
nature are limitations of the meta-evaluation, and therefore
the findings cannot be projected to the universe of all
evaluations reports produced by the Secretariat." Some
caution is therefore advised when citing this report. As a
common rule of thumb, a study needs a random sample of
roughly 30 or more observations before standard statistical
techniques can be used to generalize its findings; this
study used only 23 reports, and they were not randomly
selected (Paragraph 10).
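
To put the sampling caveat in concrete terms, a
back-of-the-envelope sketch (USUN's illustration, not a
figure from the OIOS report), assuming simple random
sampling: the standard error of an observed proportion is

\[ \mathrm{SE}(\hat{p}) = \sqrt{\frac{\hat{p}(1-\hat{p})}{n}} \]

With n = 23 and \hat{p} = 0.5 (roughly the share of reports
rated "high" or "very high"), the standard error is about
0.10, so a conventional 95 percent confidence interval spans
roughly plus or minus 20 percentage points. And because the
sample was not drawn at random, even that wide interval is a
best case: non-random selection introduces bias that a
larger sample alone would not remove.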


16. Second, a note on our concerns: the report is largely
descriptive, stating a range of findings that reflect
concerns about efficiency and effectiveness. Offices that
fall short of their mandates should not overshadow those
that fulfill them, and it is important to distinguish the
two groups (and those in between). Although the report does
not indicate where each office falls on this spectrum, it
provides a strong talking point for the USG. It would also
be helpful for the U.S. to seek additional information about
which offices are exemplary: it is as important to commend
offices that fulfill their duties as it is to note those
that are not up to par. END COMMENT.

BOLTON