PRELIMINARY FINDINGS
Based on a review of the available documents and the methods discussed therein, here are some initial findings:
METHODS FOUND BY PROGRAM CYCLE STAGE
The stages of the program cycle where data collection methodology is most emphasized, both generally and for qualitative methods in particular, are assessment and monitoring & evaluation. For the other stages, some documents addressed data collection for the appraisal stage, but very few discussed data collection methods for program design or baseline determination. Not one of the organizations studied had documents covering all stages of the program cycle. (see Table 1)
Table 1: Eight Organizations & Methods Found, by Stage of Program Cycle
In most of these documents there was very little differentiation between the methods used for collecting monitoring data and those used for collecting evaluation data. While the purpose and concept of these two stages are of course distinct, the methodology described as appropriate for data collection at each was often lumped together as “M&E”.
It is entirely possible that additional methods are used during the stages of the program cycle for which limited documentation was available, such as program design and appraisal. Such documents do not appear to be commonly posted on the organizations’ websites, and this study found very few that discussed these stages of the program cycle, so it was not possible to determine best practices for data collection at these stages.
However, there was a great deal of variety in the number and type of methodologies discussed in the reviewed documents. Nearly all of the documents that were not specifically focused on describing participatory techniques included key informant interviews and/or focus groups. It is noteworthy how many organizations have documents that not only mention, but actively describe and advocate, the use of various participatory methods, such as mapping, timelines, ranking & scoring, and others. (see Tables 2-4)
Table 2: Methods Found in UN Agency Documents
Table 3: Methods Found in IFRC Documents
Table 4: Methods Found in BINGO Documents
GAP: QUALITATIVE METHODS AT CERTAIN PROGRAM CYCLE STAGES
For nearly every organization studied, qualitative methods were found at both the assessment and evaluation stages, with only one exception for each (none in assessments for UNICEF, and none in evaluations for Save the Children). Indeed, qualitative methods were found almost exclusively in the assessment (seven organizations), evaluation (seven organizations), and monitoring (five organizations) stages, with only a handful for program design (three organizations) and nearly none for either appraisal (two organizations) or baseline (one organization). This is likely due in part to the lack of documentation available for these stages overall, but it is also likely indicative of a larger pattern of methodological gaps at certain program stages.
PRESCRIBED v. UTILIZED METHODS
The toolkits & guidelines documents contained a wealth of information on participatory and other qualitative methods, including, in most cases, wonderfully detailed instructions on how to use these data-gathering techniques, the situations in which they are appropriate, and how to properly train the staff conducting them.
The overwhelming majority of reports of operations or programs, including assessments and monitoring & evaluation reports, listed key informant interviews, focus groups, and document reviews as the only methods used. Some indicated “field visits” or “case studies” that comprised these methods, but few listed participatory or other methods.
Chart 2: Prescription v. Utilization of Methods
As seen in Chart 2, there is a great disparity between the methods prescribed in the reviewed documents and the methods utilized in actual program reports and analyses. The most glaring difference is in the area of participatory techniques, which are by far the most often prescribed methods: a full 17 of the 20 guidelines and toolkits (85%) mention them, yet a mere four of the 13 report documents (31%) indicated that they had been used in practice. The most commonly utilized qualitative data-gathering method was key informant interviews, mentioned in nine of the 13 reports (69%), more than twice as many reports as any other single method studied. The opposite pattern exists with observation methods, which are prescribed in six of the 20 guidelines documents (30%) but reported as used in only one of the 13 reports (8%). This is also the case with focus groups, which are prescribed in eight of the 20 guidance documents (40%) but utilized in only four of the 13 reports (31%).
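As a quick sanity check, the percentages quoted above can be recomputed from the raw counts stated in the text. The following is a minimal sketch, assuming the denominators given in this section (20 guidelines/toolkits documents and 13 report documents); Chart 2 itself is not reproduced here.

```python
# Recompute prescription vs. utilization rates from the counts in the text.
GUIDELINES_TOTAL = 20  # guidelines & toolkits documents reviewed
REPORTS_TOTAL = 13     # program/operation report documents reviewed

# method -> (prescribed in guidelines, utilized in reports);
# None where the text gives no count.
counts = {
    "participatory techniques": (17, 4),
    "key informant interviews": (None, 9),
    "observation": (6, 1),
    "focus groups": (8, 4),
}

def pct(n, total):
    """Percentage rounded to the nearest whole number."""
    return round(100 * n / total)

for method, (prescribed, utilized) in counts.items():
    p = f"{pct(prescribed, GUIDELINES_TOTAL)}%" if prescribed is not None else "n/a"
    u = f"{pct(utilized, REPORTS_TOTAL)}%"
    print(f"{method}: prescribed {p}, utilized {u}")
```

Running this reproduces the figures in the paragraph above (85% vs. 31% for participatory techniques, 30% vs. 8% for observation, and 40% vs. 31% for focus groups).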
These findings indicate a clear disparity between the methods prescribed in guidance documents and those actually being used to generate reports from the field.
GAP: UTILIZATION OF PARTICIPATORY TECHNIQUES
There was a great deal of overlap in the participatory techniques described in guidelines & toolkits across all ten organizations. There is clearly a broad understanding of the validity and utility of participatory data collection methods, in particular for assessment and monitoring & evaluation, but there is little evidence from the available reports that these methods are being used in practice. Most assessment and evaluation reports of actual projects or programs did not mention the use of participatory techniques in their methodology, focusing instead on key informant interviews and focus groups.