A boy being seen to by a Health Education nurse in Sri Lanka. (Photo courtesy of Dominic Sansoni/The World Bank)
Award Date 
3/2014
Grant Amount 
$500,000
Purpose 
To support the World Bank's Service Delivery Indicators project.
Topic (focus area) 

Published: January 2014

Note: This page was created using content published by Good Ventures and GiveWell, the organizations that created the Open Philanthropy Project, before this website was launched. Uses of “we” and “our” on this page may therefore refer to Good Ventures or GiveWell, but they still represent the work of the Open Philanthropy Project.

This page provides a high-level description of the Service Delivery Indicators Program (SDI), to which Good Ventures granted $500,000 in 2013 based on a joint assessment with GiveWell. The program has an estimated budget of $27 million over the first five years, of which other funders had committed about $10 million as of January 2013.1

Funding and following the project is a learning opportunity for GiveWell and Good Ventures. We will be continuing our analysis of the project; below we report on what we have learned to date. Key remaining questions include:

  • Strength of the indicators: How accurately and consistently will the indicators measure overall service quality? How vulnerable are they to ‘gaming’ by sufficiently motivated governments, service providers, or survey staff?
  • Usage of the indicators: Who will use the indicators? What impact will the existence of the indicators have?
  • Scale up: What factors might prevent SDI from scaling up to 10-15 countries on schedule?

We spoke to Gayle Martin about updates on SDI on July 30, 2014.

About the program

The Service Delivery Indicators Program (SDI) was designed to create and promote the use of objective measures of the quality of health and education services in Africa. According to the World Bank, which helped create SDI, “No set of indicators is available for measuring service delivery and quality at schools and clinics from the citizens’ perspective.”2 By creating metrics for health and education services that are consistent within and among countries, SDI hopes to increase attention, measurability, and accountability for the success of those services, leading to long-term improvements in health and education.3

SDI will assess 10-15 countries in Africa on each of the indicators based on a random sample of health and primary school facilities.4 The countries will be selected to maximize the impact of the program.5 The indicators are designed to measure the availability of important health and education inputs as well as the knowledge and effort of the service providers.6 The intention is to “provide a useful snapshot of actual performance as well as possible constraints that may undermine the delivery of quality services.”7 The hope is that, long term, the existence and usage of the indicators will help improve health and education in the participating countries.8

Program methodology

Data collection

Each of the 10-15 countries will be assessed at regular intervals of about two to three years, staggered so that a similar number of countries are assessed each year.9 Each time a country is assessed, SDI will select a sample of about 200-300 primary health facilities and about 200-300 primary schools to be evaluated.10 The sample size and survey design are chosen “with the aim of producing nationally representative indicators with sufficient precision to identify changes in the indicators of around 5-7 percentage points over time.”11

The indicators (listed below) will be measured by enumerators who travel in teams of two to the selected facilities, directly observe the resources available to teachers and health workers, observe the quality of their services, and administer tests of their relevant knowledge.12 The enumerators will be employed by an organization operating in that country to implement SDI, assisted by the World Bank,13 and will be trained and managed by survey supervisors within the same organization.14 The survey supervisors will also be responsible for quality assurance, including spot checks completed by field coordinators and supervision visits with the enumerators.15 We are not aware of plans to conduct independent audits of the data.16

The following explanations of the indicators are based on the SDI definitions 2013, which incorporate updates to the indicators based on experience in Kenya.

Education indicators

SDI will measure seven indicators in primary schools: three measures of input availability at schools and four measures of the knowledge and effort of teachers.17

School inputs:

  • Minimum teaching equipment, measured as the average of the following in 4th grade classrooms: the fraction of students with pens, the fraction of students with notebooks, and the existence (“1” for existent, “0” for nonexistent) of a functional chalkboard.18
  • Textbooks per student, measured as the average number of math and language books in 4th grade classrooms per student.19
  • School infrastructure, measured as the average of two binary components: classroom light sufficient for reading, and toilets that are accessible, functioning, clean, and private.20
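To illustrate how these equal-weighted input scores combine component shares and binary checks, here is a minimal sketch of the minimum-teaching-equipment indicator. This is not SDI's actual code; the function and parameter names are hypothetical, chosen to mirror the published definition.

```python
def minimum_teaching_equipment(students_with_pens: int,
                               students_with_notebooks: int,
                               total_students: int,
                               functional_chalkboard: bool) -> float:
    """Equal-weighted average of three 4th-grade classroom components,
    per the SDI definition: share of students with pens, share with
    notebooks, and a 0/1 indicator for a functional chalkboard."""
    pen_share = students_with_pens / total_students
    notebook_share = students_with_notebooks / total_students
    chalkboard = 1.0 if functional_chalkboard else 0.0
    return (pen_share + notebook_share + chalkboard) / 3

# Example: 30 of 40 students have pens, 20 have notebooks,
# and the chalkboard works: (0.75 + 0.5 + 1.0) / 3 = 0.75
score = minimum_teaching_equipment(30, 20, 40, True)
```

The equal weighting means a missing chalkboard costs a classroom a full third of the score, the same penalty as every student lacking a pen.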

Teacher efficacy:

  • Absence from school, based on the attendance of a random sample of up to 10 teachers during a later unannounced visit.21
  • Absence from classroom, based on the location of teachers at the school during an unannounced visit.22
  • Share of teachers with minimum knowledge, based on a test measuring math and English knowledge from the curriculum, given to teachers of those subjects.23
  • Time spent teaching in the classroom, based on direct observation, including, for example, interacting with students, grading students’ work, having students work on a specific task, and maintaining discipline, but not working on private matters, doing nothing, or leaving the classroom altogether.24

Health indicators

SDI will measure eight indicators in primary health facilities: four measures of input availability and four measures of the knowledge and effort of health workers.

Health facility inputs:

  • Equipment availability, a binary measure counted as “1” if the health facility has at least one functioning thermometer, stethoscope, sphygmomanometer, and weighing scale, plus a refrigerator and sterilization equipment for larger facilities, and as “0” otherwise.25
  • Drug availability, measured as the share of 26 specific drugs that are in stock and not expired on the day of observation.26
  • Caseload per health provider, measured as the number of outpatient visits over the prior three months divided by the number of days the facility was open and the number of health workers who conduct outpatient consultations.27
  • Health facility infrastructure, a binary measure counted as “1” if the facility has electricity, water, and sanitation, and as “0” otherwise.28
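The caseload indicator reduces to a simple ratio: visits per open day per consulting worker. A minimal sketch of that arithmetic, with hypothetical names (again, not SDI's actual computation):

```python
def caseload_per_provider(outpatient_visits: int,
                          days_open: int,
                          consulting_workers: int) -> float:
    """Average daily outpatient visits per consulting health worker
    over the reporting period (three months in the SDI definition).
    Workers who do not conduct outpatient consultations, such as
    public health nurses, are excluded from the denominator."""
    return outpatient_visits / (days_open * consulting_workers)

# Example: 3,600 recorded visits over 60 open days with 4 consulting
# workers gives 3600 / (60 * 4) = 15 visits per provider per day.
caseload = caseload_per_provider(3600, 60, 4)
```

Note the denominator multiplies days by workers, so the result is a per-day, per-worker rate rather than a total caseload.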

Health worker efficacy:

  • Absence rate, based on the absence of a random sample of up to 10 on-duty health workers.29
  • Diagnostic accuracy, measured as the fraction of five hypothetical patient case scenarios for which prescribers are able to mention the correct diagnosis.30
  • Management of maternal and neonatal complications, measured as the fraction of total relevant treatment actions proposed during a case study of post-partum hemorrhage and a case study of neonatal asphyxia.31
  • Adherence to clinical guidelines, measured as the average fraction of history taking questions and examination questions that were asked by a clinician for each of five case study patients.32

Data dissemination and capacity building

Once the data are collected, SDI will encourage their use by institutions and the public, within each country and internationally, to maximize their impact.33 SDI also plans for the program to build the capacity of the participating countries to conduct surveys and analyze their results.34

Evaluating program success

Strength of the indicators

Measuring these indicators is an ambitious attempt to collect important data from a wide range of environments. With the information we currently have, it is difficult to know:

  • How consistently the indicators will be defined and measured among locations and over time
  • How representative these indicators are of overall service quality, if accurately measured
  • How vulnerable the indicators are to gaming by sufficiently motivated governments, service providers, and survey staff
  • How burdensome the survey process is for service providers and facilities. SDI reports that, as of September 2013, no individual has been interviewed for more than two hours at a time.35

Usage of the indicators

SDI plans to monitor the program’s success using two intermediate indicators:36

  1. “Public debate on education and health service delivery [is] initiated and/or informed.”
  2. “Stakeholders (policymakers, media, NGOs, CSO) reporting use of SDI analysis within 6 months after any of the SDI dissemination events.”

Scale up

SDI’s success at implementing and disseminating the indicators on schedule can be tracked. As of September 2013, SDI had already held a highly visible launch with Kenya’s data in July and planned to launch in four more countries by the end of 2013.37 In one of those countries, Nigeria, SDI will measure the health indicators in only six states and the education indicators in only four states due to the country’s large size.38

We know that the project intends to assess each of 10-15 countries at regular intervals of about two to three years, staggered so that a similar number of countries are assessed each year.39 By the end of the third full year, 2015, we will know how its progression compares to this vision,40 though we also hope to understand in more detail the anticipated obstacles to scale-up.

Updates

In March 2015, we published our first update on SDI’s progress to date.

Sources

Document Source
Gayle Martin, World Bank Senior Economist, call with GiveWell on April 17, 2013 Unpublished
Gayle Martin, World Bank Senior Economist, email with GiveWell on September 9, 2013 Unpublished
SDI definitions 2013 Source
SDI Implementation Update January 2013 Source
SDI Kenya Education Survey Instrument 2012 Unpublished
SDI Kenya Health Survey Instrument 2012 Unpublished
SDI Program Document 2011 Source
  • 1.

    Based on $4 million from The William and Flora Hewlett Foundation plus $5.1 million committed and $0.7 million budgeted from “Non-SDI multi-donor trust fund/Hewlett.” SDI Implementation Update January 2013, Pg 7.

  • 2.

    SDI Program Document 2011, Pg v.

  • 3.

    “The objectives of the Service Delivery Indicators (SDI) Program are to: (i) collect robust evidence on quality of education and health services over time and across countries; (ii) disseminate the data and create a high level of public awareness of the Service Delivery Indicators, nationally and internationally; and (iii) strengthen the capacity of institutions in Africa to conduct surveys and analyze the data generated. The longer term goal is to promote the use of the data by a wide variety of stakeholders toward the ultimate end of improving service delivery for human development.” SDI Program Document 2011, Pg v.

  • 4.

    “Improving efficiency is important for SDI given the scale of the surveys (10-15 countries) repeated every 2-3 years.” SDI Implementation Update January 2013, Pg 2.

    “Random sample” from Gayle Martin, World Bank Senior Economist, email with GiveWell on September 9, 2013

    “In order to minimize costs, the normal approach to sampling will be a multistage, cluster sampling approach, for instance by first selecting districts (or another suitable geographic unit) and then selecting health facilities and schools within the selected districts. Several of the indicators will also involve a third step by selecting teachers / health workers within the facilities. Cluster sampling generally increases the variability of the sample estimates above that of simple random sampling. Hence, the number of surveyed units will increase. These costs must be weighed against the added costs of travel and administration with a simple random sample.” SDI Program Document 2011, Pg 24.

  • 5.

    “The country selection will be made with the aim to maximize the impact of the Program. High impact requires first and foremost that the data are credible, and secondly that there are ‘champions of change’ within the countries that are—or may become—interested in using the data. Furthermore, in order to facilitate healthy competition between countries, each country must be able to compare itself with other countries that are seen as relevant comparisons.

    Some of the country selection criteria that will be considered are:

    • The existence of local institutions capable of implementing the survey with sufficiently high quality.
    • Contextual factors that influence the likely impact of the Program at the country level.
    • The significance of the country as a relevant comparison to other countries (taking into account factors such as country size, level of development, political stability, governance structure, post-conflict situation, etc.).
    • Geographical location (East, West, Central and South) to ensure the pan-African vision of the Program.
    • Main language (first implement the Program in Anglophone and Francophone countries to capitalize on existing tools developed during the pilot phase and subsequently add Lusophone countries).”

    SDI Implementation Update January 2013, Pg 16.

  • 6.

    “The Program focuses on a core set of indicators at two levels: (i) the knowledge and effort of service providers, i.e. what frontline service providers know and do; and (ii) the availability of key inputs at the frontline for effective service provision. The former is a key contribution of the Program, as no comprehensive data collection has been devoted to what is going on inside schools and health facilities.” SDI Program Document 2011, Pg v.

  • 7.

    SDI Program Document 2011, Pg v.

  • 8.

    “The longer term goal is to promote the use of the data by a wide variety of stakeholders toward the ultimate end of improving service delivery for human development.” SDI Program Document 2011, Pg v.

  • 9.

    “Improving efficiency is important for SDI given the scale of the surveys (10-15 countries) repeated every 2-3 years.” SDI Implementation Update January 2013, Pg 2.

    “Assuming that a new survey is produced in each country every third year, there will be three survey waves, each wave encompassing one third of the countries in the Program. As a point of departure, we suggest five countries in each wave, i.e., 15 countries in total. The number of countries can be expanded as additional resources become available.” SDI Program Document 2011, Pg 17.

  • 10.

    “We will be surveying clinics and schools per country (about 200-300 of each facility type).” SDI Program Document 2011, Pg 13.

  • 11.

    “It is suggested that the surveys are designed with the aim of producing nationally representative indicators with sufficient precision to identify changes in the indicators of around 5-7 percentage points over time. At the same time, separate results will be reported for urban/rural areas and government/non-government providers where appropriate. The sub- national aggregates will have lower levels of precision…
    Based on the results from the pilot, we have tentatively estimated that a nationally representative sample with cluster sampling would require at least 250-300 health facilities and the same number of schools in each country (Assuming 80% power, 10% level of confidence, and a survey design effect of 2). (The pilot used samples of 150-175 units in each sector). Adjustments to the sample size should be routinely considered as new data are collected and analyzed.” SDI Program Document 2011, Pg 24.

  • 12.

    “Enumerators will work together in teams of two to collect the data, spending two days at each health facility or school.” SDI Program Document 2011, Pg 37.

    “The data will be gathered from direct observation of provider behavior, from various test of provider knowledge and skills, and from observation of the availability of key inputs required to enable provision of quality services (infrastructure, equipment, supplies, etc.).” SDI Program Document 2011, Pg 17.

  • 13.

    “Data will be collected by trained enumerators, one team for each sector (education and health supervised by a survey leader, again under the supervision of the country leader within the implementing organization in each country.” SDI Program Document 2011, Pg 17.

    “The implementing organization (one per country) will be responsible for country-level implementation of the Program. The criteria for selecting a Country Implementing Organization will include: technical capacity and proven track record of collecting and analyzing survey data; demonstrated ability to work constructively with the government and other partners; and other criteria as determined by the Program Management Team in consultation with the Steering Committee.” SDI Program Document 2011, Pg 21.

    “The Program will be implemented at the country-level by Country Implementing Organizations, in collaboration with a World Bank-based Program Management Team.”SDI Program Document 2011, Pg 6.

  • 14.

    “The survey supervisors will leading the implementation of the survey in each sector, with responsibility for quality assurance of the data, training and coordination of enumerators at headquarters and in the field and outreach and dissemination activities with assistance from the country leader.” SDI Program Document 2011, Pg 37.

  • 15.

    “The survey supervisors will leading the implementation of the survey in each sector, with responsibility for quality assurance of the data, training and coordination of enumerators at headquarters and in the field and outreach and dissemination activities with assistance from the country leader.” SDI Program Document 2011, Pg 37.

    Gayle Martin, World Bank Senior Economist, email with GiveWell on September 9, 2013

  • 16.

    SDI has told us that “DEP” will also conduct verification, but we do not have further details. Gayle Martin, World Bank Senior Economist, email with GiveWell on September 9, 2013

  • 17.

    “We focus on teachers teaching younger cohorts because cognitive ability is most malleable at younger ages (see Cunha and Heckman, 2007), therefore we would expect that teacher effort has the highest marginal effect at early age.” SDI Program Document 2011, Pg 29.

  • 18.

    “Minimum teaching resources is assigned 0‐1 capturing availability of (i) whether a grade 4 classroom has a functioning blackboard and chalk, (ii) the share of students with pens, and (iii) the share of students with notebooks, giving equal weight to each of the three components.

    Functioning blackboard and chalk: The enumerator assesses if there was a functioning blackboard in the classroom, measured as whether a text written on the blackboard could be read at the front and back of the classroom, and whether there was chalk available to write on the blackboard.

    Pencils and notebooks: The enumerator counts the number of students with pencils and notebooks, respectively, and by dividing each count by the number of students in the classroom one can then estimate the share of students with pencils and the share of students with notebooks.” SDI definitions 2013, Pg 1.

  • 19.

    “The indicator is measured as the number of mathematics and language books that students use in a grade 4 classroom divided by the number of students present in the classroom. The data will be collected as part of the classroom observation schedule.” SDI definitions 2013, Pg 1.

  • 20.

    “Minimum infrastructure resources is assigned 0‐1 capturing availability of: (i) functioning toilets operationalized as being clean, private, and accessible; and (ii) sufficient light to read the blackboard from the back of the classroom.

    Functioning toilets: Whether the toilets were functioning was verified by the enumerators as being accessible, clean and private (enclosed and with gender separation).

    Electricity: Functional availability of electricity is assessed by checking whether the light in the classroom works gives minimum light quality. The enumerator places a printout on the board and checks (assisted by a mobile light meter) whether it was possible to read the printout from the back of the classroom given the slight source.” SDI definitions 2013, Pg 2.

  • 21.

    “During the first announced visit, a maximum of ten teachers are randomly selected from the list of all teachers who are on the school roster. The whereabouts of these ten teachers are then verified in the second, unannounced, visit. Teachers found anywhere on the school premises are marked as present.” SDI definitions 2013, Pg 1.

  • 22.

    “The indicator is constructed in the same way as School Absence Rate indicator, with the exception that the numerator now is the number of teachers who are both at school and in the classroom. The denominator is the number of teachers who are present at the school. A small number of teachers are found teaching outside, and these are marked as present for the purposes of the indicator.” SDI definitions 2013, Pg 1.

  • 23.

    “This indicator measures teacher’s knowledge and is based mathematics and language tests covering the primary curriculum administered at the school level to all teachers currently teaching maths and english in grade 4, those who taught english and maths at grade 3 in the previous academic year, and up to 3 randomly selected upper primary maths and english teachers.” SDI definitions 2013, Pg 1.

  • 24.

    “This indicators combines data from the Staff Roster Module (used to measure absence rate), the Classroom Observation Module, and reported teaching hours. The teaching time is adjusted for the time teachers are absent from the classroom, on average, and for the time the teacher remains in classrooms based on classroom observations recorded every minute in a teaching lesson.

    A distinction is made between teaching and non-teaching activities based on classroom observation done inside the classroom. Teaching is defined very broadly, including actively interacting with students, correcting or grading student’s work, asking questions, testing, using the blackboard or having students working on a specific task, drilling or memorization, and maintaining discipline in class. Non-teaching activities is defined as work that is not related to teaching, including working on private matters, doing nothing and thus leaving students not paying attention, or leaving the classroom altogether.” SDI definitions 2013, Pg 1.

  • 25.

    “Share of facilities with thermometer, stethoscope and weighing scale refrigerator and sterilization equipment.
    Medical Equipment aggregate: Assign score of one if enumerator confirms the facility has one or more functioning of each of the following: thermometers, stethoscopes, sphygmonometers and a weighing scale (adult or child or infant weighing scale) as defined below. Health centers and first level hospitals are expected to include two additional pieces of equipment: a refrigerator and sterilization device/equipment.
    Thermometer: Assign score of one if facility reports and enumerator observes facility has one or more functioning thermometers.
    Stethoscope: Assign score of one if facility reports and enumerator confirms facility has one or more functioning stethoscopes.
    Sphygmonometer: Assign score of one if facility reports and enumerator confirms facility has one or more functioning sphygmonometers.
    Weighing Scale: Assign score of one if facility reports and enumerator confirms facility has one or more functioning Adult, or Child or Infant weighing scale.
    Refrigerator: Assign score of one if facility reports and enumerator confirms facility has one or more functioning refrigerator.
    Sterilization equipment: Assign score of one if facility reports and enumerator confirms facility has one or more functioning Sterilization device/equipment.” SDI definitions 2013, Pg 3.

  • 26.

    “Share of basic drugs which at the time of the survey were available at the facility health facilities.
    Priority medicines for mothers: Assign score of one if facility reports and enumerator confirms/observes the facility has the drug available and non-expired on the day of visit for the following medicines: Oxytocin (injectable), misoprostol (cap/tab), sodium chloride (saline solution) (injectable solution), azithromycin (cap/tab or oral liquid), calcium gluconate (injectable), cefixime (cap/tab), magnesium sulfate (injectable), benzathinebenzylpenicillin powder (for injection), ampicillin powder (for injection), betamethasone or dexamethasone (injectable), gentamicin (injectable) nifedipine (cap/tab), metronidazole (injectable), medroxyprogesterone acetate (Depo-Provera) (injectable), iron supplements (cap/tab) and folic acid supplements (cap/tab).
    Priority medicines for children: Assign score of one if facility reports and enumerator confirms after observing that the facility has the drug available and non-expired on the day of visit for the following medicines: Amoxicillin (syrup/suspension), oral rehydration salts (ORS sachets), zinc (tablets), ceftriaxone (powder for injection), artemisinin combination therapy (ACT), artusunate (rectal or injectable), benzylpenicillin (powder for injection), vitamin A (capsules)
    We take out of analysis of the child tracer medicines two medicines (Gentamicin and ampicillin powder) that are included in the mother and in the child tracer medicine list to avoid double counting.
    The aggregate is adjusted by facility type to accommodate the fact that not all drugs (injectables) are expected to be at the lowest level facility, dispensaries./health posts where health workers are not expected to offer injections.” SDI definitions 2013, Pg 3.

  • 27.

    “The number of outpatient visits recorded in outpatient records in the three months prior to the survey, divided by the number of days the facility was open during the three month period and the number of health workers who conduct patient consultations (i.e. excluding cadre-types such as public health nurses and out-reach workers).” SDI definitions 2013, Pg 2.

  • 28.

    “Share of facilities with electricity, clean water and improved sanitation.
    Infrastructure aggregate: Assign score of one if facility reports and enumerator confirms facility has electricity and water and sanitation as defined.
    Electricity: Assign score of one if facility reports having the electric power grid, a fuel operated generator, a battery operated generator or a solar powered system as their main source of electricity.
    Water: Assign score of one if facility reports their main source of water is piped into the facility, piped onto facility grounds or comes from a public tap/standpipe, tubewell/borehole, a protected dug well, a protected spring, bottled water or a tanker truck.
    Sanitation: Assign score of one if facility reports and enumerator confirms facility has one or more functioning flush toilets or VIP latrines, or covered pit latrine (with slab).” SDI definitions 2013, Pg 3.

  • 29.

    “Number of health workers that are not off duty who are absent from the facility on an unannounced visit as a share of ten randomly sampled workers. Health workers doing fieldwork (mainly community and public health workers) were counted as present. The absence indicator was not estimated for hospitals because of the complex arrangements of off duty, interdepartmental shifts etc.” SDI definitions 2013, Pg 2.

  • 30.

    “For each of the following five case study patients: (i) malaria with anemia; (ii) acute diarrhea with severe dehydration; (iii) pneumonia; (iv) pulmonary tuberculosis; (v) diabetes mellitis.
    For each case study patient, assign a score of one as correct diagnosis for each case study patient if case is mentioned as diagnosis. Sum the total number of correct diagnoses identified. Divide by the total number of case study patients. Where multiple diagnoses were provided by the clinician, the diagnosis is coded as correct as long as it is mentioned, irrespective of what other alternative diagnoses were given.” SDI definitions 2013, Pg 2.

  • 31.

    “For each of the following two case study patients: (i) post-partum hemorrhage; and (ii) neonatal asphyxia. Assign a score of one if a relevant action is proposed. The number of relevant treatment actions proposed by the clinician during consultation is expressed as a percentage of the total number of relevant treatment actions included in the questionnaire.” SDI definitions 2013, Pg 2.

  • 32.

    “Unweighted average of the share of relevant history taking questions, the share of relevant examinations performed.
    For each of the following five case study patients: (i) malaria with anemia; (ii) acute diarrhea with severe dehydration; (iii) pneumonia; (iv) pulmonary tuberculosis; and (v) diabetes mellitis.
    History Taking Questions: Assign a score of one if a relevant history raking question is asked. The number of relevant history taking questions asked by the clinician during consultation is expressed as a percentage of the number of important history questions to be asked based of the guidelines for management of the case (IMIC and Kenya National guidelines).

    Relevant Examination Questions: Assign a score of one if a relevant examination question is asked. The number of relevant examination taking questions asked by the clinician during consultation is expressed as a percentage of the total number of relevant examination questions included in the questionnaire.
    For each case study patient: Unweighted average of the: relevant history questions asked, and the percentage of physical examination questions asked. The history and examination questions considered are based on the Kenya National Clinical Guidelines and the guidelines for Integrated Management of Childhood Illnesses (IMCI).” SDI definitions 2013, Pg 2.

  • 33.

    “To reach all target groups, the dissemination strategy at country level will include key types of outputs:

    • Each national survey will have a complete Program report including methodology, results and discussion of findings. These reports are the basis for all other products developed.
    • Media reports and press releases and policy briefs targeting specific audiences.
    • Presentations of indicator findings at key stakeholder forums, including high-level government meetings, relevant ministries, specific advocacy groups, research conferences, etc. These presentations will focus on fostering dialogue around the findings and their implications. Presentations and discussions of the results should also be held in regions or districts where data have been collected.

    For the national dissemination, close cooperation will be sought with international agencies and organizations that effectively can champion the dissemination efforts. These organizations need to be able to reach out effectively to a wide range of the identified target groups…
    The main dissemination output at the international level will be:

    • A website where country results are presented and visualized in a user-friendly format. The website can also be expanded to an interactive forum where service providers and beneficiaries can voice their own experiences with service quality. The website will have links to the World Bank Microdata website,11 and where all data are available for further analysis.
    • An annual Service Delivery Indicators report. This status report will progressively have data for all countries as data collection expands. Not all countries will have a survey in a given year. Therefore, the annual report will present the most current data of the respective service delivery core indicators across all countries involved in the Program.
    • A detailed Communication Strategy will accompany each annual report, including presentations at global and regional policy forums, media reports and press releases and policy briefs targeting specific audiences.”

    SDI Program Document 2011, Pg 14.

  • 34.

    “To ensure that capacity building is realized, the terms of reference of the country advisers will reflect specific skills and tasks for this role. In addition, the performance indicators of the SDI Program require a national expert to be the co-principal investigator of the surveys. National partners will also be exposed to the survey process and outcomes of other participating countries for learning purposes.” SDI Program Document 2011, Pg 15.

  • 35.

    Gayle Martin, World Bank Senior Economist, email with GiveWell on September 9, 2013

  • 36.

    SDI Program Document 2011, Pg 11.

  • 37.

    Gayle Martin, World Bank Senior Economist, call with GiveWell on April 17, 2013
    Gayle Martin, World Bank Senior Economist, email with GiveWell on September 9, 2013

    See “Annex II. Planned Timeline of Events in 2013/2014” section of SDI Implementation Update January 2013, Pg 8, for expected timing of launches by country.

  • 38.

    Gayle Martin, World Bank Senior Economist, call with GiveWell on April 17, 2013.

  • 39.

    “Improving efficiency is important for SDI given the scale of the surveys (10-15 countries) repeated every 2-3 years.” SDI Implementation Update January 2013, Pg 2.

    “Assuming that a new survey is produced in each country every third year, there will be three survey waves, each wave encompassing one third of the countries in the Program. As a point of departure, we suggest five countries in each wave, i.e., 15 countries in total. The number of countries can be expanded as additional resources become available.” SDI Program Document 2011, Pg 17.

  • 40.

    “During the two first years of the Program, while the number of countries has not yet reached 15, survey implementation will occupy a larger share of the Country Advisors’ time given learning by doing.” SDI Program Document 2011, Pg 36.

    Gayle Martin, World Bank Senior Economist, email with GiveWell on September 9, 2013