GUIDE TO THE HEALTH FACILITY DATA QUALITY REPORT CARD



Introduction

No health data are perfect and there is no single definition of data quality. No health data from any source can be considered perfect: all data are subject to a number of limitations related to quality, such as missing values, bias, measurement error, and human errors in data entry and computation. Data quality assessment is needed to understand how much confidence can be placed in the health data presented. In particular, it is important to know the reliability of national coverage estimates and other estimates derived from HIS data that are generated for health sector reviews, as these often form the basis for annual monitoring.

However, there is no one definition of data quality that is used consistently across institutions. Data quality is a multi-dimensional construct, and overall data quality is therefore a function of each of its dimensions. If data quality is a latent construct defined as a function of its different dimensions, then by extension the assessment of data quality entails an assessment of each of those dimensions. Different tools assess different dimensions of data quality.1 Some tools focus on assessing the status of the system that is producing the data, based on questions about the presence of, for example, trained staff, forms and electronic media, procedures, adherence to definitions, and the ways in which data quality is assessed. Other tools focus on assessing the quality of the actual data generated by the system. The first step is a desk review of data quality. A desk review of data will not necessarily identify the underlying causes of inaccurate data, but it will identify problems of data completeness, accuracy and external consistency.

Health facility data are a critical input into assessing national progress and performance on an annual basis, and they provide the basis for subnational/district performance assessment. WHO proposes the Health Facility Data Quality Report Card (DQRC), a methodology that examines certain dimensions of data quality through a desk review of available data and a data verification component. The aim of the DQRC is to ensure systematic assessment of the completeness and the internal and external consistency of the reported data or computed statistics, and to determine whether there are any data quality problems that need to be addressed.

The desk review component of the DQRC is conducted through the WHO Data Quality Assessment (DQA) Tool, an Excel-based tool that reviews the quality of data generated by a health facility-based information system for four key indicators: antenatal care first visit (ANC1), health facility deliveries, diphtheria-pertussis-tetanus third dose (DTP3) and outpatient department (OPD) visits. Through analysis of these four standard tracer indicators, the tool quantifies problems of data completeness, accuracy and external consistency, and thus provides valuable information on the fitness-for-purpose of health facility data to support planning and annual monitoring.

Data verification refers to the assessment of reporting correctness, that is, comparing health facility source documents to HIS reported data to determine the proportion of the reported numbers that can be verified from the source documents. It checks whether the information contained in the source documents has been transmitted correctly to the next higher level of reporting, for each level of reporting, from the health facility level to the national level.
It is recommended to implement data verification together with the annual health facility survey (Service Availability and Readiness Assessment, SARA) on a representative sample of health facilities, to obtain a national level estimate of the verification factor for the health information system.

The DQRC examines four dimensions of data quality: 1) completeness of reporting; 2) internal consistency of reported data; 3) external consistency of population data; and 4) external consistency of coverage rates. There are two levels of assessment for the indicators in the DQRC: 1) an assessment of each indicator at the national level; and 2) the performance of sub-national units, mostly districts or provinces/regions, on the selected indicator. The indicator definitions change depending on whether they are evaluated nationally or sub-nationally.

The DQRC has been designed primarily to be examined at the national level. However, it is possible that certain provinces/states/regions in a country might want to look only at their own performance; for example, a province might want to examine data quality in its districts. This is possible to do in the DQRC: any reference to the word national can be replaced with a sub-national unit.2

For each of the four dimensions a small set of indicators is used. The indicators can, with adaptations, be applied to most indicators that can be derived from health facility data. Data quality problems are usually systemic and are not specific to any one program area. For instance, there may be a group of districts or facilities that do not report at all or that report poorly, or the denominators of the coverage indicators, based on population projections, may be systematically off. Even if it is not possible to do an exhaustive data quality analysis of all the key indicators in the national health strategy, conducting a data quality analysis as described below can indicate potential problems in multiple program areas. The focus in this manual is on maternal and child health indicators. The DQRC should be generated on an annual basis to evaluate the quality of the data to be used for annual reviews. The following overview gives a quick summary of the indicators of the DQRC; detailed explanations for each indicator are given in the subsequent sections.

1 Different frameworks and different dimensions of data quality will be discussed in a data quality assessment guideline document that is under development. In this document, we will only focus on two of the dimensions of data quality that are assessed by the Data Quality Report Card.

2 This should, however, be done cautiously. For external comparison, survey aggregation levels are usually only at the state/province/regional level. If a province wants to look at within-province data quality performance, it will not be able to make external comparisons of its facility data with survey data if the survey aggregation level from the most recent population-based survey is only available at the province level.

For each indicator, the overview below gives the national profile (the definition and calculation at the national level) and the sub-national profile (the application of the indicator at the sub-national unit level).

Completeness of reporting

Completeness of sub-national unit reporting
- National: % of monthly sub-national unit (such as district) reports received for a specified period of time (usually one year).
  Calculated as: Total # of sub-national unit reports received / Total # of expected sub-national unit reports.
- Sub-national: Number and % of sub-national units that had less than 80% completeness of monthly reporting for a specified period of time (usually one year).
  Calculated as: # of sub-national units with less than 80% reporting completeness / Total # of sub-national units in the country.

Completeness of facility reporting
- National: % of expected monthly facility reports received for a specified period of time (usually one year).
  Calculated as: Total # of facility reports received nationally / Total # of expected facility reports nationally.
- Sub-national: Number and % of sub-national units with monthly facility reporting rates below 80% for a specified period of time (usually one year).
  Calculated as: # of sub-national units with monthly facility reporting rates less than 80% / Total # of sub-national units in the country.

Completeness of indicator data (zero/missing values)
- National: % of monthly sub-national unit values that are NOT zero/missing (average of 4 indicators: ANC1, Deliveries, DTP3, OPD).
  Calculated from: Total # of zero/missing values for all sub-national units for the reporting year for ANC1 + Deliveries + DTP3 + OPD / (Total # of sub-national units X 12 X 4).
- Sub-national: Number and % of sub-national units with at least 20% zero/missing values (average of four indicators: ANC1, Deliveries, DTP3, OPD).
  Calculated as: # of sub-national units with more than 20% zero and missing values for all four indicators (ANC1 + Deliveries + DTP3 + OPD) combined* / Total # of sub-national units in the country.
  *A sub-national unit has more than 20% missing/zero values for the four indicators combined when the following is greater than 20%: (# of zero/missing values in ANC1 + Deliveries + DTP3 + OPD) / (Total # of expected values for the reported year for each indicator X 4).

Internal consistency of reported data

Extreme outliers
- National: % of monthly sub-national unit values that are extreme outliers (at least 3 standard deviations (SD) from the mean), averaged across the 4 indicators (ANC1, Deliveries, DTP3, OPD).
  Calculated as: Total number of extreme outliers for ANC1 + Deliveries + DTP3 + OPD / (Total # of sub-national units X 12 expected monthly reported values per sub-national unit for 1 indicator X 4 indicators).
- Sub-national: Number and % of sub-national units in which even one of the monthly sub-national unit values in any of the four indicators is an extreme outlier (±3 SD from the sub-national unit mean).
  Calculated as: # of sub-national units with at least one extreme outlier in ANC1, Deliveries, DTP3 or OPD / Total # of sub-national units in the country.

Moderate outliers
- National: % of monthly sub-national unit values that are moderate outliers (between ±2-3 SD from the mean), averaged across the 4 indicators (ANC1, Deliveries, DTP3, OPD).
  Calculated as: Total number of moderate outliers for ANC1 + Deliveries + DTP3 + OPD / (Total # of sub-national units X 12 expected monthly reported values per sub-national unit for 1 indicator X 4 indicators).
- Sub-national: Number and % of sub-national units in which 5% or more of the monthly sub-national unit values for all four indicators combined are moderate outliers (between ±2-3 SD from the sub-national unit mean).
  Calculated as: Total # of sub-national units with 5% or more moderate outliers for all four indicators combined* / Total # of sub-national units in the country.
  *A sub-national unit has 5% or more moderate outliers for the four indicators combined when the following is greater than or equal to 5%: (# of moderate outliers in ANC1 + Deliveries + DTP3 + OPD) / (Total # of expected values for the reported year for each indicator X 4).

Consistency over time
- National: Number of events for the current year divided by the mean of the preceding 3 years (average for ANC1, Deliveries, DTP3, OPD).
  Calculated as: (Sum of the ratios of the total number of events for the current year to the mean number of events for up to 3 preceding years, for ANC1 + Deliveries + DTP3 + OPD) / 4.
- Sub-national: Number and % of sub-national units with at least 33% difference between the national ratio of the current year to the mean of the preceding 3 years and the sub-national unit ratio of the current year to the mean of the preceding 3 years.
  Calculated as: # of sub-national units whose ratio is more than 33% different from the national ratio / Total # of sub-national units in the country.

Internal consistency between indicators
- National: There is no national level consistency check for this indicator.
- Sub-national: Number and % of sub-national units whose ratio of DTP1 to ANC1 from HMIS data is at least 33% different from the national HMIS ratio of DTP1 to ANC1.
  Calculated as: # of sub-national units whose DTP1 to ANC1 ratio is more than 33% different from the national DTP1 to ANC1 ratio / Total # of sub-national units in the country.

Consistency between DTP1 and DTP3
- National: Number of DTP3 immunizations divided by the number of DTP1 immunizations (should be less than 1).
  Calculated as: Total # of national DTP3 immunization doses recorded / Total # of national DTP1 immunization doses recorded.
- Sub-national: Number and % of sub-national units with the number of DTP3 immunizations over 2% higher than DTP1 immunizations.
  Calculated as: # of sub-national units with DTP3 doses over 2% higher than DTP1 doses / Total # of sub-national units in the country.

Verification of reporting consistency through facility survey
- National: % of agreement between data in sampled facility records and national records for the same facilities.
  Calculated as: # of recounted events / # of reported events = verification factor.
  (THIS INDICATOR CANNOT CURRENTLY BE CALCULATED IN THIS TOOL. The verification can be done for any indicator. If done for more than one indicator, the verification factors can be averaged.)

External consistency of population data

Consistency of population projection (UN)
- National: Population projection of the number of live births from the Bureau of Statistics divided by the UN projection of live births.
  Calculated as: Population projection of live births from the Bureau of Statistics / Population projection of live births from the UN Population Division.
- Sub-national: This indicator is not calculated sub-nationally.

Consistency of denominator (estimated number of pregnant women)
- National: Ratio of the official denominator estimate for pregnant women to an alternative denominator for the number of pregnant women (derived by dividing ANC1 total events from HMIS by the ANC1 coverage rate estimated from the most recent population-based survey).
  Calculated as: Official denominator for pregnant women / Alternative denominator for pregnant women*.
  *Calculated as: ANC1 total events from HMIS / ANC1 coverage rate from the most recent population-based survey.
- Sub-national: Number and % of aggregation units used for the most recent population-based survey (such as a province/state/region) whose official estimates of the total number of pregnant women and alternative estimates of pregnant women are at least 33% different from each other.
  Calculated as: # of provinces/states/regions whose official estimate for pregnant women is more than 33% different from the alternative estimate for pregnant women / Total number of provinces/states/regions.

Consistency of denominator (estimated number of children under 1 year)
- National: Ratio of the official denominator for the number of children under 1 year to an alternative denominator for children under 1 year of age (derived by dividing DTP1 total events from HMIS by the DTP1 coverage rate estimated from the most recent population-based survey).
  Calculated as: Official denominator for children under 1 / Alternative denominator for children under 1*.
  *Calculated as: DTP1 total events from HMIS / DTP1 coverage rate from the most recent population-based survey.
- Sub-national: Number and % of aggregation units used for the most recent population-based survey (such as a province/state/region) whose official under-1 estimates and alternative under-1 estimates are at least 33% different from each other.
  Calculated as: # of provinces/states/regions whose official estimate for children under 1 is more than 33% different from the alternative estimate for children under 1 / Total number of provinces/states/regions.

External comparison

External comparison of ANC1
- National: ANC1 coverage rate based on facility reports divided by the survey coverage rate.
  Calculated as: ANC1 coverage rate from facility data / ANC1 coverage rate from the most recent population-based survey.
- Sub-national: Number and % of aggregation units used for the most recent population-based survey (such as a province/state/region) whose ANC1 facility-based coverage rates and survey coverage rates are at least 33% different from each other.
  Calculated as: # of provinces/states/regions whose ANC1 facility coverage rate is more than 33% different from the ANC1 coverage rate from the most recent population-based survey / Total number of provinces/states/regions.

External comparison of deliveries
- National: Delivery coverage rate based on facility reports divided by the survey coverage rate.
  Calculated as: Delivery coverage rate from facility data / Delivery coverage rate from the most recent population-based survey.
- Sub-national: Number and % of aggregation units used for the most recent population-based survey (such as a province/state/region) whose facility-based delivery coverage rates and survey delivery coverage rates are at least 33% different from each other.
  Calculated as: # of provinces/states/regions whose delivery facility coverage rate is more than 33% different from the delivery coverage rate from the most recent population-based survey / Total number of provinces/states/regions.

External comparison of DTP3
- National: DTP3 coverage rate based on facility reports divided by the survey coverage rate.
  Calculated as: DTP3 coverage rate from facility data / DTP3 coverage rate from the most recent population-based survey.
- Sub-national: Number and % of aggregation units used for the most recent population-based survey (such as a province/state/region) whose DTP3 facility-based coverage rates and survey coverage rates are at least 33% different from each other.
  Calculated as: # of provinces/states/regions whose DTP3 facility coverage rate is more than 33% different from the DTP3 coverage rate from the most recent population-based survey / Total number of provinces/states/regions.

Data Quality Report Card Indicators

This section describes in detail the indicators that comprise the DQRC and how they are calculated. The DQRC is a method that can be applied by anyone, without any specific tool. However, to make it easier to calculate the indicators included in the DQRC, there is an accompanying Excel-based data quality assessment tool that provides results for the DQRC indicators based on the data that are entered.

Dimension 1: Completeness of reporting

The purpose of this dimension is to examine whether all entities that are supposed to report are actually reporting. The indicators in this dimension include completeness of reporting by administrative units above the health facility (usually the 1st administrative unit level, e.g. the district), completeness of reporting by health facilities, and the completeness of data elements in a report (i.e. the presence of zero or missing data).

1a. Completeness of administrative unit reporting - In many countries health facilities send their monthly reports to the next reporting administrative unit, e.g. a district. These administrative units compile the information and forward it to the next level of reporting. All administrative units have a reporting schedule that they are supposed to adhere to. This indicator assesses whether the administrative unit is reporting according to schedule. For this indicator the reporting rates of facilities within the administrative boundaries do not matter; only the reporting rate of the actual administrative unit counts. This provides an indication of the health office's performance in compiling and submitting its monthly reports on a timely basis. In countries where the HMIS is now web-based from the health facility onwards, this indicator is redundant.

Definition: Completeness of district/provincial reporting (%) is defined as the number of district/provincial monthly reports received on time divided by the total number of reports expected for a specified time period (usually one year). A completeness rate of 100% indicates that all units reported on time.

At the national level, this is a straightforward calculation according to the definition above. At the subnational level (e.g. district or province), a completeness rate is computed for each administrative unit over the specified time period (usually one year). Units that have a completeness rate below 80% are considered to have poor reporting. The percentage of units that have a completeness rate below 80% is shown in the summary table of results.

Example: At the national level, if the country has 10 districts, the expected number of reports would be 120 (10 reports/month X 12 months). The actual number of reports received was 97 (shown in Table 1). Therefore, the completeness rate would be 97/120 = 81%.

At the subnational level, suppose there are ten districts which are expected to report monthly. Table 1 shows an example of monthly reporting by ten districts over the span of twelve months. Five out of the 10 districts (50%) have a district reporting completeness rate of less than 80%. The summary of the results is shown in Table 2.

Table 1: District reporting example (monthly reports and completeness rate by district and nationally). District health offices submitting monthly reports on time are indicated with tick marks; districts with poor reporting (i.e. a completeness rate below 80%) are shown in red.

Table 2: Example summary results (for the year).
  National district monthly reporting completeness rate: 81%
  Number (%) of districts with completeness rate below 80%: 5 (50%)
  Districts with completeness rate below 80%: District 1, District 6, District 7, District 9, District 10
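
To make the arithmetic of indicator 1a concrete, the short Python sketch below reproduces the calculation using the totals from the example above (97 of 120 expected reports). It is not part of the official DQRC Excel tool, and the per-district report counts are hypothetical values chosen only so that they sum to the example totals.

    # Sketch of indicator 1a (completeness of district reporting).
    # Per-district counts are hypothetical; they sum to the 97 reports of the example.
    reports_received = {
        "District 1": 8,  "District 2": 12, "District 3": 11, "District 4": 12,
        "District 5": 10, "District 6": 9,  "District 7": 7,  "District 8": 12,
        "District 9": 9,  "District 10": 7,
    }
    EXPECTED_PER_DISTRICT = 12  # one monthly report expected per district

    total_received = sum(reports_received.values())                 # 97
    total_expected = EXPECTED_PER_DISTRICT * len(reports_received)  # 120
    national_rate = 100 * total_received / total_expected           # ~81%

    # Districts whose own completeness rate falls below the 80% threshold
    poor = [d for d, n in reports_received.items()
            if 100 * n / EXPECTED_PER_DISTRICT < 80]

    print(f"National district reporting completeness: {national_rate:.0f}%")
    print(f"Districts below 80%: {len(poor)} ({100 * len(poor) / len(reports_received):.0f}%): {', '.join(poor)}")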

1b. Completeness of facility reporting - All facilities are expected to submit reports on key service outputs on a pre-determined schedule; in most countries this schedule is monthly. The best-case scenario would include reporting from all public facilities, private facilities, facilities run by non-governmental organizations, faith-based organizations, etc. However, in most developing countries only the public health facilities, and sometimes facilities run by non-governmental organizations and faith-based organizations, report to the health management information system (HMIS). It is critical to know the facility reporting completeness rate in order to make informed interpretations of key indicators: if facility reporting completeness is less than 100%, one has only partial and incomplete information on health indicators. The total expected reports would include all facilities that are supposed to report to the HMIS.

Definition: Completeness of facility reporting (%) is defined as the number of reports received, according to schedule, from all health facilities nationally, divided by the total expected reports from all facilities that are supposed to report to the HMIS for a specified time period (usually one year). The numerator is the actual number of facility reports submitted and the denominator is the total number of reports that health facilities are expected to submit.

At the national level, this is a straightforward calculation according to the definition above. A completeness rate of 100% indicates that all units reported on time. At the subnational level (e.g. district or province), a facility reporting completeness rate is computed for each administrative unit over the specified time period (usually one year): the total number of facility reports actually submitted is divided by the total number of facility reports expected for each administrative unit. Subnational units with facility reporting rates below 80% within their administrative boundaries are considered to have poor reporting.

Example: At the national level, if a country has 1,000 facilities that report to the HMIS, the total number of expected reports for one year would be 1,000 X 12 = 12,000 reports. If, at the end of the year, only 10,164 reports have been received (shown in Table 1 below), the completeness of facility reporting rate = 10,164/12,000 = 85%.

At the subnational level, facility reporting rates within each of the 10 districts are examined. Districts that have less than 80% facility reporting completeness are shown in red. Three out of 10 districts (30%) have facility reporting rates of less than 80%. A summary of results is shown in Table 2.

Table 1: Facility reporting rate within districts (total # of facilities, expected reports (total facilities X 12 months), actual # of reports received in 12 months, and facility completeness rate (%), by district and nationally). Districts with a facility reporting rate of less than 80% are shown in red.

Table 2: Example summary results (for the year).
  National facility reporting completeness rate: 85%
  Number (%) of districts with facility reporting completeness rate below 80%: 3 (30%)
  Districts with completeness rate below 80%: District 2, District 5, District 9
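
A similar sketch for indicator 1b, where the expected number of reports depends on the number of facilities in each district; the district names, facility counts and report counts are hypothetical and only illustrate the calculation.

    # Sketch of indicator 1b (completeness of facility reporting).
    # Facility and report counts per district are hypothetical.
    facilities = {"District A": 90, "District B": 60, "District C": 50}
    reports_received = {"District A": 1000, "District B": 690, "District C": 420}
    MONTHS = 12

    for district, n_facilities in facilities.items():
        expected = n_facilities * MONTHS
        rate = 100 * reports_received[district] / expected
        flag = " (below 80%)" if rate < 80 else ""
        print(f"{district}: facility reporting completeness {rate:.0f}%{flag}")

    national_rate = 100 * sum(reports_received.values()) / (sum(facilities.values()) * MONTHS)
    print(f"National facility reporting completeness: {national_rate:.0f}%")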

1c. Completeness of indicator data (zero/missing values) - Completeness of indicator data reflects the extent to which facility and district reports include all reportable events. Missing data should be clearly differentiated from true zero values in district and facility reports: a true zero value indicates that no reportable events occurred in the specified reporting period, while a missing value indicates that reportable events occurred but were not actually reported. In many HMIS reports, missing entries are assigned a value of 0, making it impossible to distinguish a true zero value (no events occurred) from a missing value (events occurred but were not reported). Since it is difficult to differentiate between a true zero value and a true missing value, both criteria have been combined into one indicator. Completeness of indicator data can be examined at a more aggregate administrative unit level, such as the district or province, or at the facility level. The preferred level of analysis is zero/missing values at the facility level, but if data are only available at a more aggregate level, one can also look at zero/missing values at that level.

Definition: Completeness of indicator data (zero/missing values) (%) is defined as the average percentage of monthly values for antenatal care first visit (ANC1), deliveries, diphtheria-pertussis-tetanus third dose (DTP3) and outpatient department (OPD) visits combined that are not zero or missing for the specified time period (usually one year). That is, the indicator is calculated by subtracting the percentage of values that are zero or missing from 100%.

At the national level this indicator is as defined above: the average percentage of values, across the four indicators, that are not zero or missing. At the subnational level (e.g. district or province), it is the percentage of administrative units in which more than 20% of the monthly values of the four indicators combined are zero or missing. For each administrative unit, the percentage of zero/missing values is calculated by summing all the zero/missing values within the unit, across the four indicators, for a specified period of time and dividing by the total number of expected values, across the four indicators, for the same period.

Example: The example below shows the percentage of zero/missing values for ANC1. Each tick mark means that the district had a non-zero, non-missing value for the month in question. When examining monthly district-level data for ANC1 over a period of one year, it is seen that, nationally, there are 21 occasions on which district-level data show zero or missing values.
- The numerator, 21, is the national total of district-level zero/missing values for ANC1.
- The denominator is the total expected number of values. With 10 districts and 12 expected monthly values per district for ANC1, the total expected values nationally are 120.
The total percentage of zero/missing values nationally for ANC1 is 17.5% (21/120). However, since we are calculating values that are NOT zero/missing, our indicator is 100% - 17.5% = 82.5%.

At the subnational level, Table 1 shows that 4 out of 10 districts (40%) have more than 20% zero/missing values for ANC1 within their districts.

Table 1: Zero/missing values by district for ANC1 (total and % of zero/missing values per district and nationally). Districts with 20% or more of their values zero or missing are marked in red.

A similar calculation is done for the other three indicators (Deliveries, DTP3 and OPD). The zero/missing values for Deliveries, DTP3 and OPD are 30, 5 and 10, respectively. At the national level, the indicator is calculated by averaging zero/missing values across the four indicators:
- The numerator is the total number of zero/missing values for the four indicators (21 + 30 + 5 + 10 = 66).
- The denominator is the total expected number of values for the four indicators. With 10 districts and 48 expected monthly values per district for the four indicators (ANC1, Deliveries, DTP3, OPD), the total expected values nationally are 480.
The total percentage of zero/missing values nationally for the four indicators is 13.75% (66/480). However, since we are calculating values that are NOT zero/missing, our indicator is 100% - 13.75% = 86.25%.

At the subnational level, the number of districts with more than 20% zero/missing values for ANC1, Deliveries, DTP3 and OPD is 4, 5, 1 and 2, respectively. The average number of districts with zero/missing values across the four indicators is (4 + 5 + 1 + 2)/4 = 3. Therefore, 30% (3/10) of the districts have more than 20% zero/missing values.

Example (continued): Subnationally, the indicator is calculated by summing zero/missing values across all four indicators. Table 2 below gives an example of this. The steps for calculating the indicator are as follows:
a) Total number of zero/missing values - Sum the total number of zero/missing values across the four indicators for each district. For example, District 4 has zero/missing values in three of the indicators, with a total of 11 zero/missing values.
b) Total expected monthly reported values - The total number of expected monthly reported values for each district for one year for each indicator is 12. That is, one expects to see 12 monthly values for ANC1, 12 monthly values for deliveries, 12 monthly values for DTP3 and 12 monthly values for OPD. The total number of expected monthly values for the four indicators is 12 X 4 = 48.
c) Calculate the average % of zero/missing values - For each district, divide the total number of zero/missing values for the four indicators by the total number of expected monthly reported values for the four indicators. For District 4 (as seen in Table 2 below), the total number of zero/missing values is 11 and the total number of expected monthly reported values is 48, so the percentage of zero/missing values is 11/48 = 23%.
Three districts out of 10 have 20% or more of their reported monthly values across the four indicators that are zero/missing.

Table 2: Zero/missing values by district for ANC1, Deliveries, DTP3 and OPD, with the total number of zero/missing values and the percentage per district. Districts with 20% or more of their values zero/missing are highlighted in red.

Summary results at the national and subnational level are shown in Table 3.

Table 3: Example summary results (for the year).
  % of monthly values that are not zero/missing (average of ANC1, Deliveries, DTP3, OPD): 86.25%
  Number (%) of districts with 20% or more values zero/missing for the four indicators combined: 3 (30%)
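
The sketch below illustrates indicator 1c under the same conventions, treating both zeros and missing entries as "zero/missing", as the text above does. The monthly series are hypothetical and much shorter than a real data set; the 20% flagging threshold is the one defined above.

    # Sketch of indicator 1c (completeness of indicator data).
    # None represents a missing value and 0 a reported zero; both count as zero/missing.
    INDICATORS = ("ANC1", "Deliveries", "DTP3", "OPD")

    monthly_data = {   # monthly_data[district][indicator] = 12 monthly values (hypothetical)
        "District 1": {"ANC1": [50] * 12, "Deliveries": [30] * 12,
                       "DTP3": [40] * 12, "OPD": [900] * 12},
        "District 2": {"ANC1": [55] * 8 + [0, None, 0, None],
                       "Deliveries": [25] * 6 + [None] * 6,
                       "DTP3": [45] * 12, "OPD": [800] * 10 + [0, 0]},
    }

    def is_zero_or_missing(value):
        return value is None or value == 0

    total_missing = total_expected = 0
    flagged = []
    for district, series in monthly_data.items():
        d_missing = sum(is_zero_or_missing(v) for ind in INDICATORS for v in series[ind])
        d_expected = 12 * len(INDICATORS)              # 48 expected values per district per year
        total_missing += d_missing
        total_expected += d_expected
        if d_missing / d_expected > 0.20:              # 20% flagging threshold
            flagged.append(district)

    print(f"% of values NOT zero/missing: {100 * (1 - total_missing / total_expected):.1f}%")
    print(f"Districts with more than 20% zero/missing values: {flagged}")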

Dimension 2: Internal consistency of reported data

This dimension looks at the accuracy and reliability of the category of data that are classified as numerators (or event data) when calculating coverage indicators. Proposed indicators within this dimension examine outliers (more than 2 and/or 3 standard deviations from the annual average), compare events (numerator information) of similar indicators to see the level of congruence, examine trends over time, and compare source data from health facility registers to the data actually reported in the HMIS.

2a. Accuracy of event reporting: outliers in the current year - Reported data most often follow a pattern: reported event data over a period of time (such as monthly data) usually look very similar to each other, without very large variations in the numbers. So, when significant variations do happen, it is important to examine them to confirm whether these variations are legitimate or whether there is an issue with data quality. If the reported values follow a normal distribution, 68% of all values are expected to fall within 1 standard deviation (SD) of the mean, 95% of all values within 2 SD of the mean, and 99% of values within 3 SD of the mean.

This indicator examines values that are outliers. Outliers are classified as moderate or extreme. A moderate outlier is any single reported value in a specified period that is between 2 and 3 SD from the average value for the same reporting period. An extreme outlier is any single reported value in a specified period that is more than 3 SD from the average value for the same reporting period. The cut-offs for outliers have been purposefully kept wide to ensure that small variations in values due to cyclical or other programmatic reasons are not mistakenly captured as outliers. Based on the definition of moderate and extreme outliers, less than 5% of values should be moderate outliers and less than 1% of values should be extreme outliers.

Definition: Accuracy of event reporting (%) - moderate outliers - is defined as the average percentage of reported monthly data for the four indicators (ANC1, deliveries, DTP3 and OPD) that are moderate outliers (between 2 and 3 SD from the average value of the indicator) for a specified reporting period (usually one year). Accuracy of event reporting (%) - extreme outliers - is defined as the average percentage of reported monthly data for the four indicators (ANC1, deliveries, DTP3 and OPD) that are extreme outliers (at least 3 SD from the mean) for a specified time period (usually one year).

At the national level this indicator is defined as above. Moderate outliers for the four indicators are summed and divided by the expected number of values for all the indicators. If the time period of analysis is one year, the total number of expected values for one indicator is (total number of administrative units of analysis X 12), and the total number of expected values for the four indicators combined is (total number of administrative units of analysis X 12 X 4). A similar calculation is done for extreme outliers.

Moderate outliers: At the subnational level (e.g. district or province), the indicator is the percentage of administrative units in which more than 5% of the combined monthly values of the four indicators are moderate outliers (between ±2-3 SD from the administrative unit mean).
This percentage is calculated by summing all the moderate outliers within an administrative unit for the four indicators for a specified period of time and dividing by the total number of expected values for the administrative unit for the same specified period of time.

Extreme outliers: At the subnational level (e.g. district or province), the indicator is the percentage of administrative units in which even one of the monthly administrative unit values in any of the four indicators is an extreme outlier (±3 SD from the administrative unit mean). This percentage is calculated by dividing the total number of administrative units with extreme outliers for the specified time period by the total number of administrative units. Some administrative units might have an extreme outlier in more than one of the indicators; however, they are only counted once here.

Example: First, we examine outliers for ANC1. Table 1 below shows the monthly ANC1 values by district; there are 8 moderate outliers for ANC1, highlighted in red. Eight of the districts have at least one monthly ANC1 value that is a moderate outlier.

Table 1: Monthly ANC1 values by district (and nationally). Values in red are moderate outliers.

Nationally, this indicator is the average percentage of values that are moderate outliers across the four indicators.
- The numerator is the sum of moderate outliers across the four indicators for all administrative units. If the total numbers of ANC1, delivery, DTP3 and OPD moderate outliers in the 10 districts for one year are 8, 5, 7 and 2, respectively, the numerator is the sum of these values (8 + 5 + 7 + 2 = 22 moderate outliers).
- The denominator is the total number of expected reported values for all four indicators for all administrative units. It is calculated by multiplying the total number of units (in the selected administrative level) by the expected number of reported values for one indicator for one administrative unit, and by 4 (to get the total for the four indicators): 10 districts X 12 expected monthly reported values per district for one indicator X 4 indicators = 480 total expected reported values.
The average percentage of reported values that are moderate outliers equals (22/480) X 100 = 4.6%.

Subnationally, the indicator is calculated by summing moderate outliers across all four indicators. Table 2 below gives an example of this. The steps for calculating the indicator are as follows:
a) Total number of moderate outliers - Sum the total number of moderate outliers across the four indicators for each district. For example, District 2 has moderate outliers in three of the indicators, with a total of 3 moderate outliers.

Example (continued):
b) Total expected monthly reported values - The total number of expected monthly reported values for each district for one year for each indicator is 12. That is, one expects to see 12 monthly values for ANC1, 12 monthly values for deliveries, 12 monthly values for DTP3 and 12 monthly values for OPD. The total number of expected monthly values for the four indicators is 12 X 4 = 48.
c) Calculate the average % of moderate outliers - For each district, divide the total number of moderate outliers for the four indicators by the total number of expected monthly reported values for the four indicators. For District 2 (as seen in Table 2 below), the total number of moderate outliers is 3 and the total number of expected monthly reported values is 48, so the percentage of moderate outliers is 3/48 = 6%.
Three districts out of 10 have 5% or more of their reported monthly values across the four indicators that are moderate outliers.

Table 2: Moderate outliers by district for ANC1, Deliveries, DTP3 and OPD, with the total number of outliers and the percentage per district. Districts with more than 5% of values that are moderate outliers are highlighted in red.

Table 3 gives the summary national and sub-national results.

Table 3: Example summary results (for the year).
  % of district monthly values that are moderate outliers (between ±2-3 SD from the district mean), average of 4 indicators (ANC1, Deliveries, DTP3, OPD): 4.6%
  Number and % of districts in which 5% or more of the monthly district values for all four indicators combined are moderate outliers (between ±2-3 SD from the district mean): 3 (30%) - Districts 1, 5 and 7

The national calculation for extreme outliers is done the same way as for moderate outliers. For the subnational calculation for extreme outliers, a district is flagged even if only one of the indicators has an extreme outlier. So if District 3 has 1 extreme outlier for OPD and no other districts have extreme outliers for any of the other indicators, the number and percentage of districts that have extreme outliers across the four indicators is 1/10 = 10%.
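
The sketch below shows one way to flag moderate and extreme outliers for a single district and indicator, following the 2-3 SD and >3 SD cut-offs defined above. The monthly values are hypothetical, and the population standard deviation is used here; the Excel tool may use a slightly different SD convention.

    # Sketch of indicator 2a (moderate and extreme outlier detection).
    from statistics import mean, pstdev

    def classify_outliers(monthly_values):
        """Return (moderate, extreme) outliers as lists of (month, value)."""
        m, sd = mean(monthly_values), pstdev(monthly_values)
        moderate, extreme = [], []
        if sd == 0:                       # identical values: no outliers possible
            return moderate, extreme
        for month, value in enumerate(monthly_values, start=1):
            z = abs(value - m) / sd
            if z > 3:
                extreme.append((month, value))
            elif z >= 2:
                moderate.append((month, value))
        return moderate, extreme

    # Hypothetical monthly ANC1 values for one district, with a spike in month 8.
    anc1_values = [120, 118, 125, 122, 119, 121, 117, 240, 120, 123, 118, 122]
    moderate, extreme = classify_outliers(anc1_values)
    print("Moderate outliers (2-3 SD):", moderate)
    print("Extreme outliers (>3 SD):", extreme)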

2b. Consistency over time - This indicator shows the consistency of the values for key indicators in the most recent year compared with the mean value of the same indicator for the previous three years combined. Differences in values are expected from one year to the next; however, if the differences are very large, they warrant further scrutiny. While large differences usually suggest some type of reporting error, it is also possible that the introduction of a new intervention has contributed to a large percentage increase in indicator values from one year to the next.

Definition: Consistency over time (%) is defined as the average ratio of events/service outputs for the current year of analysis to the mean events/service outputs of up to three preceding years, for ANC1, deliveries, DTP3 and OPD.

At the national level this indicator is, as defined above, an average of the ratios of the four indicators. Sub-nationally, this indicator looks at the percentage of administrative units in the selected administrative level of analysis with at least 33% difference between their ratio and the national ratio across the four indicators. The purpose of this indicator sub-nationally is to see how different an administrative unit's value is from the national value: national values can often mask intra-country differences. A large difference between an administrative unit and the national value shows an administrative unit that is performing very differently (has a very different trend) compared with the nation as a whole. If the performance of the unit is better than the nation's, it would be useful to examine further the possible factors contributing to the improved performance; similarly, one should examine the factors contributing to poor performance if the administrative unit is performing poorly compared with the nation.

Example: First, we examine consistency over time for ANC1.
National total for ANC1 for 2011 = 355,000
National total for ANC1 for 2010 = 300,000
National total for ANC1 for 2009 = 288,000
National total for ANC1 for 2008 = 260,000
Mean of 2008, 2009 and 2010 = (260,000 + 288,000 + 300,000)/3 = 848,000/3 = 282,667
Ratio of the current year to the mean of the past three years for ANC1 = 355,000/282,667 = 1.26
The national ratios for the other three indicators are calculated similarly; they are 1.34, 0.99 and 1.34 for deliveries, DTP3 and OPD, respectively. The national consistency over time indicator is calculated as the average of the ratios of the four indicators: (1.26 + 1.34 + 0.99 + 1.34)/4 = 1.23. The average ratio of 1.23 shows that there is an overall 23% increase in the service outputs for 2011 when compared to the average service outputs for the preceding three years across the four indicators.

Table 1 below shows a presentation of this indicator sub-nationally.

Example (continued): For example, District 2 has an ANC1 ratio of 1.1, while the national ratio comparing ANC1 for the current year to the mean of the preceding three years is 1.26. The % difference between the district ratio and the national ratio is calculated as:

    % difference = (District ratio - National ratio) / National ratio X 100

The percentage difference between the district ratio and the national ratio for ANC1 for District 2 (approximately -13%) is therefore less than 33%. However, there is a difference of approximately 64% between District 3's OPD ratio and the national OPD ratio.

To calculate this indicator sub-nationally, all administrative units whose ratios differ from the country's ratio by 33% or more are counted. In the example below, only District 3 has a difference greater than 33%. Therefore, 1 out of 10 districts (10%) has a ratio that is more than 33% different from the national ratio. If any one administrative unit differs by more than 33% in more than one indicator, it is still only counted once. For example, if District 3 also had its delivery ratio more than 33% different from the national delivery ratio, the subnational indicator value would still be 1 out of 10 districts (10%); as it is the same district, it is not counted twice. However, if District 10 had more than a 33% difference in its ANC1 ratio compared to the national ratio, then there would be 2 separate districts (2/10, or 20%) with greater than 33% difference between their district and national ratios.

Table 1: Consistency trend: comparison of district ratios to national ratios for ANC1, Deliveries, DTP3 and OPD (ratio of the current year to the mean of the preceding 3 years, and % difference from the national ratio, for each indicator, by district and nationally). Districts with more than 33% difference between the district and national ratio are highlighted in red.

Table 3 gives the summary results.

Table 3: Example summary results (for the year).
  Average ratio of events/service outputs for the current year to the mean events/service outputs for the three preceding years (ANC1, Deliveries, DTP3 and OPD): 1.23
  Number (%) of districts with at least 33% difference between their ratio and the national ratio across the four indicators: 1 (10%) - District 3
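
A sketch of the consistency-over-time calculation: the national ANC1 totals are those of the example above, while the two district series are hypothetical and are chosen so that one district falls within 33% of the national ratio and the other does not.

    # Sketch of indicator 2b (consistency over time) for ANC1 only.
    def time_ratio(yearly, current="2011"):
        """Ratio of the current year to the mean of the preceding years."""
        previous = [v for year, v in yearly.items() if year != current]
        return yearly[current] / (sum(previous) / len(previous))

    national_anc1 = {"2011": 355_000, "2010": 300_000, "2009": 288_000, "2008": 260_000}
    national_ratio = time_ratio(national_anc1)          # ~1.26

    district_anc1 = {   # hypothetical district series
        "District 2": {"2011": 11_000, "2010": 10_400, "2009": 9_900, "2008": 9_700},
        "District 3": {"2011": 21_000, "2010": 9_800,  "2009": 10_500, "2008": 10_300},
    }

    for district, yearly in district_anc1.items():
        ratio = time_ratio(yearly)
        pct_diff = 100 * (ratio - national_ratio) / national_ratio
        flag = " FLAGGED (at least 33% difference)" if abs(pct_diff) >= 33 else ""
        print(f"{district}: ratio {ratio:.2f}, {pct_diff:+.0f}% vs national{flag}")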

2c. Internal consistency between indicators - Certain indicators have similar patterns of behavior in health care in developing countries, such as ANC1 and DTP1. Typically these indicators have high coverage, as they are usually the points of entry into the health system for pregnant women and children, respectively. Most pregnant women who are going to seek care during their pregnancy have at least one ANC visit, most children who will seek care in the first year of life will have at least one visit to a health facility, and we would expect women who seek care during pregnancy to also seek care for their children after birth.

Definition: Internal consistency between indicators is the number (%) of administrative units whose DTP1 to ANC1 ratio is more than 33% different from the national DTP1 to ANC1 ratio.

This indicator is not calculated at the national level. At the sub-national level the indicator looks at the percentage of administrative units with at least 33% difference between their DTP1 to ANC1 ratio and the national ratio for the same indicators. We want to identify sub-national units whose relationship between ANC1 and DTP1 is markedly different from the national relationship between ANC1 and DTP1.

Example: Suppose District 1 has a DTP1 to ANC1 ratio close to the national DTP1 to ANC1 ratio of 0.87. The percentage difference between the District 1 ratio and the national ratio is computed as:

    % difference = (District ratio - National ratio) / National ratio X 100

Since the percentage difference between the District 1 ratio and the national ratio is less than 33%, District 1 is not flagged. Table 1 below gives examples of districts that have a greater than 33% difference between their DTP1 to ANC1 ratio and the national DTP1 to ANC1 ratio.

Table 1: % difference between the district and national DTP1 to ANC1 ratios (district ratio, national ratio and % difference, by district). Districts with more than a 33% difference between the district and national ratios are shown in red.

The total number of districts that have more than a 33% difference from the national ratio is 2 out of a total of 10 districts, i.e. 20%. Table 2 shows the summary results.

Table 2: DTP1-ANC1 consistency example summary results (for the year).
  National DTP1 to ANC1 ratio: 0.87
  Districts with DTP1/ANC1 ratio 33% or more above the national ratio (DTP1 too high): 1 (10%) - District 7
  Districts with DTP1/ANC1 ratio 33% or more below the national ratio (DTP1 too low): 1 (10%) - District 9
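
The following sketch applies the 33% rule for the DTP1 to ANC1 comparison. The national ratio of 0.87 is taken from the example above; the district counts are hypothetical and are chosen so that one district is flagged as too high and one as too low, mirroring the summary table.

    # Sketch of indicator 2c (DTP1 to ANC1 consistency).
    national_ratio = 0.87            # national DTP1 / ANC1 ratio from the example

    districts = {                    # (DTP1 doses, ANC1 visits) - hypothetical
        "District 1": (4_350, 5_000),
        "District 7": (6_100, 5_000),
        "District 9": (2_700, 5_000),
    }

    for name, (dtp1, anc1) in districts.items():
        ratio = dtp1 / anc1
        pct_diff = 100 * (ratio - national_ratio) / national_ratio
        if pct_diff >= 33:
            note = "FLAGGED (DTP1 too high)"
        elif pct_diff <= -33:
            note = "FLAGGED (DTP1 too low)"
        else:
            note = "within 33% of the national ratio"
        print(f"{name}: DTP1/ANC1 = {ratio:.2f} ({pct_diff:+.0f}%) - {note}")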

2d. Consistency between DTP1 and DTP3 - DTP1 is the first vaccine dose in the DTP schedule and DTP3 is the third dose. The first dose of DTP will always precede the third dose. While it is theoretically possible for the number of DTP third doses to be slightly higher than the number of first doses, for example in administrative units with a lot of in-migration or because of cohort sizes, this is unlikely to happen systematically. Thus, if DTP3 numbers are higher than DTP1 numbers, data quality problems are usually indicated.

Definition: Consistency between DTP1 and DTP3 is defined as the total number of DTP3 doses administered divided by the total number of DTP1 doses administered. One would expect the ratio to be 1 or below (the number of DTP1 doses should be either more than, or the same as, the number of DTP3 doses).

At the national level, this indicator is the ratio of the total number of DTP3 doses administered to the total number of DTP1 doses administered. At the sub-national level, this indicator shows the percentage of administrative units whose DTP3 immunization numbers are 2% or more higher than their DTP1 numbers.

Example: National level: If the total number of national DTP3 doses is 305,000 and the total number of national DTP1 doses is 312,000, the ratio of DTP3 to DTP1 is 305,000/312,000 = 0.98.

Subnational level: If District 3 has DTP1 = 7,682 and DTP3 = 6,978, the percentage difference is calculated as:

    % difference = (DTP3 - DTP1) / DTP1 X 100 = (6,978 - 7,682) / 7,682 X 100 = -9.2%

DTP3 for District 3 is therefore about 9% lower than DTP1, so District 3 is not flagged. Table 1 below shows the DTP1 to DTP3 comparison for all the districts.

Table 1: % difference between DTP1 and DTP3 by district. Districts where DTP3 is more than 2% higher than DTP1 are highlighted in red.

Table 2 shows the summary results.

Table 2: DTP1-DTP3 consistency example summary results (for the year).
  National DTP1-DTP3 ratio: 0.98
  Number (%) of districts where DTP3 is more than 2% greater than DTP1: 1 (10%) - District 5
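
A sketch of the DTP1-DTP3 check. The national dose totals and the District 3 counts are those quoted in the example above; District 5's counts are hypothetical and simply illustrate a unit where DTP3 exceeds DTP1 by more than 2%.

    # Sketch of indicator 2d (consistency between DTP1 and DTP3).
    national_dtp1, national_dtp3 = 312_000, 305_000
    print(f"National DTP3/DTP1 ratio: {national_dtp3 / national_dtp1:.2f}")   # ~0.98

    districts = {                     # (DTP1, DTP3) doses
        "District 3": (7_682, 6_978), # counts from the example above
        "District 5": (6_000, 6_300), # hypothetical: DTP3 exceeds DTP1
    }

    for name, (dtp1, dtp3) in districts.items():
        pct_diff = 100 * (dtp3 - dtp1) / dtp1     # positive means DTP3 exceeds DTP1
        flag = " FLAGGED" if pct_diff > 2 else ""
        print(f"{name}: DTP3 vs DTP1 difference {pct_diff:+.1f}%{flag}")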

2e. Verification of reporting consistency through facility survey - Data verification refers to the assessment of reporting correctness, that is, comparing health facility source documents to HIS reported data to determine the proportion of the reported numbers that can be verified from the source documents. It checks whether the information contained in the source documents has been transmitted correctly to the next higher level of reporting, for each level of reporting, from the health facility level to the national level. This allows the identification of systematic errors that occur in the reporting of data, and of problem districts or health facilities that consistently make errors.

The data verification provides a quantitative measure (the "verification factor") of the proportion of reported events that can be verified using source documents. The verification factor is a summary indicator of the quality of data reporting which measures the extent to which reported results match verified results. It is the ratio of the number of recounted events from source documents to the number of reported events from the reporting forms. The data for this indicator need to be collected separately through additional data collection. Currently, GAVI's DQA and DQS and the Global Fund's DQA and RDQA allow the calculation of a verification factor; however, the verification factor calculated from these activities is most often not representative at the national level.

As an example, if a hospital reported 1,794 as the total number of deliveries for the 1st quarter but a recount from the source documents for the same period showed the total number of deliveries to be 1,695, the verification factor would be 1,695/1,794 = 0.94 (the recounted deliveries divided by the reported deliveries). This example calculation is for a single hospital. When calculating the verification factor for a sample of health facilities, the overall verification factor would use weights (unless a simple random sampling methodology was used) to adjust for differences between the sample and the sampling frame.

A national verification factor of less than one indicates over-reporting: more events were reported than could be verified from the source documents. There are many possible reasons for over-reporting, such as errors in computation when aggregating data and incomplete source documents. Similarly, a verification factor greater than one indicates under-reporting. The verification factor is useful as an indicator of data quality because it is a quantitative measure summarizing the reliability of the reporting system.

Dimension 3: External consistency of population data

The purpose of this dimension is to compare two different sources of population estimates (whose values are calculated differently) to see the level of congruence between the two sources. If the two data sources are discrepant, coverage estimates for a program area can be very different depending on the source used. The higher the level of consistency between denominators from different sources, the more confidence can be placed in the accuracy of the population projections.

3a. Consistency with United Nations (UN) population projection - Denominators are often cited as the leading problem when coverage rates derived from the HMIS are very different from coverage rates derived from household surveys. If denominators from two different sources are very different, this could potentially indicate a problem.
For this indicator, we compare the denominator (total population of interest) used for one of the four indicators included in the Report Card to the UN population projections. Denominators used to calculate rates and ratios are usually derived from the census or the civil registration system. Denominators from the census are usually population projections based on estimates of natural growth and migration. The most common denominator used for calculating ANC rates and delivery rates is the total number of live births in a specified period of time; for immunization it is the total number of surviving infants (total live births adjusted for infant mortality); and for OPD it is the total population. Comparable denominators available from the UN projections are births and total population. The user can compare either births or total population figures from their country projections to the UN projections, based on availability.

Definition: Consistency with United Nations (UN) population projection is defined as the ratio of the official country projection of live births for the year of interest to the UN projection of live births for the same year.

This indicator is not calculated at the subnational level. At the national level it is as defined above.

Example: If the official live birth estimate for the year of analysis is 255,000 and the UN projection is 200,000, the ratio of the country population projection to the UN population projection is 255,000/200,000 = 1.28. This ratio shows that the country population projection for live births is about 28% higher than the UN population projection for the same year.

3b.1. Consistency of denominator (for pregnant women) - Population denominators used to calculate coverage rates for facility data (usually projections from the national bureau of statistics) can be compared to denominators derived from alternate sources of data. This indicator compares the existing estimate of the number of pregnant women with an alternative estimate derived from survey data. Two pieces of information are necessary to create this alternative denominator estimate: 1) an estimate of coverage for a specific intervention, for the national and sub-national levels (if available), from survey data; and 2) numerator estimates for the indicators from facility data.

One necessary condition needs to be met to calculate a robust alternative denominator: confidence in the quality of the numerator data of the selected indicator, for example less than 20% of the data missing or with 0 values and less than 5% of the values outliers. An additional preferred characteristic of the indicator is a high level of coverage and relatively little variability across the country. Examples of indicators that usually fit these criteria are the first antenatal visit and the first vaccination (BCG or DTP1).

Definition: Consistency of denominator (for pregnant women) is defined as the ratio of the official denominator estimate for pregnant women to an alternative denominator for the number of pregnant women (calculated by dividing ANC1 total events from the HMIS by the ANC1 coverage rate estimated from the most recent population-based survey).

At the national level this indicator is as defined above: a ratio of two denominators.

At the subnational level (e.g. district or province), the ratio of denominators is calculated for each administrative unit. Any administrative unit with at least a 33% difference between the two denominators is flagged, and the number and percentage of administrative units with at least a 33% difference is calculated. This comparison is only possible if survey coverage estimates for the indicator are available at the same administrative level. For example, if your administrative unit of analysis is a district but survey coverage rates for the indicator are not available at the district level, this sub-national comparison will not be possible at the district level. However, if province or regional level survey data are available, the comparison can be done at the province level.

Example: Table 1 below provides an example of how this indicator is calculated at the national and subnational level.

Table 1: Comparison of official versus alternative denominators (number of women making an ANC1 visit, survey coverage estimate, official denominator of pregnant women, alternative denominator, and ratio of official to alternative denominator, by district and nationally). Districts with a discrepancy of more than 33% between the two denominators are highlighted in red.

At the national level:
- The total number of ANC1 visits from the health facility data for the year 2011 is 169,257.
- Coverage for ANC1 from the most recent population-based survey is 93%.
- Alternative ANC1 denominator = 169,257 / 0.93 = 181,997.
The reason for calculating an alternative denominator in this way is that, if the numerator value from the facility data is accurate and the survey estimate is reliable, the given numerator value of 169,257 women making a first ANC visit should be 93% of the population of pregnant women. Dividing the given numerator by 93% therefore gives an alternative population denominator against which the official denominator can be compared.
- The official denominator used for calculating the ANC1 coverage rate is 172,277.
- The ratio of the two denominators is 172,277 / 181,997 = 0.95.

Example (cont.):
- If the ratio is 1, the two denominators are exactly the same.
- If the ratio is >1, the official denominator is higher than the alternate denominator.
- If the ratio is <1, the alternate denominator is higher than the official denominator.

The ratio of 0.95 shows that the two denominator values are fairly similar, with approximately a 5% difference between them.

At the subnational level, the ratio of denominators is calculated for each administrative unit, and districts with at least a 33% difference between their two denominators are flagged. In Table 1 above, District 3 and District 9 have at least a 33% difference between their two denominators. Table 2 below shows the summary results.

Table 2: Official denominator-alternate denominator consistency example summary results, 2011

National ANC1 denominator consistency ratio: 0.95
Districts with consistency ratio under 0.67 (official estimate too low): 2 (15%) - District 3, District 9
Districts with consistency ratio over 1.33 (official estimate too high): 0 (0%)

3b.2. Consistency of denominator (estimated number of children under 1 year)

A similar comparison of official and alternate denominators is done for children under 1 year, as described above for pregnant women.

Definition: Consistency of denominator (for children under 1) is defined as the ratio of the official denominator estimate for children under 1 year to an alternative denominator for children under 1 year (calculated by dividing DTP1 total events from the HMIS by the DTP1 coverage rate estimated from the most recent population-based survey). At the national level this indicator is as defined above: a ratio of two denominators.

At the subnational level (e.g. district or province), the ratio of denominators is calculated for each administrative unit. Any administrative unit whose two denominators differ by at least 33% is flagged, and the number and percentage of administrative units with at least a 33% difference is calculated. This comparison is only possible if survey coverage estimates for the indicator are available at the same administrative level. For example, if the administrative unit of analysis is the district but survey coverage rates for the indicator are not available at district level, this subnational comparison will not be possible at district level. However, if province- or regional-level survey data are available, the comparison can be done at the province level.

Example: See the example for 3b.1 above, which compares the official and alternate denominators for pregnant women. This indicator is calculated in exactly the same way.
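The subnational step shared by 3b.1 and 3b.2 can be sketched as follows: derive an alternate denominator for each district from reported events and survey coverage, compute the official-to-alternate ratio, flag ratios outside the 0.67-1.33 band (an at-least-33% difference), and summarize the counts. The district figures below are hypothetical and the helper names are ours, not part of the DQA tool.

```python
# Sketch of the subnational consistency step (3b.1 / 3b.2).
# District figures are hypothetical; names are illustrative only.

LOW, HIGH = 0.67, 1.33   # "at least 33% difference" thresholds on the ratio

def subnational_consistency(districts):
    """districts maps name -> (reported_events, survey_coverage, official_denominator)."""
    flagged_low, flagged_high = [], []
    for name, (events, coverage, official) in districts.items():
        alternate = events / coverage          # survey-derived alternate denominator
        ratio = official / alternate           # official-to-alternate ratio
        if ratio < LOW:
            flagged_low.append(name)           # official estimate too low
        elif ratio > HIGH:
            flagged_high.append(name)          # official estimate too high
    flagged = flagged_low + flagged_high
    return {
        "official_too_low": flagged_low,
        "official_too_high": flagged_high,
        "n_flagged": len(flagged),
        "percent_flagged": round(100 * len(flagged) / len(districts)),
    }

if __name__ == "__main__":
    example = {  # hypothetical: (ANC1 or DTP1 events, survey coverage, official denominator)
        "District A": (27_400, 0.95, 26_800),
        "District B": (9_700, 0.90, 6_500),    # ratio ~0.60 -> official too low
        "District C": (14_100, 0.92, 13_900),
    }
    print(subnational_consistency(example))
    # -> {'official_too_low': ['District B'], 'official_too_high': [], 'n_flagged': 1, 'percent_flagged': 33}
```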

Dimension 4: External consistency of coverage rates

The purpose of this dimension is to examine the external consistency of the indicators generated from the health facility data. This implies a comparison with coverage data from a recent household survey. The survey data are often used as the "gold standard": we consider them the true value of the indicator in the population. This has to be done cautiously. First, surveys have their own data quality problems, and if there are systematic problems the survey-based coverage estimate can also be well off the true population value. Second, surveys are based on a sample and therefore have confidence intervals; in the comparison with facility data the 95% confidence limits need to be taken into account, and if the facility value lies within that range it cannot be concluded that there is a significant difference. Third, survey data may refer to different time periods (often three or five years before the survey), while facility-based coverage rates are usually for the most recent year.

4a. External comparison of ANC1

If the HMIS is accurately capturing all ANC visits in the country (not just those in the public sector) and the denominators are sound, the ANC1 coverage rate derived from the HMIS should be very similar to the ANC1 coverage rate derived from population surveys. In practice, however, HMIS coverage rates often differ from survey coverage rates for the same indicator.

Definition: External comparison of ANC1 is defined as the coverage rate for ANC1 based on the facility reports divided by the coverage rate for ANC1 based on household survey data. At the national level this indicator is as defined above: a ratio of the two ANC1 coverage rates (HMIS and survey) at the national level.

At the subnational level (e.g. district or province), the ratio of the ANC1 coverage rates (HMIS and survey) is calculated for each administrative unit. Any administrative unit whose two coverage rates differ by at least 33% is flagged, and the number and percentage of administrative units with at least a 33% difference is calculated. This comparison is only possible if survey coverage estimates for the indicator are available at the same administrative level. For example, if the administrative unit of analysis is the district but survey coverage rates for the indicator are not available at district level, this subnational comparison will not be possible at district level. However, if province- or regional-level survey data are available, the comparison can be done at the province level.
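A minimal sketch of the Dimension 4 comparison: the ratio of the HMIS coverage rate to the survey coverage rate, with an optional check of whether the facility value falls within the survey's 95% confidence interval (as cautioned above). The national coverage figures come from the ANC1 example in this guide; the confidence limits and function name are assumptions for illustration only.

```python
# Sketch of the external comparison of coverage rates (Dimension 4).
# National ANC1 figures are from the example in this guide; the survey
# confidence limits are hypothetical.

def coverage_consistency(hmis_rate, survey_rate, survey_ci=None):
    """Return the HMIS/survey ratio, a 33%-difference flag, and an optional CI check."""
    ratio = hmis_rate / survey_rate
    result = {
        "ratio": round(ratio, 2),
        "flagged": ratio < 0.67 or ratio > 1.33,   # "at least 33% difference" rule
    }
    if survey_ci is not None:
        low, high = survey_ci
        # If the facility value lies within the survey's 95% CI, no significant
        # difference can be concluded.
        result["within_survey_ci"] = low <= hmis_rate <= high
    return result

if __name__ == "__main__":
    # National ANC1 example: HMIS 98%, survey 93%; a CI of 90-96% is assumed.
    print(coverage_consistency(0.98, 0.93, survey_ci=(0.90, 0.96)))
    # -> {'ratio': 1.05, 'flagged': False, 'within_survey_ci': False}
```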

Example: Table 1 below provides an example of how this indicator is calculated at the national and subnational level.

Table 1: Comparison of HMIS and survey coverage rates for ANC1, flagging discrepancies of more than 33% between the two

| District | Facility coverage rate | Survey coverage rate | Ratio of facility to survey rates | ±33% difference |
District 1 | 105% | 95% | 1.11 | no
District 2 | 91% | 97% | 0.94 | no
District 3 | 139% | 90% | 1.54 | yes
District 4 | 76% | 95% | 0.80 | no
District 5 | 96% | 80% | 1.20 | no
District 6 | 93% | 98% | 0.95 | no
District 7 | 84% | 86% | 0.98 | no
District 8 | 110% | 98% | 1.12 | no
District 9 | 138% | 92% | 1.50 | yes
District 10 | 91% | 79% | 1.15 | no
National | 98% | 93% | 1.05 | no

At the national level:
- The coverage rate from the HMIS is 98%.
- The coverage rate from the most recent population-based survey is 93%.
- The ratio of the two coverage rates = 98% / 93% = 1.05.

If the ratio is 1, the two coverage rates are exactly the same. If the ratio is >1, the HMIS coverage rate is higher than the survey coverage rate. If the ratio is <1, the survey coverage rate is higher than the HMIS coverage rate. The ratio of 1.05 shows that the two coverage rates are fairly similar, with approximately a 5% difference between them.

At the subnational level, the ratio of coverage rates is calculated for each administrative unit, and districts with at least a 33% difference between their two coverage rates are flagged. In Table 1 above, District 3 and District 9 have at least a 33% difference between their two coverage rates. Table 2 below shows the summary results.

Table 2: HMIS coverage rate-survey coverage rate consistency example summary results, 2011

National ANC1 coverage rate consistency ratio: 1.05
Districts with ANC1 consistency ratio below 0.67 (survey coverage rate is higher): 0 (0%)
Districts with ANC1 consistency ratio above 1.33 (HMIS coverage rate is higher): 2 (15%) - District 3, District 9
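As a check on the example, the same rule can be applied directly to the facility and survey rates from Table 1 above; the rates are the guide's example values, and only Districts 3 and 9 fall outside the 0.67-1.33 band. The short sketch below is illustrative only.

```python
# Reproducing the subnational ANC1 comparison from Table 1: facility and survey
# coverage rates are the example values in this guide; the ratio and flag follow.

example_rates = {          # district -> (facility coverage, survey coverage)
    "District 1": (1.05, 0.95), "District 2": (0.91, 0.97),
    "District 3": (1.39, 0.90), "District 4": (0.76, 0.95),
    "District 5": (0.96, 0.80), "District 6": (0.93, 0.98),
    "District 7": (0.84, 0.86), "District 8": (1.10, 0.98),
    "District 9": (1.38, 0.92), "District 10": (0.91, 0.79),
}

for district, (facility, survey) in example_rates.items():
    ratio = facility / survey
    flag = "FLAG" if ratio < 0.67 or ratio > 1.33 else ""
    print(f"{district:<12} ratio = {ratio:.2f} {flag}")
# Only Districts 3 and 9 are flagged, matching Table 2.
```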

4b. External comparison of deliveries

This indicator is calculated in exactly the same way as the previous comparisons, but for deliveries. With deliveries there is often a much larger private sector component; if the private sector does not report through the HMIS, the comparison should focus on the public sector data from the survey.

Definition: External comparison of deliveries is defined as the coverage rate for deliveries based on the facility reports divided by the coverage rate for deliveries based on household survey data. At the national level this indicator is as defined above: a ratio of the two delivery coverage rates (HMIS and survey) at the national level.

At the subnational level (e.g. district or province), the ratio of the delivery coverage rates (HMIS and survey) is calculated for each administrative unit. Any administrative unit whose two coverage rates differ by at least 33% is flagged, and the number and percentage of administrative units with at least a 33% difference is calculated. This comparison is only possible if survey coverage estimates for the indicator are available at the same administrative level. For example, if the administrative unit of analysis is the district but survey coverage rates for the indicator are not available at district level, this subnational comparison will not be possible at district level. However, if province- or regional-level survey data are available, the comparison can be done at the province level.

Example: See the example for 4a above, which compares the HMIS coverage rates to survey coverage rates for ANC1. The comparison of HMIS and survey coverage rates for deliveries is done in exactly the same way.

4c. External comparison of DTP3

The comparison of the DTP3 coverage rate from facility data to the DTP3 coverage rate from survey data is calculated in exactly the same way as the comparison of ANC1 rates from facility data to survey-based coverage rates.

Definition: External comparison of DTP3 is defined as the coverage rate for DTP3 based on the facility reports divided by the coverage rate for DTP3 based on household survey data. At the national level this indicator is as defined above: a ratio of the two DTP3 coverage rates (HMIS and survey) at the national level.

At the subnational level (e.g. district or province), the ratio of the DTP3 coverage rates (HMIS and survey) is calculated for each administrative unit. Any administrative unit whose two coverage rates differ by at least 33% is flagged, and the number and percentage of administrative units with at least a 33% difference is calculated. This comparison is only possible if survey coverage estimates for the indicator are available at the same administrative level. For example, if the administrative unit of analysis is the district but survey coverage rates for the indicator are not available at district level, this subnational comparison will not be possible at district level. However, if province- or regional-level survey data are available, the comparison can be done at the province level.

Example: See the example for 4a above, which compares the HMIS coverage rates to survey coverage rates for ANC1. The comparison of DTP3 HMIS and survey coverage rates is done in exactly the same way.
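Because 4b and 4c reuse the ANC1 calculation, the same one-line ratio applies to deliveries and DTP3; the national figures below are hypothetical, for illustration only.

```python
# Deliveries (4b) and DTP3 (4c) reuse the ANC1 calculation:
# consistency ratio = HMIS coverage rate / survey coverage rate.
# Ratios below 0.67 or above 1.33 would be flagged. Figures are hypothetical.

indicators = {
    "Institutional deliveries": (0.62, 0.55),  # (HMIS rate, survey rate)
    "DTP3": (0.88, 0.81),
}
for name, (hmis, survey) in indicators.items():
    print(f"{name}: consistency ratio = {hmis / survey:.2f}")
```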
