Metrics for Measuring Data Quality at NASS's National Operations Center


Jeffrey M. Boone
United States Department of Agriculture, National Agricultural Statistics Service
3251 Old Lee Hwy., Fairfax, VA

Abstract

The National Agricultural Statistics Service (NASS) centralized much of its survey data collection activities at its new National Operations Center (NOC) in August. Metrics measuring the quality of the data collected at the NOC and the production of NOC staff have been developed. In this paper, some select metrics will be discussed in depth. For these, topics such as the past use of the metric, requirements to measure and track it, benefits of tracking it, methods of displaying it, and how it improves data quality will be presented. Application to both interviewer evaluation and data collection process control will be examined. The use of control charts and other graphical methods to monitor the metrics will also be discussed, and examples demonstrating these graphical methods will be provided.

Key Words: data collection, quality control, control charts, paradata

1. Introduction

Data collection relies on humans for the majority of the collection process, so the errors inherent in a human translate to errors in data collection. To ensure that these errors do not have a large impact on the quality of the data being collected, some aspects of the data collection must be observed. These aspects are known as metrics. Metrics may be composed of one or more variables collected from the data collection process. For example, the metric "records completed per hour worked" for a given interviewer includes two variables, namely the number of records completed and the number of hours worked for that interviewer. The data from these individual variables are often called paradata in the context of survey data collection.
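As a minimal sketch (with hypothetical values), the composite metric above is simply the ratio of its two paradata components:

```python
# Two paradata components for a given interviewer (hypothetical values).
records_completed = 24
hours_worked = 8.0

# Composite metric: records completed per hour worked.
records_per_hour = records_completed / hours_worked
print(records_per_hour)  # 3.0
```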
Bates, Dahlhamer, Phipps, Safir, and Tan (2010) provide some examples of paradata, including call records, observations of interviewers and respondents, audio recordings of interviewer and respondent interactions, and items generated by computer-assisted instruments, such as response times and keystrokes. Lyberg (2009) explains that paradata can provide continuous updates of progress and stability checks, monitoring, input to long-run process improvement of product quality, analysis of special- and common-cause variation, and input to methodological changes by finding and eliminating root-cause problems. It is very important that the chosen metrics are not only applicable to the process and serve the purported purpose of maintaining high-quality data, but also usable by the staff who are meant to use them. Laflamme (2009) explains that the indicators must be easily understood, measurable and comparable at any point in data collection, consistently updated throughout data collection, and relevant, interpretable, and comparable at different levels of aggregation. These aspects of a metric must be considered when developing the quality control system. Each metric chosen must be measured for a purpose, not just for the sake of measurement. Each must respond to a question, such as "Is this interviewer collecting enough usable records?" for a productivity indicator and "What times are best to call?" for a management indicator. It is also important to ensure that enough relevant metrics are collected and usable in a manner to fully answer these questions.

Although many metrics appear to be focused on evaluating interviewers, keyers, or other staff, they can also be representative of the broader concept of process control. Statistical process control, or SPC, is often considered solely in the area of manufacturing, but the concepts and techniques may be applied to a data collection process as well. Biemer and Lyberg (2003) state that to achieve error prevention and continuous quality improvement, a process perspective should be adopted. The ultimate goal of SPC is to achieve process stability and improve capability through the reduction of variability (Montgomery, 2005). In the area of survey data collection, this applies well: a stable process ensures that the data are collected in a uniform manner; a capable process ensures that the data can adequately be collected; and a process low in variation ensures that the data collected are as expected. Thus, these concepts generally considered for manufacturing are applicable to survey data collection.

Many data collection managers do not view metrics until the data collection has ended. This practice, although common, is often problematic, since any errors that occurred during the data collection could not be resolved, and the quality of the data collected may be compromised. Budget overruns are also common, since potential overruns cannot be detected during the data collection. If a quality monitoring system, complete with the necessary metrics and displays, is employed, these errors and potential problems may be detected during data collection, ensuring that every chance to correct errors may be taken. The quality of the data collected depends on adequate responses to any problems that arise during data collection.
The United States Department of Agriculture's National Agricultural Statistics Service (NASS) provides timely, accurate, and useful statistics in service to U.S. agriculture. NASS conducts hundreds of surveys a year, conducts the Census of Agriculture every 5 years, provides data relating to America's agricultural products, and provides data used to determine commodity prices. Some examples of the data collected are those related to production and supplies of food and fiber, prices paid and received by farmers, farm labor and wages, farm finances, chemical use, and changes in the demographics of U.S. agricultural producers. The issue of developing quality control metrics began with NASS's centralization of many of its data collection procedures with the opening of a National Operations Center (NOC) in St. Louis, MO, in August. The main objectives of this initiative are to reduce the sources of error inherent to data collection activities, improve data quality, and reduce operational costs. The center will have various functions, including telephone interviewing, the processing of paper questionnaires, maintenance of NASS's list frame, training, survey development, Blaise programming, and web survey system programming. The project to develop the quality control program began months before the opening of the NOC and started by exploring metrics that would be used to evaluate the performance of the data collection, including the performance of the NOC and its staff. Ascertaining these metrics can be quite difficult. It has been said that discovering whether data are of acceptable quality is a measurement task, and not a very easy one (Herzog, Scheuren, and Winkler, 2007). Moreover, once this early stage of the process is complete, the remaining portions of the system, such as the displays and the quality control protocols and standards, must still be developed. Work is underway to complete the project. For more information, see Boone, Parsons, Feld, Levy, and Flaherty (2011) and Boone (2012).
This paper focuses on two survey data collection quality control metrics: the average length of call and the number of edits required. Both are discussed in depth, including the method of gathering the information to calculate the metric as well as the method of monitoring it. A section on the description and use of control charts applied to these two examples is also presented.

2. Average Length of Call

The average length of call (ALOC) is a simple-to-gather metric and is often calculated and stored in CATI software. The ALOC may be viewed both as a measure of interviewer productivity and as a measure of data quality. Note that the ALOC should only be measured for completed records. Refusals and no-answers, for example, should not be considered, since they will yield very different call lengths from completed interviews. As a measure of productivity, the ALOC shows how long, on average, interviewers are spending to complete a record. If too much time is being spent collecting the data, additional training may be needed to ensure the data are collected in a time-efficient manner. As time spent per record increases, the cost per record increases. Thus, to keep costs down, the time spent per record should be kept down. Some cases, however, may take much more time than others. For example, in a survey that includes questions regarding multiple types of crops, operators with more types of crops may take longer to interview than those with fewer. Therefore, large variation may exist beyond just the natural variation in call lengths. As a measure of data quality, the ALOC may reveal interviews that are far too short in completion time. An uncommonly short interview may be the result of the interviewer rushing through the interview, possibly leading to inaccurate data. A short time may also be an indicator of record falsification, where an interviewer does not actually participate in an interview and only enters data into the fields. Note that these two cases might only be recognized when viewing this metric by interviewer, since the effect may be smoothed out over groups of interviewers. NASS has not used the ALOC directly in the past. However, a slightly different approach to the length of a call, the cost per usable complete, has been used.
A usable complete is defined as a completed interview that is not coded as a refusal or an inaccessible (cannot make contact with a respondent). This metric is calculated by dividing the number of hours worked by the number of usable completes and multiplying by some average wage of the interviewers (to mask the individual interviewer's salary). Thus, the ALOC is a similar measure, with the length of call used instead of the number of hours worked and the wage not included. Both use the number of usable completes as a paradata component of the metric. A drawback to using the hours worked as a component is that for much of the time the interviewer is scheduled to be working, he/she is not actually making calls. It is, however, useful in the metric cost per usable complete, since he/she is still getting paid during those hours, and thus a cost is associated with the entire work schedule. Contrarily, for the ALOC, the length of call should be used instead of the number of hours worked, since the focus is on the call itself, not the interviewer. A major issue with this measure is that the expected length of call and its variance may differ greatly depending on the survey being conducted. This is an especially large problem for NASS, since many different surveys are conducted concurrently and the data collection period is often very short, some being as short as three days. Thus, it would not be proper to view just the raw ALOC values across surveys. To correct for the varying mean and variance across surveys, the data may be standardized. This standardization involves shifting the values (centering) so that the mean is constant across all surveys and scaling the values so that the spread is constant across all surveys. The following equation may be used to accomplish this:

m_ti = (x_tij − x̄_j) / s_j,

where m_ti is the ith standardized length of call at time t, x_tij is the ith length of call at time t for survey j, x̄_j is the estimated average call length for survey j, and s_j is the estimated standard deviation for survey j. The plotting statistic would then be m̄_t, the average of the standardized call lengths at time t. An application of this formula is as follows. Consider the data in Table 1. For this particular survey, the expected length of call is approximately 10 minutes, and the standard deviation of the call lengths is estimated to be 3 minutes (determined from a sample of calls).

Table 1: Standardizing Lengths of Call
(columns: Length of Call (x_tij) in minutes; Standardized Length of Call (m_ti))

Now consider the time series plot of ALOCs over a 9-day period in Figure 1. The data are a mix of two different surveys.

Figure 1: Average Length of Call by Day

Although the pattern is clearly visible in the chart, it is impossible to determine whether an ALOC value, say 18 minutes on Day 4, is too long, since the data come from multiple surveys. For example, 18 minutes may be far too long for Survey A but not for Survey B. If both are being conducted on Day 4, it cannot be determined whether there is anything unusual about the ALOC values. Using the standardization formula seen above, a much more meaningful graph may be created. Once the data are standardized, the expected value on the chart is zero, so anything far from zero is unacceptable. Since the ALOC values are now scaled across surveys, what is unusual for one survey is also unusual for another; i.e., the problem seen above, where it is impossible to determine if a problem actually exists when viewing information across surveys, is alleviated. The standardized data can be seen in Figure 2.
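The standardization above can be sketched in a few lines; the per-survey mean and standard deviation are assumed to have been estimated beforehand from a sample of calls (here x̄_j = 10 and s_j = 3, matching the hypothetical survey of Table 1, and the call lengths are invented for illustration):

```python
def standardize(call_lengths, survey_mean, survey_sd):
    """Center and scale call lengths so surveys are comparable:
    m_ti = (x_tij - xbar_j) / s_j."""
    return [(x - survey_mean) / survey_sd for x in call_lengths]

# Hypothetical call lengths (minutes) for a survey with an expected
# length of 10 minutes and a standard deviation of 3 minutes.
lengths = [7.0, 10.0, 13.0, 16.0]
standardized = standardize(lengths, survey_mean=10.0, survey_sd=3.0)
print(standardized)  # [-1.0, 0.0, 1.0, 2.0]

# The plotting statistic for a day pools the standardized values
# across all surveys conducted that day and averages them.
m_bar_t = sum(standardized) / len(standardized)
print(m_bar_t)  # 0.5
```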

Figure 2: Standardized Average Length of Call by Day

The pattern is essentially the same in this case, but the values are now interpretable. The ALOC values hang near the zero line until Day 8, where they start to drift upward away from the center. Thus, a problem may be occurring, and the issue should be resolved quickly. The incident that created this problem, assuming that one has actually occurred and the drift is not just a random anomaly in the data, should be resolved before much more data are collected; otherwise, more of the data collected may be of poor quality. The scenario presented in this chart will be discussed later in this article. It is important to note, however, that anomalies sometimes occur without any particular correctable reason, so an unusual-looking data point alone does not establish that an incident has occurred.

3. Edits Required

A rarely tracked metric involves the number of edits required for a given record. This metric, either as a count or an average per record, may be used as an evaluation metric for interviewers, data collection locations, or even entire survey programs. For example, a large percentage of edits for a given interviewer may indicate that the interviewer is making numerous recording errors. Note that this is not necessarily the case, since many errors arise from the reporting of the data, not the recording of it. Thus, an assumption that must be made is that the distribution of reporting errors is constant across different interviewers, locations, or surveys. This assumption ensures that unusual values of the metric might be attributed to recording errors. However, high variation in reporting errors may mask unusual values of recording errors. To further complicate the use of this metric, the distribution of reporting errors may not be constant across surveys.
Consider the distribution of the edits-required metric as composed of both the distribution of reporting errors and that of recording errors. If the distribution of one component varies, the distribution of the metric will vary as well. Thus, it may be erroneous to assume a set distribution of the metric across surveys due to the varying distribution of either the reporting or recording errors. For normal survey operations (excluding the Census of Agriculture), NASS has not maintained the count of edits required for a given record. Therefore, no metric based on this value has ever been used. NASS has automated editing systems in place and is testing a new one. NASS also has a system to present

unusual responses to a survey administrator to allow for manual editing of these responses. The count of records edited is maintained, but the number of actual edits within a questionnaire is not. Thus, to implement the use of this measure, these values need to be captured and stored in a manner that will provide the data necessary to properly track this metric. Collecting these data may be very difficult. Once the data for a given record are gathered, they may sit untouched for a period of time before being processed through an editing system. Thus, this metric cannot be viewed for the data as they are being collected, so a retroactive view must be taken. Note that it may still be possible to view this information during data collection, and if so, one should not wait until the data collection period is over; as with any metric, understanding an issue quickly after it has occurred may assist in resolving it before much damage is done. Another complicating factor involves the type of edit being performed. Many data collection systems have multiple types of edits in the process, and an individual record may be passed through multiple edits (and even the same editing process more than once). These different types of edits may be automatic or manual. Automatic edits are generally processed much more quickly than manual edits, so it may only be possible to view the number of automatic edits during data collection, leaving the number of manual edits for post-survey analysis. However, an important note to make here is that it is very likely that the interviewers for a particular survey will return for a later one, so any issues that a given interviewer is having on one survey may carry over to a later one. Thus, viewing the edits in post-survey analysis can still be helpful in the further training or discipline of the interviewer. Beyond the automatic and manual classifications of edits, the extremity of the edit should also be considered.
Although this may be a difficult concept to track, it may yield much greater information than just the raw number of edits. For example, consider a question regarding the total acres of farmland operated by the farmer. Suppose there is no reporting error and the farmer responds with a value of 100 acres. The interviewer records this value as 1,000. Based on previously reported data, the automated editing system changes the value to 100. This is a much more serious error than recording, say, 200. This leads to another issue: whether a mistype of one key (adding another 0) is worse than changing more than one digit in the response. Adding a 0 has a more severe effect on the estimate, but it may be considered a much easier keying mistake to make. Thus, determining what is meant by an extreme error is difficult. As mentioned earlier, the number of edits required may be used from a higher management perspective to evaluate survey programs. This metric may be employed to compare different survey programs to ensure that those that are underperforming may be adjusted in a manner that brings them up to the level of the other programs. One issue that may arise here involves the universal use of the editing systems. Some surveys may not pass through the same systems as others, yielding an unfair comparison. Another issue is the divisor for the count of the number of edits required. Using only the number of edits per record does not control for the varying number of questions per survey. A more reasonable divisor would be the number of questions, making the metric the number of edits per question. Note also that this will be a very small number, so some expansion factor should be used, such as multiplying by 1,000, so that the value will be easier to view. Since there are currently no data available to provide an example from practice, a hypothetical situation is presented.
Consider the following data, constructed for various interviewers over the data collection period for a particular survey (Survey X).

Table 2: Number of Edits per 100 Records by Interviewer and Date
(rows: dates 4/16 through 4/25, plus a mean per interviewer; columns: interviewers Jeff, Jim, Jack, Jill, and Jane, plus a daily mean)

The values within the table represent the number of edits required per 100 records. The averages by day and by interviewer over the 10-day period are also provided in the table. First, note the natural variation in all of the values. This can be attributed to the varying difficulty of the cases or simply the natural variation seen in nearly any process. It also appears that interviewer Jill has many more edits than the other interviewers in this example. Statistical tests, such as a t-test, may be employed to determine whether a significant difference exists between the various interviewers or data collection dates. As observed, Jill had a significantly higher value than the other interviewers. Also, no significant difference is seen across the different days. Recall an earlier point that the data must be made available in a manner that allows simple interpretation. To that end, a time series plot, similar to the one created for the earlier metric, may be created. In this case, unlike the previous metric, the data have some delay.

Figure 3: Edits Required per 100 Records for Survey X
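The interviewer comparison described above can be sketched with a two-sample t-test. The values below are invented for illustration (the other interviewers vary around 2 edits per 100 records while Jill runs consistently higher), and the Welch t statistic is computed directly rather than with a statistics package:

```python
import statistics

# Hypothetical edits-per-100-records values over a 10-day collection
# period (invented for illustration, mirroring the pattern in Table 2).
jill   = [5.1, 4.8, 5.6, 4.9, 5.3, 5.0, 4.7, 5.4, 5.2, 4.9]
others = [2.1, 1.8, 2.3, 1.9, 2.2, 2.0, 1.7, 2.4, 2.1, 1.9]

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances assumed)."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

t = welch_t(jill, others)
print(f"t = {t:.1f}")  # far beyond the ~2.1 critical value at alpha = 0.05
```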

The time series plot provides a graphical view of the data presented above. From this chart, it is easily seen that Jill had a consistently higher rate of edits than the other interviewers. It is also seen that the other interviewers tend to vary consistently around 2, with some slight, though not significant, differences between them. The question remains of what to do with the knowledge gained from this analysis. Since this analysis began after Survey X was completed, any corrections or focused training could only be applied to subsequent surveys. Note also that it may be difficult to determine accurately whether the corrections actually decrease the number of edits required, since there may be some variation across surveys. In addition, the issue discussed earlier in this section involving the interpretation of the metric still remains. It is difficult to declare that a higher number of edits required means that more errors were made by the interviewer. However, in this example, interviewer Jill has a consistently higher edit rate, enabling the analyst to conclude that her edits are very likely due to errors she has made.

4. Control Charting

A primary tool in statistical quality control is the control chart (Montgomery, 2005). Control charts may be used to observe the properties of a process over time by monitoring one or more quality characteristics. The chart may track both changes in the central tendency (such as a mean value) and changes in the variation of a process. The metrics discussed in this paper can be monitored using control charts. A major issue in the application of most control charts is a lack of understanding of the underlying distribution of the metric. This includes not only the statistical distribution, such as the normal distribution, but the parameters, such as the mean and variance, as well.
The underlying distribution is either assumed from prior knowledge of the process or determined by examining the distributional properties of the metric. If little or no data are available, the error in these assumptions may be very high. Note also that if any change is made to the process, the underlying distribution may change. Thus, if the estimation of the parameters, or the determination of the distribution itself, is performed before a change is made, the assumptions may be erroneous. A control chart will be created for the standardized Average Length of Call metric seen in the previous section. This chart, the x̄-s chart, is used to monitor both changes in the central tendency (mean) and variation (standard deviation) of the process. The chart applied to the data seen earlier is provided below. Note that the top chart displays the average length of call calculated for each day, and the bottom chart displays the standard deviation calculated for each day. The dashed lines represent the control limits, the solid horizontal line represents the center line, and the moving line represents the average (in the upper chart) and the standard deviation (in the lower chart) of the data at the corresponding time period. The distribution of the metric, though not its parameters, may meet the assumption of normality due to the central limit theorem, assuming the sample size at each time period, i.e., the number of calls in each time period, is large enough. "Large enough" is itself a complex issue, but many works agree that a sample size of at least thirty is acceptable in most cases, which is often not a problem when observing the metric across multiple enumerators. For simplicity, at least for this paper, normality will be assumed.
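The x̄-s limits can be sketched as follows, using the textbook c4-based three-sigma formulas (Montgomery, 2005). The daily samples of standardized call lengths below are hypothetical, and this is a generic construction, not a description of NASS's actual software:

```python
import math
import statistics

def c4(n):
    """Unbiasing constant for the sample standard deviation."""
    return math.sqrt(2.0 / (n - 1)) * math.gamma(n / 2.0) / math.gamma((n - 1) / 2.0)

def xbar_s_limits(samples):
    """Three-sigma control limits for an x-bar/s chart built from
    per-period samples of equal size n."""
    n = len(samples[0])
    xbars = [statistics.mean(s) for s in samples]
    sds = [statistics.stdev(s) for s in samples]
    xbarbar = statistics.mean(xbars)  # center line of the x-bar chart
    sbar = statistics.mean(sds)       # center line of the s chart
    half_width = 3.0 * sbar / (c4(n) * math.sqrt(n))
    k = 3.0 * math.sqrt(1.0 - c4(n) ** 2) / c4(n)
    return {
        "xbar": (xbarbar - half_width, xbarbar + half_width),
        "s": (max(0.0, sbar * (1.0 - k)), sbar * (1.0 + k)),
    }

# Hypothetical standardized call lengths, five calls per day.
days = [
    [-0.2, 0.4, 0.1, -0.3, 0.0],
    [0.3, -0.1, 0.2, -0.4, 0.1],
    [0.0, 0.5, -0.2, 0.1, -0.1],
]
limits = xbar_s_limits(days)
print(limits["xbar"], limits["s"])
```

For varying daily sample sizes, the same formulas apply with n replaced by each period's own sample size, giving the varying control limits discussed later.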

Figure 4: Example of an x̄-s chart

When evaluating the results of the control charting procedure using the combined x̄-s chart, the first area to observe is the bottom chart, or the s chart. This chart shows the variation of the process at each point in time, so if the s chart is in control (i.e., the points lie inside the control limits; in this case the plotted value only needs to be below the upper control limit), the values plotted on the x̄ chart may be validly interpreted. Observing the s chart in this example, it can be seen that the variation of the process is in control, so the x̄ chart may be interpreted. First, recall that the values plotted on the x̄ chart do not represent the average length of call; each represents the average standardized length of call. Thus, a value near zero is desired, and a value far above or below zero (outside the three-sigma control limits) is considered questionable and potentially a problem. Other patterns should also be considered, such as those described in the Western Electric handbook (Western Electric, 1956), including two out of three consecutive points beyond two sigma units from the center line, four out of five consecutive points beyond one sigma unit from the center line, or eight consecutive points on one side of the center line. In this example, all points from Day 1 to Day 8 pass these tests, and thus the process is in control. However, the value at Day 9 is above the upper control limit, showing that a problem may have occurred. Thus, an investigation into the cause of the out-of-control point should be mounted. This investigation may begin by viewing the data that were summarized in the computation of the value of the out-of-control point. This could be part of a drill-down capability in the system, such as the ability to click on a given point to view more information used in the computation of that point.
Also, the use of a Computer-Assisted Recorded Interviewing system may provide the ability to review specific interviews to further evaluate the unusual observation. In this example, the average call time on Day 9 should be examined by the survey being conducted. If this shows that the problem exists in only one survey, then the investigation may be focused on that particular survey. The data may also be separated by the interviewer making the call (assuming the data plotted on the original chart span multiple enumerators). If only some interviewers are showing unusual call lengths, an inquiry into the issue may be targeted at those specific sources, potentially leading to instruction or increased training.
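The run rules described above can be checked programmatically. The sketch below is a generic implementation of the basic Western Electric rules applied to standardized values (center line 0, sigma 1), not NASS's software, and the nine-day series is hypothetical:

```python
def western_electric_flags(values, center=0.0, sigma=1.0):
    """Flag indices that violate the basic Western Electric run rules.
    Assumes one plotted value per time period."""
    z = [(v - center) / sigma for v in values]
    flags = set()
    for i, zi in enumerate(z):
        # Rule 1: one point beyond three sigma.
        if abs(zi) > 3:
            flags.add(i)
        # Rule 2: two out of three consecutive points beyond two sigma,
        # on the same side of the center line.
        if i >= 2:
            window = z[i - 2:i + 1]
            for side in (1, -1):
                if sum(1 for w in window if side * w > 2) >= 2:
                    flags.add(i)
        # Rule 3: four out of five consecutive points beyond one sigma,
        # on the same side of the center line.
        if i >= 4:
            window = z[i - 4:i + 1]
            for side in (1, -1):
                if sum(1 for w in window if side * w > 1) >= 4:
                    flags.add(i)
        # Rule 4: eight consecutive points on one side of the center line.
        if i >= 7:
            window = z[i - 7:i + 1]
            if all(w > 0 for w in window) or all(w < 0 for w in window):
                flags.add(i)
    return sorted(flags)

# Nine days of standardized ALOC values: in control until an upward
# drift at the end, as in the scenario described in the text.
days = [0.1, -0.4, 0.3, -0.2, 0.0, 0.2, -0.1, 1.5, 3.4]
print(western_electric_flags(days))  # [8] -- Day 9 is beyond three sigma
```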

Note also that the sample size may vary across time periods. A control chart may be constructed for varying sample sizes; such a chart has varying control limits, each based on the sample size at the given time period. If the chart is not adjusted for varying sample sizes, the conclusions drawn from the analysis may be inaccurate. Many different types of control charts exist and are used for various types of processes. Thus, it is important to understand the process metric under study and any properties that it may have. These properties include the distribution of the metric, such as its central tendency and variation, possible autocorrelation across time periods (see, for example, Montgomery and Mastrangelo (1991) and Adams and Lin (1994)), and characteristics as simple as the type of variable (discrete, continuous, etc.). Fully understanding the metric will ensure that the proper control chart is used.

5. Conclusion

Many of the concepts presented in this paper are not yet implemented, and further investigation must be made before determining which of these should be used and how to do so. The two metrics presented are currently not being collected or stored in a way that would support the analyses described here. Thus, the next step is to begin collecting and storing the data in a manner that may be useful in the future. Once some data are collected, a study can be performed to determine which metrics are appropriate for use in data collection quality control. Alternative means of collecting similar information should also be investigated. For example, a primary purpose of monitoring the number of edits required is to grasp issues relating to recording errors. An alternative could be to use Computer-Assisted Audio Recording (CARI), allowing someone hearing the recording to verify that the reported data have been properly recorded.
Thus, it is important to recall the purpose for tracking the metric so that alternative methods, potentially yielding better results, may be discovered. A common theme in the use of many metrics is the need to gather some, and often a large amount, of preliminary data. These data are used to determine some central tendency or expected value of the metric, as well as some idea of its expected variation. It has also been noted that this information may vary under different circumstances, such as across different surveys or even times of the day or year. It may be impossible to collect enough information to determine exactly what is expected of the metric in every case, so a large amount of variation may be present. This is especially detrimental to the use of control charts, since the width of the control limits increases with higher variation. Thus, it may be difficult to determine whether the value of a metric is actually unusual or only appears unusual due to high variation in the metric. To reduce this difficulty, large amounts of preliminary data may need to be collected. Do not forget for whom the visuals are created. Some very complex displays should not be used due to their difficult interpretation. Control charts often have this drawback. Lyberg, Biemer, Collins, de Leeuw, Dippo, Schwarz, and Trewin (1997) state that interpretation of the control charts, and even of modified control charts, is often difficult. Always consider alternative methods to display the information required to make informed decisions. Flags or other warning mechanisms may be employed instead of some charts, so that the user does not need to know how to read the charts to determine whether a potential issue has surfaced. Remain aware of the user.

References

Adams, B. M., and Lin, W. (1994). "Monitoring Autocorrelated Data with a Combined EWMA-Shewhart Control Chart," Proceedings of the Section on Quality and Productivity, American Statistical Association.

Bates, N., Dahlhamer, J., Phipps, P., Safir, A., and Tan, L. (2010). "Assessing Contact History Paradata Quality Across Several Federal Surveys," Proceedings of the American Statistical Association's 2010 Joint Statistical Meetings, American Statistical Association.

Biemer, P., and Lyberg, L. (2003). Introduction to Survey Quality. John Wiley and Sons, New York.

Boone, J. M., Parsons, J. L., Feld, S. R., Levy, J. N., and Flaherty, K. L. (2011). "Implementing Quality Control Procedures at NASS's National Operations Center," Proceedings of the International Methodology Symposium, Statistics Canada.

Boone, J. M. (2012). "Implementing Quality Control Procedures at NASS's National Operations Center," Proceedings of the Federal Committee on Statistical Methodology Research Conference, Federal Committee on Statistical Methodology.

Herzog, T. N., Scheuren, F. J., and Winkler, W. E. (2007). Data Quality and Record Linkage Techniques. Springer, New York.

Laflamme, F. (2009). "Experiences in Assessing, Monitoring and Controlling Survey Productivity and Costs at Statistics Canada," Proceedings of the 57th Session of the International Statistical Institute.

Lyberg, L., Biemer, P., Collins, M., de Leeuw, E., Dippo, C., Schwarz, N., and Trewin, D. (1997). Survey Measurement and Process Quality. John Wiley and Sons, New York.

Lyberg, L. (2009). "The Paradata Concept in Survey Research." Presentation at the NCRM Paradata Network, London, UK, August 24, 2009.

Montgomery, D. C., and Mastrangelo, C. (1991). "Some Statistical Process Control Methods for Autocorrelated Data," Journal of Quality Technology, 23.

Montgomery, D. C. (2005). Statistical Quality Control, 4th ed. John Wiley and Sons, New Jersey.

Western Electric (1956). Statistical Quality Control Handbook. Western Electric Corporation, Indianapolis, IN.


More information

How To Measure An Rsp

How To Measure An Rsp Rule 003 (Formerly EUB Directive 003) Regulated Service Provider (RSP) Service Standard and Reliability Performance, Monitoring, and Reporting Rules The Alberta Utilities Commission (AUC/Commission) has

More information

The Math. P (x) = 5! = 1 2 3 4 5 = 120.

The Math. P (x) = 5! = 1 2 3 4 5 = 120. The Math Suppose there are n experiments, and the probability that someone gets the right answer on any given experiment is p. So in the first example above, n = 5 and p = 0.2. Let X be the number of correct

More information

Table of Contents Author s Preface... 3 Table of Contents... 5 Introduction... 6 Step 1: Define Activities... 7 Identify deliverables and decompose

Table of Contents Author s Preface... 3 Table of Contents... 5 Introduction... 6 Step 1: Define Activities... 7 Identify deliverables and decompose 1 2 Author s Preface The Medialogist s Guide to Project Time Management is developed in compliance with the 9 th semester Medialogy report The Medialogist s Guide to Project Time Management Introducing

More information

Apple Health. An amazing app that visualizes health data in a useful and informative way!** By Nic Edwards

Apple Health. An amazing app that visualizes health data in a useful and informative way!** By Nic Edwards Apple Health An amazing app that visualizes health data in a useful and informative way!** By Nic Edwards Apple Health An amazing app that visualizes health data in a useful and informative way!** **Another

More information

MAINTAINANCE LABOR HOUR ANALYSIS: A CASE STUDY OF SCHEDULE COMPLIANCE USAGE. BY FORTUNATUS UDEGBUE M.Sc. MBA. PMP. CMRP

MAINTAINANCE LABOR HOUR ANALYSIS: A CASE STUDY OF SCHEDULE COMPLIANCE USAGE. BY FORTUNATUS UDEGBUE M.Sc. MBA. PMP. CMRP Introduction The objective of this paper is to draw our attention to the importance of maintenance labor hour analysis and the usage of one of the associated metrics Maintenance schedule compliance. This

More information