Data-Driven Resource Allocation Decision-Making for FEMA's Disaster Recovery Centers

Julia Moline, Federal Emergency Management Agency, julia.moline@fema.dhs.gov
Erica Gralla, The George Washington University, egralla@email.gwu.edu
Jarrod Goentzel, Massachusetts Institute of Technology, goentzel@mit.edu

ABSTRACT

Resource allocation decisions in post-disaster operations are challenging because of situational dynamics, insufficient information, organizational culture, political context, and urgency. In this paper, we present a data-driven decision process for the United States Federal Emergency Management Agency's (FEMA's) Disaster Recovery Center (DRC) Program that enables timely, transparent, and consistent decision-making during crisis. We define the decisions that must be made, identify relevant data sources, and present numerical thresholds, quantitative relationships, and optimization models to support DRC decision making. In this paper, we present the decision process and briefly discuss the development of each relationship, threshold, and indicator. We present the results of an analysis in which we applied the decision process to three historical disasters, including the finding that FEMA could have reduced costs by as much as 53-70% while providing sufficient capacity for disaster-affected individuals.

Keywords

FEMA, Disaster Recovery Centers, Decision Making, Resource Allocation

INTRODUCTION

Resource allocation decisions in post-disaster operations are challenging because of situational dynamics, insufficient information, organizational culture, political context, and urgency. In this paper, we present a data-driven decision process for the United States Federal Emergency Management Agency's (FEMA's) Disaster Recovery Center (DRC) Program that enables timely, transparent, and consistent decision-making during crisis.
We define the decisions that must be made, identify relevant data sources, and present numerical thresholds, quantitative relationships, and optimization models to support DRC decision making. In this paper, we present the decision process and briefly discuss the development of each relationship, threshold, and indicator. We present the results of an analysis in which we applied the decision process to three historical disasters, including the finding that FEMA could have reduced costs by as much as 53-70% while providing sufficient capacity for disaster-affected individuals. Note that additional consideration of geographic factors may reduce the cost savings. This paper is based upon a case study used to illustrate a general methodology for data-driven decision making outlined in Data-Driven Resource Allocation Decisions: FEMA's Disaster Recovery Centers (Moline, 2014). This paper simplifies the analyses in Moline (2014) and clarifies the results.

RESEARCH QUESTION

This work begins with the following research question: How can available data be leveraged to make and refine DRC resource allocation decisions over time?
This question has four key parts. First, we begin with DRC resource allocation decisions. For DRCs, the three primary resource considerations are facilities, staff, and equipment. The second part of the question is how decisions should be made. Quantitative analyses are extremely useful, but they must be coupled with experience and judgment. The third part of the question is critically important: to refine those decisions over time. Every disaster is different, and it is impossible to make one decision immediately after the disaster occurs that will be appropriate for the remainder of operations. Therefore, any useful post-disaster decision process must involve the flexibility to revise and refine decisions over time. Finally, we look to available data as a basis for making and refining these decisions. We are interested in making use of all of the best available data for decision making, including data from past disasters, data available immediately after the disaster occurs, and data collected in the course of post-disaster operations.

LITERATURE REVIEW

We focused our literature search on two areas: work related to post-disaster operations, FEMA, and DRCs, which helped us understand the disaster space; and work related to decision making in general, which helped frame the decision process. Several statutes, regulations, and policies describe FEMA's authority after disasters (Federal Disaster Assistance, 2011; Robert T. Stafford Disaster Relief and Emergency Assistance Act, as Amended, 2013; McCarthy, 2011). Very few publications relate specifically to DRCs or even to FEMA's decision making after disasters. One study developed a methodology for selecting specific DRC facilities in a particular county (Dekle, Lavieri, Martin, Emir-Farinas, & Francis, 2005), while another article examined low-cost alternatives to DRCs (Vanhemert, 2014). In addition, several publications characterize the nature and challenges associated with decision making (Kahneman, 2003; Vessey, 1994).
A subset of these address decision-making in post-disaster operations, offering insights into uncertainty, organizational dynamics, and other aspects of decision making in crisis (Darcy, Stobaugh, Walker, & Maxwell, 2013; Day, Junglas, & Silva, 2009; Turoff, White, & Plotnick, 2011). Some work has been done on evidence-based decision making, offering a basis for the indicators and thresholds used throughout our methodology (Bradt, 2009; Dijkzeul, Hilhorst, & Walker, 2013; Hagan & Whitman, 2006; Mainz, 2003). In addition, the importance of refining decisions over time is highlighted in several works (Crawford & Bryce, 2003; Roberts & Hofmann, 2004).

APPROACH AND DECISION PROCESS

A generalized approach to answering questions about resource allocation decision making is described in Data-Driven Resource Allocation Decisions: FEMA's Disaster Recovery Centers (Moline, 2014). For the purposes of this paper, we describe the DRC-specific decision process we developed and the underlying approach. We introduce our decision process with a flowchart that includes five primary decision points. The process includes two types of decisions:
1. Comparisons of relationships against thresholds (denoted by diamonds) and
2. Optimizations (denoted by rectangles).
Decisions are made at the county level (denoted by blue), meaning the decision is made for the entire county, or at the DRC level (denoted by orange), meaning the decision is made for each DRC. In addition, the process requires a review of other factors at two points. These points allow for the review of qualitative considerations, disaster-specific considerations, and other factors not otherwise included in the process. The general decision process is shown in Figure 1.

Figure 1: DRC Decision Process

Data and Methodology

In this section, we walk through the data and methodologies used to arrive at each threshold and relationship. Unless otherwise specified, analyses are based on data from the 2013 Colorado flash floods.

Data Collection

The analyses discussed below were possible because of systematic data collection in the field, most notably in the form of daily activity logs kept at each DRC throughout disaster operations and logs of registrations through FEMA's NEMIS system. These data were supplemented with planning assumptions provided by seasoned DRC staff.
It should be noted that, while the methodology described here is applicable to many other resource allocation problems, data availability is a critical component of the work. Without systematic data collection in the field that is connected to the resource allocation decisions in question, this method would not be useful.

The Nature of Post-Disaster Data and the Role of Time

Three types of data are available during post-disaster operations: historic data from past disasters, initial data from the first day or days, and trending data over the course of operations. Initial data is the only basis we have for making decisions in the immediate aftermath of a disaster, and we can combine it with historical data to estimate needs. However, initial and historical data form an imperfect basis for decisions. Speed, rather than accuracy, is the primary driver in collecting initial data, and historical data may not be directly relevant to the type and location of the disaster in question. Data collected over the course of the disaster is more accurate, and so we must distinguish between the initial decision and later decisions. Further, we distinguish between decisions made in the first weeks of operation and decisions made later. We do this for two reasons. First, the situation changes rapidly in the first weeks after a disaster. Second, initial estimates could vastly under- or over-estimate the number of affected individuals or the scale of the damage. To account for this uncertainty, we specify that decisions should be revisited and refined on a daily basis for the first weeks of operation and weekly thereafter.

Analysis 1: Does Expected Demand Justify Opening a DRC?

This first decision is the most critical: how do we know whether the demand we expect to see justifies opening a DRC? This question has two parts: first, the expected demand, and second, the threshold above which demand justifies opening a DRC. We discuss each below. In order to determine whether demand would justify opening a DRC, we needed to find the expected maximum or peak demand in each county.
We evaluated several initial data sources and found that registrations through FEMA's NEMIS system proved to be a significant predictor of peak DRC visits; no other data source was a significant predictor (Moline, 2014). We plotted peak DRC visits against average daily registrations for 25 counties across five disasters. Empirically, we determined that there was a power relationship between average daily registrations and DRC visits (Figure 2).

Figure 2: Peak DRC Visits and Average Daily Registrations

Beginning on day 3 after a disaster declaration, we found a relationship of the form

    V_m = c_m x R_m

where V_m is the expected peak weekly DRC visits calculated on day m, R_m is the average daily registrations through day m, and c_m is a regression coefficient. The coefficient is typically around 15, but changes depending on m (Table 1).

    m    Constant
    3    14.9
    4    14.7
    5    15.2
    6    14.2
    7    13.5
Table 1: Regression Coefficients for Peak DRC Visitor Forecasting Model

More information on the methodology, results, and limitations of this analysis is available in Data-Driven Resource Allocation Decisions (Moline, 2014).

In order to determine the minimum visitor threshold, it was necessary to determine the minimum operating capacity of small, medium, and large DRCs. We took the operating capacity, in number of visitors served, to be the product of the number of employees working, the number of operational hours, and the throughput capacity (visitors per employee-hour). We determined minimum, target, and maximum numbers of employees and operating hours based on DRC standard operating procedures (Table 2); no previous work defined minimum, target, or maximum values.

               Employees              Hours
               Small  Medium  Large   Small  Medium  Large
    Minimum    3      3       3       60     60      60
    Target     5      10      15      57     57      57
    Maximum    8      15      23      84     84      84
Table 2: Staffing and Operating Hours for Small, Medium, and Large DRCs

We used two methodologies to find throughput rates. First, to find target throughput capacity, we analyzed time-stamped records of individual visits to DRCs opened after Hurricane Sandy in New York City. We found that the average DRC visit duration is 45 minutes, i.e., each visit requires 0.75 staff-hours.
The goal is to have exactly the number of staff required to meet demand, so the target throughput capacity is 1/0.75 = 1.333 visitors per person-hour. Second, to find maximum and minimum throughput rates, we analyzed the number of visitors given exit interviews in Colorado DRCs after the 2013 floods. These records were not time-stamped but were connected to staffing numbers, which the Sandy data were not. According to the DRC standard operating procedures, the last step in every visit to a DRC is an exit interview; therefore the number of visitors interviewed should equal the total number of visitors. However, the percentage of visitors given exit interviews was very low in the first few weeks, when visitor/staff ratios were at their highest. To find maximum capacity, we set a target percentage of exit interviews and then found the corresponding visitor/staff-hour ratios, yielding a maximum throughput capacity of 2.126 visitors per person-hour. To find minimum capacity, we assumed that staff should be at least 50% utilized, i.e., half the maximum rate. The minimum, target, and maximum capacities are summarized in Table 3.

               Visitors per person-hour
    Minimum    1.063
    Target     1.333
    Maximum    2.126
Table 3: Throughput Capacity in Visitors Served per Person-Hour
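As a quick illustration of how the staffing assumptions and throughput rates combine into operating capacities, the sketch below multiplies staff, weekly hours, and throughput for the target case (a minimal sketch in Python; all names are illustrative):

```python
# Weekly operating capacity of a DRC = staff x weekly hours x throughput
# (visitors per staff-hour). Values are the target assumptions from
# Tables 2 and 3.

TARGET_STAFF = {"small": 5, "medium": 10, "large": 15}
TARGET_HOURS = 57          # target weekly operating hours (Table 2)
TARGET_THROUGHPUT = 1.333  # visitors per staff-hour (Table 3)

def target_capacity(size: str) -> int:
    """Target weekly visitor capacity for a DRC of the given size."""
    return round(TARGET_STAFF[size] * TARGET_HOURS * TARGET_THROUGHPUT)

for size in ("small", "medium", "large"):
    print(size, target_capacity(size))
```

This reproduces the target row of Table 4 below (380, 760, and 1140 visitors per week); the minimum and maximum rows reported in the paper follow the same product form but involve additional rounding and assumptions not fully spelled out here.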
We could now calculate standard operating capacities by combining Table 2 and Table 3, as shown in Table 4. To determine whether to open a DRC, we must determine whether the expected peak weekly DRC visits equal or exceed the minimum operating capacity of 96.

               Small   Medium   Large
    Minimum    96      96       96
    Target     380     760      1140
    Maximum    1431    2684     4115
Table 4: Standard Operating Capacities for Small, Medium, and Large DRCs

Analysis 2: Determine the Number and Types of DRCs Required in each County.

We now turn to the question of how many DRCs should be opened to serve peak demand. Our goals were:
1. To serve all expected visitors and
2. To minimize cost.
Costs (Table 5) included the fixed cost of opening DRCs as well as variable costs of operating DRCs and are based on previous efforts conducted within FEMA (Moline, 2014).

               Cost
    Small      $204,784
    Medium     $327,189
    Large      $482,242
Table 5: Cost Assumptions for Small, Medium, and Large DRCs

A simple linear program optimized the number of small, medium, and large DRCs required to meet expected demand. The objective function is as follows:

    minimize C_S n_S + C_M n_M + C_L n_L

where C is cost (Table 5) and n is the number of DRCs, and where the subscripts S, M, and L denote small, medium, and large DRCs. The objective function is subject to the following constraints:
1. The total number of DRCs (n_S + n_M + n_L) must be greater than or equal to 1.
2. The total combined capacity of all DRCs must be greater than or equal to the expected number of visitors, where each DRC's operating capacity is its target capacity (Table 4).
In addition, each n must be an integer greater than or equal to 0.

Analysis 3: Determine the Registration Equipment Required at each DRC.

In order to receive FEMA assistance, individuals must register by phone or computer through FEMA's NEMIS system. We determined that, on average, 25% of DRC visitors register. One registration takes approximately 20 minutes, and a minimum of 2 computers or telephones must be available in any DRC (Moline, 2014).
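These assumptions translate directly into an equipment count. A minimal sketch in Python; note that converting expected registrations into station-hours via a DRC's weekly operating hours is our reading of the rule, and the default of 57 hours is the target value from Table 2:

```python
from math import ceil

REG_SHARE = 0.25   # fraction of DRC visitors who register on site
REG_MINUTES = 20   # minutes per registration
MIN_STATIONS = 2   # minimum computers/phones per DRC

def registration_stations(peak_weekly_visitors: float,
                          weekly_hours: float = 57) -> int:
    """Computers or telephones needed so that expected registrations
    fit within the operating week, with a floor of 2 stations."""
    station_hours = peak_weekly_visitors * REG_SHARE * (REG_MINUTES / 60)
    return max(MIN_STATIONS, ceil(station_hours / weekly_hours))
```

For example, under these assumptions a DRC expecting 380 visitors per week needs only the 2-station minimum, while one expecting 5,000 visitors per week needs 8 stations.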
Therefore, the total registration equipment is a simple calculation based upon the above assumptions and the expected peak visitors: the registration equipment required is equal to the maximum of 2 and the number of computers or telephones required to meet 25% of visitors' registration needs.

Analysis 4: Determine Staffing and Hours for each DRC.

Next we determined how to staff DRCs. First, we distributed expected visitors across the DRCs in each county proportionally according to DRC size (see Moline, 2014 for a mathematical description). We then determined the number of employees and operational hours required in each DRC. For each type of DRC (small, medium, and large), we took target values from Table 2. We first determined the minimum number of staff required, based on the assumption that it is more expensive to add staff than to have existing staff work longer hours. The minimum number of staff at each DRC is taken to be the maximum of three and the number of staff required to meet demand, calculated by dividing expected visitors by the product of the target throughput capacity and the maximum weekly operating hours. Given the staffing, we calculated the minimum operating hours as the maximum of 60 hours per week and the number of hours required to meet demand, calculated by dividing expected visitors by the product of the target throughput capacity and the required staff calculated above (see Moline, 2014 for a mathematical description).

Analysis 5: Is Trending Demand at or above Minimum Capacity?

Once initial decisions have been made, trending demand should be evaluated daily during the first two weeks or more and then weekly. In the first two or more weeks, each day's demand can be assumed to be approximately equal to the demand observed the day before.
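The trending-demand check can be sketched directly. The 80%-per-week retention used here is the weekly decline rate derived from the time-series analysis described just below, and the 96-visitor floor is the minimum operating capacity from Table 4 (a sketch; names are illustrative):

```python
MIN_WEEKLY_CAPACITY = 96  # minimum DRC operating capacity (Table 4)
WEEKLY_RETENTION = 0.8    # visits decline roughly 20% per week

def forecast_next_week(this_week_visits: float) -> float:
    """After the first two weeks, forecast next week's visits as 80% of
    this week's (in the first two weeks, a day-over-day rule applies)."""
    return this_week_visits * WEEKLY_RETENTION

def consider_closing(this_week_visits: float) -> bool:
    """Flag a DRC for possible closure when forecast demand falls below
    the minimum weekly operating capacity."""
    return forecast_next_week(this_week_visits) < MIN_WEEKLY_CAPACITY
```

For instance, a DRC seeing 100 visitors this week is forecast to see 80 next week, below the 96-visitor floor, and so would be flagged for review.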
To determine demand trends after the first two weeks, we conducted a time-series analysis on DRC visitors. We found a highly significant downward trend: each week, DRC visitors decline by 20%, so next week's visits can be forecast as 80% of this week's visits. We compared expected demand values against the minimum thresholds to determine whether it is worthwhile to keep DRCs open. For instance, if we expect DRC visits to fall below the minimum threshold next week, we may consider closing DRCs at the end of this week.

Other Factors

Our decision process includes two exit points: the decision not to open DRCs, and the decision to close DRCs. Before each of these, we require a review of qualitative and disaster-specific considerations that are not otherwise accounted for in our process. Although these factors are important, they are outside the scope of this paper.

RESULTS

After we developed the decision process, we applied it to three historical disasters to understand its outcomes. We revisited decisions made after the 2013 Illinois flash floods (disaster 4116 per FEMA's numbering), the 2013 Colorado flash floods (4145), and the 2013 Illinois tornadoes (4157). We found that our process would have resulted in a 53%-70% reduction in cost; note that cost savings may be less significant once geographic considerations are taken into account. In addition, we found the following:
- The outcomes of our decisions were more efficient than the actual outcomes, in that the capacity we provided more accurately reflected demand.
- We made the correct decision not to open DRCs in several counties; in actual operations, DRCs were opened that were not justified by the demand.
- We were able to correct wrong decisions faster, in that we recognized the need to staff up or staff down more quickly than in actual operations.
- We closed low-traffic DRCs earlier and kept high-traffic DRCs open longer.
In addition, we found that an analysis at the overall disaster level, rather than at the county level, would result in additional cost savings by opening fewer DRCs to serve multiple counties. Figure 3 illustrates cost savings and the number of DRCs opened across all three disasters.

Figure 3: Actual and Proposed DRCs and DRC Costs

In each case, our decision process opened far fewer DRCs than were actually opened. This had a direct impact on the overall cost of DRCs, reducing it by 70%, 61%, and 53% for disasters 4116, 4145, and 4157, respectively. In addition, Figure 4, Figure 5, and Figure 6 illustrate the differences between the actual capacity provided and the capacity that our process would provide. In each case, the actual capacity was much higher than the demand warranted, indicating that DRCs were overstaffed and kept open longer than necessary. The proposed capacity aligns much more closely with demand by keeping staffing to the minimum necessary to meet demand and by closing DRCs earlier. County-by-county capacity comparisons are not included here but can be found in Data-Driven Resource Allocation Decisions (Moline, 2014).
Figure 4: Actual Visitors and Actual and Proposed Capacity, Illinois Floods (4116)

Figure 5: Actual Visitors and Actual and Proposed Capacity, Colorado Floods (4145)

Figure 6: Actual Visitors and Actual and Proposed Capacity, Illinois Tornadoes (4157)

In addition, the actual capacity provided jumps after the first few weeks for 4116 and 4145, the two larger disasters. In each case, these jumps occur after DRC visits have already begun to decline, indicating that increased staffing occurred too late or unnecessarily. In the case of 4145, a similar capacity jump occurs in the proposed outcome; note, however, that the jump occurs earlier and is smaller. Therefore, in each case for the disasters studied, our decision process resulted in capacity that more accurately reflected demand, ensuring appropriate service levels and drastically reducing cost.

DISCUSSION AND LIMITATIONS

This decision process can be, and has been, used in the field to make resource allocation decisions related to DRCs. A major benefit of breaking the problem down into several decisions is that each analysis can be used on its own or in conjunction with the others. In the future, it may be possible to build a decision tool that walks decision makers through each step of the process. At present, decision makers in the field are using the relationships we developed, most notably the throughput capacity and the algorithm for determining the number of staff and operating hours, to make adjustments to ongoing DRC operations.
We have shown that the decision process we developed could result in substantial cost savings to FEMA and the DRC Program. However, the analysis has several limitations that must be acknowledged. First, the process does not account for geographic considerations, including where people live. Second, the decision process focuses on individual counties and DRCs; although this aligns with the disaster declaration process, a disaster-wide perspective could offer more efficient solutions. Third, the models and thresholds were developed based on a limited set of disasters due to the availability of data. Fourth, counties in which we do not open DRCs are left without any service at all. Each of these limitations is the subject of ongoing work.

CONCLUSION

The case presented here is an example of a general approach to resource allocation decision-making presented in Data-Driven Resource Allocation Decisions (Moline, 2014). We have presented the background, data, and methodologies we used to develop a data-driven resource allocation decision process for FEMA's DRCs. In doing so, we have shown that our process can result in a 53-70% cost savings for DRCs; that we can correct wrong decisions faster; and that decision review periods are critically important.

REFERENCES

Bradt, D. A. (2009). Evidence-based decision-making in humanitarian assistance. Humanitarian Practice Network, (67). Retrieved from http://www.odihpn.org/index.php?option=com_k2&view=item&layout=item&id=3080

Crawford, P., & Bryce, P. (2003). Project monitoring and evaluation: a method for enhancing the efficiency and effectiveness of aid project implementation. International Journal of Project Management, 21(5), 363–373. doi:10.1016/S0263-7863(02)00060-1

Darcy, J., Stobaugh, H., Walker, P., & Maxwell, D. (2013). The Use of Evidence in Humanitarian Decision Making. ACAPS Operational Learning Paper. Retrieved from http://reliefweb.int/sites/reliefweb.int/files/resources/ttufts_1306_acaps_3_online.pdf

Day, J. M., Junglas, I., & Silva, L. (2009). Information Flow Impediments in Disaster Relief Supply Chains. Journal of the Association for Information Systems, 10(8), 637–660.

Dekle, J., Lavieri, M. S., Martin, E., Emir-Farinas, H., & Francis, R. L. (2005). A Florida County Locates Disaster Recovery Centers. Interfaces, 35(2), 133–139. doi:10.1287/inte.1050.0127

Dijkzeul, D., Hilhorst, D., & Walker, P. (2013). Introduction: evidence-based action in humanitarian crises. Disasters, 37(S1), S1–S19. doi:10.1111/disa.12009

Federal Disaster Assistance (2011). United States: GPO. Retrieved from http://www.gpo.gov/fdsys/pkg/cfr-2011-title44-vol1/pdf/cfr-2011-title44-vol1-part206.pdf

Hagan, J. M., & Whitman, A. A. (2006). Biodiversity Indicators for Sustainable Forestry: Simplifying Complexity. Journal of Forestry, 104(4), 203–210.

Kahneman, D. (2003). Maps of Bounded Rationality: Psychology for Behavioral Economics. The American Economic Review, 93(5), 1449–1475.

Mainz, J. (2003). Defining and classifying clinical indicators for quality improvement. International Journal for Quality in Health Care, 15(6), 523–530.

McCarthy, F. X. (2011). FEMA's Disaster Declaration Process: A Primer. Washington, DC.

Moline, J. (2014). Data-Driven Resource Allocation Decisions: FEMA's Disaster Recovery Centers. Massachusetts Institute of Technology, Cambridge, MA. Retrieved from http://dspace.mit.edu/handle/1721.1/90058
Pedraza-Martinez, A. J., Stapleton, O., & Van Wassenhove, L. N. (2013). On the use of evidence in humanitarian logistics research. Disasters, 37(S1), S51–S67. doi:10.1111/disa.12012

Robert T. Stafford Disaster Relief and Emergency Assistance Act, as Amended (2013). United States: FEMA. Retrieved from http://www.fema.gov/media-library-data/1383153669955-21f970b19e8eaa67087b7da9f4af706e/stafford_act_booklet_042213_508e.pdf

Roberts, L., & Hofmann, C.-A. (2004). Assessing the impact of humanitarian assistance in the health sector. Emerging Themes in Epidemiology, 1(1), 3. doi:10.1186/1742-7622-1-3

Turoff, M., White, C., & Plotnick, L. (2011). Dynamic Emergency Response Management for Large Scale Decision Making in Extreme Hazardous Events. In Annals of Information Systems 13 (pp. 181–202). Newark, NJ: Springer Science+Business Media, LLC. doi:10.1007/978-1-4419-7406-8_9

Vanhemert, K. (2014, February). FEMA Enlists Designers to Rethink Disaster Relief. Wired. Retrieved from http://www.wired.com/2014/02/fema-frog-teamedredesign-disaster-relief/

Vessey, I. (1994). The effect of information presentation on decision making: A cost-benefit analysis. Information & Management, 27(2), 103–119. doi:10.1016/0378-7206(94)90010-8