
Working in Practice But Not in Theory: Theoretical Challenges of "High-Reliability Organizations"

Todd R. La Porte and Paula M. Consolini
University of California, Berkeley

Journal of Public Administration Research and Theory: J-PART 1, no. 1 (January 1991): 19-47.

Note: The quotation in the title is from a remark by Walter Heuer brought to our notice by Richard Hug. This article is a revision of a paper delivered at the meeting of the American Political Science Association, Washington, D.C., September 1988, and at the Conference on the Future of Public Administration II, Minnowbrook Center, Syracuse University, September 1988. The research was supported in part by Office of Naval Research contract N k-03123, National Science Foundation grants SES and SES, and the Institutes of Governmental Studies and Transportation Studies, University of California, Berkeley. The paper draws on discussions of the High Reliability Organization Project research team; see note 4. The authors thank Karl Weick, Richard Hug, and several anonymous reviewers for their constructive comments.

Public administration practitioners and scholars harbor no illusions about organizational perfection (cf. Jaffee 1973). They do not expect bureaucracies to be error-free. People make mistakes, machines break. No one is perfect, and no organization is likely to achieve this ideal. Indeed, administrative folklore teaches that error-making is the normal bureaucratic condition: "Murphy (and his law) lives!" Yet some organizations must not make serious errors because their work is too important and the effects of their failures too disastrous. This is especially true of organizations that operate technologies that are very beneficial, yet costly and hazardous. Since midcentury, a number of technologies have emerged that have great productive as well as destructive powers. Increasingly, any failure of these technologies is perceived by both their operators and the public to have such potentially grave consequences as to warrant the absolute avoidance of failure. Examples abound: operating nuclear power plants; industrializing genetic engineering; air-traffic control; identifying dangerous drugs; assuring the safety of bridges and dams; using pesticides in agriculture; and, less dramatically, distributing electric power. Perhaps for the first time in history, the consequences and "costs associated with major failures in some technical operations are greater than the value of the lessons learned from them." This is an altogether remarkable and unexpected situation. It suggests for such organizations that learning from trial and error in operating their central production systems, while certainly likely, does not recommend itself as a confident or preferred method of system improvement.

Note: We thank Austin Hoggatt for this compact phrase.

The high-reliability goal has been part of organizational life for some time, for example, in hospital operating rooms, the delivery of water supplies, preventing accidents in the workplace, care in financial accounts, and other activities within organizations. Recently, however, high-reliability demands have been applied insistently to technical systems of such scale that the failure-free goal is [...]. The result is an organizational process colored by efforts to engage in trials without errors, lest the next error be the last trial. The more important the benefit, the more likely the operating organizations will be pressed to sustain failure-free organizational performance--the avoidance altogether of certain classes of incidents or accidents judged by overseers to result in absolutely unacceptable consequences.

In effect, organizational and political leaders and the public hold contradictory views. It is said that, "Of course, we can't depend on bureaucracy. Mistakes are made routinely, they're run of the mill. We'll learn from them to do better." Yet, "We demand this or that operation be run perfectly, or we'll withhold funds and take away authority. These organizations must not fail; we do not wish to have to learn from such failures." Operators and watchful publics assume, indeed insist, that some organizations can avoid system failures. Indeed, a number of regulatory agencies have been established in search of this happy condition. Organizational representatives may play to this hope, assuring the public that they will not fail because they claim sufficient technical knowledge to prevent it. As long as these organizations succeed, one assumes they will continue to do so. The public grows to take their benefits nearly, if perhaps nervously, for granted. Reliability and safety are technically assured, so that one need not worry overly about the social and political dynamics in these organizations.

Such insistence on sustained failure-free performance is, from a theoretical view, quite extraordinary. From the literature, one cannot expect that it is possible, even to a moderate degree. Yet there are large-scale, highly complex organizations that have taken up this goal and almost always achieve it. This is also remarkable and unexpected. Particularly visible examples include nuclear power plant operation, radioactive and toxic-waste management, widely dispersed electrical generation-and-distribution systems, large-scale telecommunication and computer networks, express air freight, and maintenance of the purity of blood supplies used for transfusions. It is notable that this class of organizations is deeply embedded in the public sector, many are operated by public servants, and few of them do not draw the searching scrutiny of regulatory bodies and an increasingly nervous public. Yet little is known systematically about the social or management aspects of such activities or the consequences for the operating organizations of attempting to attain nearly failure-free performance.

The High Reliability Organization Project at the University of California, Berkeley, has taken on this task by conducting field research in three very complex, technology-intensive organizations that are held to a failure-free standard (see note 4). These high-reliability organizations (HROs) operate hazardous systems that present the challenge in an intense form. This article draws on two of the three--air-traffic control and naval air operations at sea. While each example here describes relationships in a specific setting, it also typifies such relationships in both organizations. These organizations share the goal of avoiding altogether serious operational failures. This goal rivals short-term efficiency as a primary operational objective. Indeed, failure-free performance is a condition of providing benefits. The operating challenges are twofold: (1) to manage complex, demanding technologies, making sure to avoid major failures that could cripple, perhaps destroy, the organization; and, at the same time, (2) to maintain the capacity for meeting periods of very high, peak demand and production whenever these occur.

Each organization in the study is large, internally very dynamic, and intermittently intensely interactive. Each performs very complex and demanding tasks under considerable time pressure, doing so with a very low error rate and an almost total absence of catastrophic failure. For example, air-traffic control over the past five years has nationally recorded over 75 million instances per year in which a controller handled an aircraft across an air space. In that time, there were no instances of a midair collision when both aircraft were under positive radar control (see La Porte 1988).

A U.S. Navy nuclear carrier group involves up to ten ships. The group is centered on an aircraft carrier manned by a crew of up to 3,000 that supports an air wing of some 90 aircraft and another 2,800 men. Phases of high readiness include daily operations from midmorning to midnight. During these phases, the air department may handle up to 200 sorties, which involve some 300 cycles of aircraft preparation, positioning, launching, and arrested landings (at 50- to 60-second intervals). For a deployment period of six months there will typically be over 16,000 arrested landings with no deck accidents. Over 600 daily aircraft movements across portions of the deck are likely, with a "crunch rate"--i.e., the number of times two aircraft touch each other--of about 1 in 7,000 moves.

Note 4: The organizations are the Federal Aviation Administration's air-traffic control system and the two aircraft carriers and air wings of the U.S. Navy's Carrier Group Three, USS Carl Vinson and Enterprise. We are also studying Pacific Gas and Electric Company's electric power system; the illustrations reported here have strong parallels in the utility, including its nuclear power station. The project team has included Geoffrey Gosling, Transportation Engineering; Todd R. La Porte, Political Science; Karlene H. Roberts, Business Administration; Gene I. Rochlin, Energy and Resources; and Paul Schulman, Mills College, with student members Paula Consolini, Douglas Creed, Jennifer Halpren, ..., Suzanne Stout, Alexandra Suchard, and Craig Thomas. For an overview of the project see La Porte and Rochlin (1989) and Roberts (1989, 1990). The full study also considers organizational and technological change.

Note 5: When systems begin to take on this characteristic, societies generally turn to government to assure such performance, either as operators or as regulators. It is a remarkable task to shift to the public sector.

Like a growing number of other complex organizations, each of the two operates tightly coupled, complex, and highly interdependent technologies. Each also faces very dynamic physical, economic, and political environments.

How do such high-reliability organizations manage to attain very high levels of reliable performance, while meeting the goals of providing the capacity for sustained peak performance as well? This article outlines the conceptual challenges involved in addressing the phenomena observed in these HROs and argues that these phenomena present major theoretical surprises and puzzles in at least three areas: (1) decisionmaking in the face of catastrophic error, (2) structural responses to hazards and peakloads, and (3) challenges of modeling tightly coupled interdependence. The argument is presented here in the spirit of discovering anomalous data rather than theory disconfirmation. Nor is there an attempt at this time to resolve the theoretical puzzles the authors believe are present in the HRO phenomena.

HIGH-RELIABILITY PATTERNS AND CONCEPTUAL PUZZLES

Observations from field research suggest patterns of structure and behavior that are surprising. Those patterns cannot be straightforwardly derived from contemporary theory when the latter is used as a basis for predicting what one should see in organizations that attempt steadfastly to realize very high levels of operational reliability in the face of high hazard. Insights from the literature are scant. There is little systematic theoretical or empirical work on the dynamics of those modern organizations whose members (and the public) perceive that operational failures could result in increasingly dangerous and harmful consequences. This situation need not be problematic if HROs differed little from those trial-and-error organizations that are "failure tolerant," that is, they operate systems for which production failures are not likely to result in costly consequences and where the value of the lessons so learned is greater than the cost of making them (see note 6). The HROs in this study, however, differ from trial-and-error, failure-tolerant organizations in at least the following respects:

1. Increasingly, the physical technologies and their organizational operating units are tightly coupled, so that if important elements in routine production processes fail, the organization's capacity to perform at all is severely threatened. Failure of a component causes such damage that the capacity of the organization to perform is threatened altogether.

2. The results of operational failures are visible and increasingly feared by the public, which perceives, therefore, that it has a very high stake in assuring failure-free operations. Strong public, external pressures exist for very reliable internal operations, not only for overall performance or economic profit.

3. These HROs have, until recently, had relatively abundant resources, allowing them to invest heavily in reliability-enhancing activities. This has nurtured an organizational perspective in which short-term efficiency has taken second seat to very high-reliability operations.

The remaining discussion, concentrating on three conceptual areas, distinguishes between risk, error, and hazard, rarely using the term risk. Hazard refers to the characteristics of a production technology such that if it fails significantly the damage to life and property can be very considerable. Risk is taken in the engineering sense as the product of the magnitude of harmful consequences and the probability of an event causing them (see note 7; an illustrative calculation follows below). Error refers to mistakes or omissions in procedure or operational decisions that result in occurrences judged as undesirable and sometimes costly to remedy. Organizations continually experience errors, some of which result in consequences that threaten the viability of the organization in part or whole; this is a system failure. A high-hazard/low-risk system would be one in which a dangerous technology is operated in such a way as almost never to experience an operating failure of grievous consequence; it would be nearly failure-free--a high-reliability organization.

Note 6: If this were the case, these organizations would exhibit much the same phenomena as described or predicted in organization and management studies. See Perrow (1984) for a pointed and vivid discussion of the organizational aspects of "normal accidents" in hazardous systems from just such a perspective; cf. La Porte (1982). In null-hypothetical terms, organizations would not vary in internal authority or communication patterns, decisionmaking behavior, or internal culture as a function of the degree to which their production technologies are perceived to be hazardous or to which the consequences of individual failures in production are seen to vary in severity. This hardly seems plausible. Yet organization theory literature rarely speaks to this situation. This literature has been derived almost exclusively from organizations in which trial-and-error learning is the predominant and accepted mode of learning and improvement. Contemporary administrative/organization theories are essentially theories of trial-and-error, failure-tolerant, low-reliability organizations. For the rare exceptions, see Landau (1969, 1973), Lerner (1986), Lustick (1980), and Woodhouse (1988) for a beginning logic that calls for empirical work. There is an extensive literature on equipment reliability in the engineering literature, but it does not inform the organizational problem.

Note 7: A prior question concerns the characteristics of an organization's production technologies which result in perceptions that its failure is increasingly hazardous. For examples of studies of risk and risk perception, see Fischoff, Slovic, and Dietz et al. (1991). See also Metlay (1978).
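To make the engineering definition of risk cited above concrete, it can be written as the product of consequence magnitude and event probability. The short calculation below is an illustrative sketch only; the symbols and numbers are assumptions chosen for exposition, not figures from the study.

    R = C \times p

    \text{for example, } C = 10^{9}\ \text{(magnitude of harm)},\quad p = 10^{-8}\ \text{(probability per operation)} \;\Longrightarrow\; R = 10

On this reading, a high-hazard/low-risk system is one in which C is fixed and very large, so the only way to keep R acceptably small is to drive p toward zero--that is, to operate nearly failure-free.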

Decisionmaking in the Face of Catastrophic Failure

The literatures in organizational studies and public management treat decisionmaking largely in terms of planning versus trial-and-error learning, certainty versus uncertainty, and hierarchical versus decentralized processes. These notions suggest reasonably distinct properties that might bound the descriptions of decision dynamics in all organizations. While one sees much that is sensibly ordered by such frameworks, they do not prepare one well to anticipate the dynamics of the decision challenges faced by high-reliability organizations, where empirical evidence overwhelms analytical categories. The complexity and determinacy of the technologies and the certain harmfulness of their hazards do lead toward intensive planning and hierarchical patterns. Yet the remaining uncertainties urge an equal emphasis on operational decentralization and flexible processes.

The HROs in this study are characterized by very clear, well-agreed-upon operational goals. Those in the organizations carry on intensive efforts to know the physical and dynamic properties of their production technologies, and they go to considerable pains to buffer the effects of environmental surprises. In most regards, the organizations come close to meeting the conditions of closed rational systems, i.e., a well-buffered, well-understood technical core requiring consistency and stability for effective, failure-free operations. Decision strategies for most situations are straightforward, well-programmed standard operating procedures (SOPs). In a sense, the only decision is which SOP to apply. In other words, there is only routine decisionmaking (Simon 1957).

At first look, one sees what is expected. There is, indeed, a great deal of dependence on operator adherence to the formal procedures of operations. Both air-traffic control and carrier operating units have thick manuals of SOPs close at hand and invest much training in learning them "cold." Navy Air's NATOPS (Naval Air Technical Operations Standards) manuals and air-traffic controllers' "brown books" of procedures are good examples. They are the tested, authenticated formal procedures for the operation of most technical aspects of an extraordinary range of jobs and activities (cf. Schulman 1990). The core technologies are hazardous and time critical. Effectiveness in decisions about operations is crucial. Such organizations invest a great deal in recruiting, socialization, and incentives to assure that there is agreement about organizational mission. At the operating levels, there is rarely any question at all. Consensus is unequivocal. Technical operations are treated as if they can be almost fully known, as if surprises and contingencies can be either eliminated or anticipated. In effect, calculative, comprehensive decisionmaking can be achieved. The organizational logic in this situation is to strive for the fully rationalized operational plan. The problem is one of trying hard enough.

These illustrations are nearly pure expressions of Thompson and Tuden's "decision by calculation" (see note 8). Recall the early and well-proved work that focused upon the degree of consensus about preferences (goals) and beliefs about causation (means) and the consequences for the effectiveness of decisionmaking structures. Decision strategies vary as agreements about ends or means wax or wane. In cases of the more demanding operational situations, the appropriate techniques for equating cause-effect knowledge with known preferences are quite complicated.

The data [are likely to] be so voluminous, for example, that only [a computer] can make sense of them. Likewise, the particular sequences of action involved in the techniques may be hard to master and difficult to carry out, so that only the highly trained specialist--one for each kind of computation problem we can anticipate--can arrive at an appropriate choice.... [T]he strategy for decision is straightforward analysis--decision by computation. (Thompson and Tuden 1959, 198)

Such specialists would be constrained by four rules. They would be (1) prohibited from making decisions in issues lying outside their spheres of expert competence, and (2) bound to the organization's preference scale. (3) All pertinent information would be routed to each specialist, and (4) every issue appropriate to his/her specialty would be routed to him/her (Thompson and Tuden 1959; see note 9). The result is a formal, hierarchical, Weberian organization employing a classical bureaucratic decision process. It is the image of structure one also expects to see in military and critical organizations. The underlying assumption is that operators and specialists can know enough and that, with enough training, production processes can be operated so that grievous errors do not occur.

Yet students of organization no longer take for granted that "causation [about means can] be 'known' as soon as a decision issue appears, [and]... that the organization is certain of its preferences regarding the several alternatives apparent" (Thompson and Tuden 1959, 197). Indeed, the very idea of complete knowledge of any significant organizational decision situation is arguably impossible (see note 10). Strategies such as comprehensive analysis are viewed with suspicion as the source of major program failures (see note 11). The latter view rejects a centralized, rational decision process model in favor of one in which disagreement about means is likely. When differences of opinion or outright uncertainty about the appropriate means to accomplish an agreed-upon goal exist, then professional, skilled judgment is seen as the suitable method of decisionmaking to use: majority voting among those with experience would be the most appropriate basis for deciding. Since the organization (or unit) faced with this situation is not in a position to program or compute its decision analyses, the situation calls for trial and error, a learn-by-doing approach to implementation. Try the means judged most likely to succeed, but be prepared to recognize any failure of method. As soon as it becomes clear that one method has failed, try another. In this process, keep lines of communication open, and assure incentives that encourage the collection and reporting of information--learn from the past and do better next time.

Note 8: Thompson and Tuden (1959); see Thompson (1967) and Scott (1987a) for more recent uses and interpretations of the logic of each decision strategy.

Note 9: Interestingly, this seems a precursor to the garbage-can model of decisionmaking in a much different structural situation; see Cohen and March (1972).

Note 10: See Reason (1990) for a comprehensive review of studies of human error, mainly at the individual level. In contrast, the interest here is on the group or organizational context of human performance.

Note 11: See, for instance, Simon (1957b), March and Simon (1958), Braybrooke and Lindblom (1963), Lindblom (1959), Etzioni (1967), and especially Landau and Stout (1979).

The above are clearly the guides for incremental decisionmaking in the context of broadly rational planning. A combination of incrementalism and the hybrid concept "mixed scanning" (Etzioni 1967) should account for the decision dynamics in the kinds of organization at issue here. The incremental perspective expects that errors can never completely be avoided and, as a result, focuses on the use of error as a tool to enhance decisionmaking (see note 12). Incrementalism views decisionmaking alternatives as a choice between making large and small errors. It takes into account "only the marginal or increment differences between a proposed policy or state of social affairs and an existing one" (Harmon and Mayer 1986, 266). It relies, in part, on "a sequence of trials, errors, and revised trials" to direct (and improve) decisionmaking (Lindblom 1979, 518). This process of moving an organization in a kind of bump-and-go fashion, backing into the future, is expected to be more effective in the long run than unrealistic attempts to survey carefully and completely and weigh all alternative means. Incrementalists rightly know that the limited cognitive capacity of decisionmakers--their bounded rationality--limits the potential effectiveness of any method of analysis-based decisionmaking. "Decisionmakers have neither the assets nor the time to collect the information required for rational choice" (Etzioni 1986, 386; see also Agnew and Brown 1986).

[A]ll analysis is incomplete, and all incomplete analysis may fail to grasp what turns out to be critical to good policy [and perhaps operations].... [F]or complex problems all attempts at synopsis are incomplete. The choice between synopsis and disjointed incrementalism--or between synopsis and any form of strategic analysis--is simply between ill-considered, often accidental, incompleteness, on one hand, and deliberate, designed incompleteness, on the other. (Lindblom 1979, 519)

The mixed-scanning extension of incrementalism places trial-and-error decisionmaking in the context of the more general plan that drives the organization. Mixed-scanning analysts emphasize the division of decisionmaking efforts into "wide-angle scanning" and a "zoom" focus. When the wide-angle scan of organizational activities reveals a problem or surprise, decisionmakers should zoom in on the activity in question and determine the exact nature of the surprise and how to deal with it. The investigations made and questions asked are guided by the organization's goals. Trial-and-error decisionmaking is, thus, placed in an organizational context.

Note 12: See the work of Lindblom and others developing the concepts of incrementalism, "muddling through," and "partisan mutual adjustment." See Braybrooke and Lindblom (1963), Lindblom (1959), and Lindblom (1965) for early expressions of this perspective. See also Lindblom (1979) and Etzioni (1967 and 1986) for a revision of mixed scanning, and Lustick (1980) and Wimberley and Morrow (1981).

The incremental/mixed-scanning perspective is quite reasonable if an implicit, but fundamental, assumption is warranted: errors resulting from operational or policy decisions are limited and consequences are bearable or reversible, with the costs less than the value of the improvements learned from feedback analysis. For many of the operations on aircraft carriers and in air-traffic control centers, this is certainly the case. Day-to-day operational decisions are bounded by well-formulated and tested SOPs; calculative decisions operate much of the time. And within these bounds, application and adjustments are necessarily incremental. There are trial-and-error processes at work throughout various organizational activities (e.g., mission planning, team organization, operations scheduling, introduction of new technology and procedures, maintenance). A great deal of trial-and-error learning goes on in the small, so to speak. Actions are closely monitored so that when errors occur immediate investigations are conducted. "Hot washups," i.e., reporting problems immediately after the end of an operation, and "lessons learned" debriefings are an expected part of the aftermath of any even modestly complex naval training exercise. These are valuable contributions to the "calculative" aspects of air-traffic control and carrier operations.

But the trial-and-error aspects of incremental, professional, judgmental decision processes have a sharper, more lethal edge in these organizations than in other, more failure-tolerant ones. Often on the basis of operational trials in the past, operators and managers in these organizations have learned that there is a type of often minor error that can cascade into major, systemwide problems and failures. Some types of system failures are so punishing that they must be avoided at almost any cost. These classes of events are seen as so harmful that they disable the organization, radically limiting its capacity to pursue its goals, and could lead to its destruction. Trial-and-error iterations in these known areas are not welcome, or, as it is sometimes put, "are not habit forming." And there is a palpable sense that there are likely to be similar events that cannot be foreseen clearly, that may be beyond imagining (see Perrow 1984 and cf. Morone and Woodhouse 1986). This is an ever-present cloud over operations, a constant concern.

HROs, then, have a triple decision-strategy challenge:

1. To extend formal calculative, programmed decision analysis as widely as is warranted by the extent of knowledge, the urgency of operational needs, and the ability to train or compel adherence to correctly calculated SOPs.

2. To be sensitive to those areas in which judgmental, incremental strategies must be used, with sufficient attention to requisites of performance, evaluation, and analysis to improve the process.

3. To be alert to the surprises or lapses that could result in errors small or large that could cascade into major system failures from which there may be no recovery.

Decision theorists have dealt with the first two, supposing that an organization will generally have one or the other type of problems to overcome. Rarely is there guidance on the dynamics involved when both calculative and judgmental strategies are necessary in mixed situations. While incrementalists recognize that this strategy does not apply to fundamental decisions, such as declaring war (see note 13), they are largely silent in the face of the important decisionmaking challenges associated with the need to avoid operational failure absolutely. The more agreement there is that an activity is hazardous and calls for high operational reliability, the greater the inherent tension between (a) the behavioral expressions and norms of incremental, successive-approximation-rooted strategies and (b) those strategies emanating from comprehensive, systemic, anticipatory rationality. As the speed and potential scope of the propagation of error increase, what, then, are the expected dynamics of calculative- or judgmental-based decision processes? Although a great deal of work has been done on organization decisionmaking, there has been little serious consideration of how the challenge to be highly reliable alters decisionmaking strategies (see note 14).

Decisionmaking strategies in the organizations described here are significantly different--in mix and dynamics--from those described and prescribed by incrementalists. For some major functions, these organizations cannot wait for problems to occur and then correct them, though for other functions they do. Even the use of "sophisticated trial-and-error" decision strategies, such as "taking more stringent initial precautions than are really expected to be necessary," is not enough (Woodhouse 1988, 218). Errors in major portions of operations must also be avoided. The alternative is, therefore, to strive for trials without errors.

Note 13: See Etzioni (1967) for a discussion of the difficulties of incremental decisionmaking in "fundamental" situations. Etzioni expects errors to occur: while mixed scanning might "miss areas in which only a detailed camera could reveal trouble" (389), it is less likely than incrementalism to miss obvious trouble spots in unfamiliar areas. A similar but unaddressed situation obtains for operational processes of high hazard.

Note 14: Landau, Morone and Woodhouse (1986, chapters 8 and 9), Woodhouse (1988), and Lustick (1980) are exceptions. See also Perrow (1984) and Schulman (1980) for views that touch on these issues.

HROs struggle with decisions in a context of nearly full knowledge of the technical aspects of operations in the face of recognized great hazard. They court the dangers of attempting coordinated, integrated, and detailed attention to operations that are at once greatly beneficial and often very dangerous. The people in these organizations know almost everything technical about what they are doing--and fear being lulled into supposing that they have prepared for any contingency. Yet even a minute failure of intelligence, a bit of uncertainty, can trigger disaster. They are driven to use a proactive, preventative decisionmaking strategy. Analysis and search come before as well as after errors (see note 15). They try to be synoptic while knowing that they can never fully achieve it. In the attempt to avoid the pitfalls in this struggle, decisionmaking patterns appear to support apparently contradictory production-enhancing and error-reduction activities. The patterns encourage reporting errors without encouraging a lax attitude toward the commission of errors; initiatives to identify flaws in SOPs and to nominate and validate changes in those that prove to be inadequate; error avoidance without stifling initiative or operator rigidity; and mutual monitoring without counterproductive loss of operator confidence, autonomy, and trust.

Without attention to both the mix and the special decision requirements of high-reliability units, current analyses and prescriptions are likely to range from irrelevant to confounding and dangerous (see note 16). The challenge to students of organizational decisionmaking is to forward conceptual and prescriptive understanding of mixed-decision structures, when both comprehensive and incremental strategies may sharply increase risk and when there is not (yet) a clear sense of the dilemmas or dynamics of high-reliability decision processes.

Note 15: See Schulman (1990) and La Porte and Thomas (1990) for an unusual case from another example of a HRO.

Note 16: See Rochlin (1988) for a description of this situation during flight operations at sea.

Structural Responses to Hazards and Peakloads

The operational challenge for the HROs here is to stand ready to increase performance of a complex of technologies to deal with peakloads at any time and to avoid crippling operational failures in doing so. Do the formulations of organization theory provide a sure guide for what to expect regarding organization structure and, particularly, patterns of authority?

In a cogent, cryptic summary of literature on the relation of technology to structure, Scott (1987a) provides a starting point (see note 17):

[W]e expect technical complexity to be associated with structural complexity or performer complexity (professionalization); technical uncertainty, with lower formalization and decentralization of decisionmaking; and interdependence, with higher levels of coordination. Complexity, uncertainty and interdependence are alike in at least one respect: each increases the amount of information that must be processed during the course of a task performance. Thus, as complexity, uncertainty, and interdependence increase, structural modifications need to be made that will either (1) reduce the need for information processing, for example, by lowering the level of interdependence or by lowering performance standards; or (2) increase the capacity of the information-processing system, by increasing the channel and node capacity of the hierarchy or by legitimating lateral connections among participants. (239, emphasis added)

The technical systems at the core of the HROs here are quite complex, requiring considerable differentiation of task groupings. They also require tight (coupled) horizontal coordination between different technical units in order to produce the desired benefits and services. Two of the three conditions noted above--structural complexity and interdependence--are met. The third--technical uncertainty--is not evident and does not increase with complexity and coordination interdependence. While the summary quoted seems implicitly to expect correlative increases in complexity, interdependence, and uncertainty, this need not be the case. These organizations have gone to considerable effort to understand the physical and operational subtleties and behavior of their technical systems. There is substantial investment in often very detailed technical descriptions, analyses, and continuous review of system performance. This drive for operational predictability has resulted in relatively stable technical processes that have become quite well understood within each HRO.

The literature leads one to expect that when the task structure is complex and well-known, a finely articulated division of labor with a centralized, directive authority structure is likely to result: stable, hierarchically complex structures with substantial information flows in the interests of coordination. Departmentalization of function into homogeneous working groups will minimize coordination costs (Thompson 1967). Both formal and informal information exchanges should be evident within a framework of rules and programs representing agreements (e.g., SOPs) about how things will be done (Galbraith 1973 and 1977).

Note 17: See also Thompson (1967), who argues that administration in these situations is likely to be programmed, with a hierarchical authority structure.

"Switching rules" will signal which of a variety of activities should be performed and in what expected order, with strong emphasis on schedules to manage work flow (March and Simon 1958; Scott 1987a, 215). These are acute predictions in complex organizations of scale, especially those that are stable and whose production technologies do not present high hazard. Are they adequate descriptors when the pace quickens and hazards grow?

Certainly, one observes in the HROs the predicted structure and processes outlined above, particularly during times of routine operations. Each organization shows a face of the bureaucratic mode of operations much of the time. This forms the ordering, status/rank-oriented background structure of the organization and is adequate for organizational responses to low to moderate demand. Is this structure adequate for response during peakload or high-tempo operations?

Extensive field observations on board both aircraft carriers and within air-traffic control centers found an unexpected degree of structural complexity and highly contingent, layered authority patterns that were hazard related. Peak demands or high-tempo activities became a solvent of bureaucratic forms and processes. The same participants who shortly before acted out the routine, bureaucratic mode switched to a second layer or mode of organizational behavior. And, just below the surface, was yet another, preprogrammed emergency mode waiting to be activated by the same company of members. There appear to be richly variegated overlays of structural complexity comprised of three organizational modes available on call to the members of hazard-related units (see note 18). Authority structures shifted among (a) routine or bureaucratic, (b) high-tempo, and (c) emergency modes as a function of the imminence of overload and breakdown. Each mode has a distinctive pattern, with characteristic practices, communication pathways, and leadership perspectives.

The routine mode is the familiar bureaucratic one. It is the most often observed and is associated with the many servicing and ordering functions that involve relatively error-limited and semiskilled activities. SOPs and job procedures are reasonably good at covering many job responsibilities. Superiors can know much of what is going on. One sees the familiar hierarchical pattern of authority, rank structure, and authority of formal position. Disciplined, reliable performance is based primarily on fear of superordinate sanction. "Do what I tell you, don't negotiate!" Feedback is not valued; it is a time of punishment-centered operations.

Note 18: Cf. Rochlin (1989) for a complementary view stressing patterns of informal organization.

Just beneath the surface of routine operations is another, quite different pattern. The high-tempo mode, practiced by the same operators who engage in bureaucratic patterns during slack times, is the pattern of cooperation and coordination necessary to deliver optimum capacity for sustained periods of time. It emerges in response to the rigors of increasing demand and peakload. For example, this mode is evident during concentrated periods of flight operations at sea. During these, a variety of closely packed missions are flown, often by seventy of the Air Wing's ninety aircraft. The latter range over the five different types on board, with day and night schedules stretching from 10 am that morning to 1 am that night, a 15-hour period. A somewhat less-intense period for air-traffic control occurs at peak hours (9:30-11 am and 3-5 pm) nearly every day during the summer and midwinter times of heavy air travel.

Critical operational functions involve relatively complex, tightly coupled activities that may involve substantial hazards during concentrated operation, some of which are described in the next section. Many of these jobs can be specified in close detail, but contingencies may arise that threaten potential failures and increase the risk of harm and loss of operational capacity. In the face of such surprises, there is a need for rapid adjustment that can only rarely be directed from hierarchical levels that are removed from the arena of operational problems. As would be expected, superiors have difficulty in comprehending enough about the technical or operational situation to intervene in a timely, confident way. In such times, organizational norms dictate noninterference with operators, who are expected to use considerable discretion. Authority patterns shift to a basis of functional skill. Collegial authority (and decision) patterns overlay bureaucratic ones as the tempo of operations increases. Formal rank and status decline as a reason for obedience. Hierarchical rank defers to the technical expertise often held by those of lower formal rank. Chiefs (senior noncommissioned officers) advise commanders, gently direct lieutenants, and cow ensigns. Criticality, hazards, and sophistication of operations prompt a kind of functional discipline, a professionalization of the work teams. Feedback and (sometimes conflictual) negotiations increase in importance; feedback about "how goes it" is sought and valued.

"On the floor" in air-traffic control centers, peakload, high-tempo times put each sector's radar controllers and associate radar controllers under considerable pressure.

They can expect the challenge of "handling" up to twenty-two to twenty-five aircraft simultaneously--"twenty-five spots moving on the screen"--perhaps for several hours. It is a time of challenge, rising excitement, and strain, especially for the senior radar controller who "has the sector," that is, who is responsible for "controlling" and communicating with the aircraft aloft. The number of aircraft to be controlled is displayed on a screen next to the radar. It indicates, by columns that each hold eleven flight numbers, the aircraft already in the sector and those due within fifteen minutes. As first one column (11 planes) fills up, then two columns (22 planes), and the list laps over to a third, another controller silently joins the two who are coordinating the sector, one at the radar, the other the assistant. The one who joins may be a senior controller just off a break. It may be the area supervisor who oversees the five sectors. These adjunct controllers join vicariously in the small drama being played out during this hour of high tempo. They are watchers, "extra pairs of eyes," experts who are able to see the evolving situation and give supportive assistance, sound alerts, and provide suggestions, usually in the form of questions rather than directives. Thus, those who perhaps earlier were training or evaluating the controller "in the seat" now perform an extended team function.

In times of bad weather and peakload, when communication demands are heaviest, yet a fourth role emerges. A communications controller complements the radar controller in the communication loop, slipping into the job of communications to everyone other than the aircraft aloft--to other Federal Aviation Administration facilities, reporting changes in weather, and fielding questions from air-traffic control managers--so the radar controller is undistracted. Each person "knows the boundaries" of his/her communications realm. The supervising controller remains in the background, rarely intervening or giving direction, rather assuring that the team is refreshed and that assisting roles are filled, and "sensing the level of stress" on his/her people. Other controllers may assume the supervisory role--since the assigned supervisor is likely to be caught up with helping some controllers deal with overload. They will alert "the super" to watch a controller who looks like he is in trouble. Or they will call to one of their colleagues coming off break that things are getting busy in the affected sector.

A particularly intense episode may occur when there is a substantial change in strong wind direction, a potentially hazardous situation.

This may require a change in the final landing direction and, therefore, major shifts in the flight patterns of arriving and departing aircraft. And it may mean halving the quantity of aircraft that can be handled due to the substitution of a single runway for a dual parallel arrangement. If this happens just before or during a peakload time, especially when the flight path structure serving multiple airports in a region is tightly packed, there is very great demand on the local approach control and higher-altitude enroute center controllers.

This is the situation in the San Francisco area when the wind shifts to the southeast from the northwest. While dual-runway capacity remains the same, air traffic has to be rearranged by swinging it round 180 degrees from a southeast approach heading to a northwest one, and this must be done within an airspace that is nearly saturated much of the morning and afternoon. Since there are some three major airports, two large military air bases, and five smaller general aviation airfields in this area, there may be a rather large number of aircraft aloft. Reorienting the flight paths so much becomes a major program for the controllers on duty. The tempo at the approach-control facility and the enroute center increases, and controllers gather in small groups around relevant radar screens, plotting the optimal ways to manage the traffic as the shift in direction becomes imminent. Advice is traded, suggestions put forward, and the actual traffic is compared with the simulations used in the long hours of training the controllers undergo to deal with "the Southeast Plan." While there are general rules and controllers and supervisors have formal authority, it is the team that rallies round the controllers in "the hot seats." It will be the experienced controller virtuosos who dominate the decision train. "Losing separation"--the key indicator of controller failure--is too awful to trust to rules alone.

Notably, there are a number of contradictory behaviors and relationships between the bureaucratic and high-tempo modes. Recall that they are carried out by the same people facing different degrees of pressure. The character of delegation, communication costs, and status vary considerably.

There still remains a final, emergency-response mode that is galvanized by the highly consensual, unequivocal indications of emergency or superordinate threat. These are signals that operations are proceeding in a way ("coming unraveled") that could result in very serious, harmful consequences for the unit. These may be life threatening; they are sometimes organization threatening as well. This mode is based on a clear specification of emergency events.

When they occur, there are a number of carefully assigned, practiced operations that are activated. Flight deck crews have predetermined roles in firefighting situations. When air-traffic control computers go down, controllers know exactly what to do, for example, to "spin" the aircraft in place (fly in circles) to buy time to sort out the mess and correct the computer problem. Authority patterns are based on predetermined, preprogrammed allocation of duties, a directed--in a sense scripted--collegial teamwork process of instant response. HROs devote considerable effort to simulating emergency situations and practicing responses to them. Again, these are many of the same people who have already incorporated the bureaucratic and high-tempo modes of behavior in their behavioral repertoire.

Contemporary organization-theory literature does little to alert one to the likelihood of these multilayered, nested authority systems. In the literature, different types of organizations parallel each mode: there are bureaucratic, professional, and disaster-response organizations. Each has a distinctive character. It is unlikely that all three might be usable by the same organizational membership.

Earlier work of Lawrence and Lorsch (1967) argued that some organizations will be more highly formalized and have greater goal specificity than others and that the differences are associated with the organization's environment. As summarized in Scott (1987a), organizational forms are ranged along a single continuum: at one end are organizations that have clearly specified goals, are centralized, and have highly formalized organizational structures; at the other end are organizations that lack agreement on goals, are decentralized, and have less formalized organizational structures.

The conceptual and research questions that flow from this situation are important. How does one conceptualize nested authority structures? What is the process of arriving at the rules for shifting from one mode to another? What are unambiguous indicators of the onset of increasing load so that most or all of those who would need to undergo the shift do so in a timely manner? And perhaps most importantly for operating effects: to what degree do variations in authority preferences and styles vary the speed and onset of the shift in bureaucratic operations versus high-tempo operations?

A most interesting problem arises in situations where the organization is confronted with increasing demands and units are experiencing pressures that would be relieved by the processes of higher-tempo operations. Overlaying high-tempo operations upon bureaucratic ones (order-enhancing functions must still be carried on) adds to the dominant mode of hierarchical and directive relations those relations associated with functionally oriented leadership (nearly regardless of organizational status). In this situation, feedback is valued, negotiations and accommodation among interdependent units are critical, and interpersonal skills are of increased importance. At the same time, of course, many bureaucratic, formal organizational disciplinary relationships persist, e.g., the Code of Military Justice remains, as do the regular administrative functions of accountability.

Note 19: There are still some situations that ... operational personnel. The emergency-response mode is often operative when this happens, and a special form of the high-tempo operations mode emerges. Those on the spot with both technical skills and personal presence take charge until the emergency is in hand; then they revert to the directed mode. See, for example, ..., and the research on community response to disasters and on risk management after the Three Mile Island nuclear power plant disaster.

Note 20: The modes-of-operation observations are consistent, post hoc, with the contingency theory claim that the better the match between differentiation and the complexity of the work performed, the higher the organization's effectiveness. The more specific contingency expectations are too simple to account for the complexity and flexibility observed.

Note 21: This continuum may explain the range of trial-and-error organizational forms, but it needs elaboration to account for HROs that at times exhibit high formalization and, at others, exhibit low formalization. The modes-of-operation pattern could be rationalized in terms of Lustick's (1980) logic, but this is not a central part of his paper.

Note 22: Cf. Rasmussen (1988) for a similar insight from the engineering risk-management community.

When activities associated with high-reliability operations increase in urgency, they call for additional sets of behaviors, with the result that routine and high-tempo behaviors may be in tension. Some operational modes call for different, sometimes contradictory, behaviors and attitudes. Operational modes also represent dominant authority modes or styles: hierarchical or collegial. To what degree does an imbalance of authority skills or inclinations to use a less-comfortable style bias the character of operations in the different modes? Would a preference for collegial, professionally oriented direction lead to undue weakening of the bureaucratic order maintaining operations? Do leaders who favor hierarchical direction, based on formal positions and possibly superior knowledge, resist too long in turning to their formal subordinates for operational cooperation?

It is likely that there would be a conflict of expectations arising from the same person being subjected to several sets of authority/organizational modes. This was evident for one of the aircraft-carrier captains. He noted, one night on the bridge, the importance of encouraging deck-handling people to report mistakes that might lead to real troubles. At the same time, he recognized the irony of the situation and the clash of norms. Pointing down to the dimly lit flight deck below, he said:

I just had to sentence the third-class petty officer who fires the waist cat (catapult) to three days in the brig--on bread and water--for going AWOL [absent without leave]. He felt he had to move his mother into another place before he left on this exercise. He didn't clear this "leave" with anyone. I hated to do it. [Apropos the need to maintain loyalty and positive attitude toward his operational job.] But we have to have [bureaucratic] discipline among the men.

The range and intensity of these tensions and the organizational norms that arise to reduce them are of considerable interest. Nested authority patterns challenge organization theory to add a new level of complexity to existing models of organization decisionmaking and authority structure. The logical foundations for these models are available in the literature. Thompson's (1967) definition of organization, for example, can be modified slightly to acknowledge the challenge associated with trying to be a highly reliable organization. While these organizations may be natural systems trying to be rational, they cannot afford the errors associated with acting as if the organization has achieved complete closure when it has not.

Note: There are two views from widely divergent perspectives that are also consistent with the observations here but still too abstractly applied to use as a basis for deriving hypotheses concerning internal authority patterns in HROs. See K. Weick's use of requisite variety and the work on organizational networks, especially W. W. Powell (1989).

Note: See Lawrence and Lorsch (1967); see also the extension of the contingency theorists' views in Galbraith (1977, 107), Pfeffer (1981), and Pfeffer and Salancik (1978).

Note: Scott's (1987a) extraordinary summary also provides conceptual logics that could be used post hoc to suggest elaborations of theory once the observations have been made. However, in an attempt to assist the researchers here in doing so before the fact, Scott (1987b) found that the literature is quite limited in terms of overall organizational reliability. Its main conceptual utility is in addressing the conditions associated with individual reliability in situations in which improvements would be from relatively modest to above-average levels.

Challenges of Modeling Tightly Coupled Interdependence

The most vivid impression of the operating groups in these HROs is one of close interdependence, especially during high-tempo or emergency activities. Interactions are a mix of J. D. Thompson's sequential and reciprocal interdependencies, prompted by the functional needs of the technologies and the pressures of high demand (Thompson 1967). Relationships are complex, tightly coupled, sometimes intense and urgent. Air-traffic control dynamics and aircraft operations at sea provide many examples, several of which are outlined below.

Activities in an en route air-traffic control center have a palpable sense of ebb and flow. During the early morning hours before 6:30 am, one person handles both the radar and the associate controller roles. As activities increase to the normal routine (7 am), a radar controller--talking and directing--is assisted by an associate controller handling the paper-based backup "flight strips." The associate controller provides alerts regarding which aircraft may seek or need a change. As the high-tempo demands approach (9:30 am), the dynamics discussed above evolve. A third, often senior, controller joins the two regulars as "another pair of eyes." At top tempo (10-11 am), the area supervisor (over five sectors) may also be nearby, along with perhaps two or three other controllers who are interested spectators.

This evolution is rarely overtly directed. Rather, it is self-organized by the controllers, who take their place "next in line" to replace those controllers in the area who have gone longest without a break. "On-break" controllers observe and assist their fellow radar controllers, who are formally responsible for the watch but accept the support of "other sets of eyes." Close reciprocal coordination and information sharing are the rule.

As aircraft proceed through a sector, they must be "handed off" sequentially to adjacent sectors. This flow requires close, cryptic coordination with radar controllers "over there [in sector 44] and there [sector 32]." As an aircraft nears the sector boundary, a set sequence of communications and computer handoffs is initiated, and an acknowledgement of "handoff accepted" is expected. At the same time, aircraft are being "handed to" the radar controller, logged in and spotted by the associate controller, and acknowledged as received in turn. For a busy sector--up to twenty planes being monitored simultaneously--handoffs and "hand to's" may be coming from and going to three or four neighboring sectors, perhaps as many as five or six a minute. A helping unit--the traffic-management coordinator (TMC)--is in the background monitoring the whole ...
