Process Flexibility: Design, Evaluation and Applications



Mable Chou, Chung-Piaw Teo, Huan Zheng

This version: August 2008

Abstract

One of the most effective ways to minimize supply/demand mismatch cost, with little increase in operational cost, is to deploy valuable resources in a flexible and timely manner to meet the realized demand. This notion of flexible processes has significantly changed operations in many manufacturing and service companies. For example, flexible production systems are now commonly used by automobile manufacturers, and workforce cross-training is by now a common practice in many service industries. However, there is a tradeoff between the level of flexibility available in the system and the associated complexity and operational cost. The challenge is to have the right level of flexibility to capture the bulk of the benefits of a fully flexible system, while controlling the increase in implementation cost. This paper reviews the development of the subject of process flexibility over the past decade. In particular, we focus on the phenomenon, often observed in practice, that a slight increase in process flexibility can yield a significant improvement in system performance. This review explores the issues from three perspectives: design, evaluation and applications. We also discuss how the process flexibility concept has been deployed in several manufacturing and service systems.

Mable Chou, Chung-Piaw Teo: Department of Decision Sciences, NUS Business School, National University of Singapore. Huan Zheng: Management Science Department, Antai College of Economics and Management, Shanghai Jiao Tong University.

1 Introduction

Selling fruit juices in the canteen of the NUS Business School is a daunting task. The demands for the different types of fruit juice are highly unpredictable, depending on the weather and also the academic calendar of the student population. The problem is exacerbated by the fact that the supplies (i.e. fresh fruits) are perishable, and any leftovers often have to be discarded at the end of the day. The stall owner, however, has created an ingenious product to mitigate this problem. This product, called "Surprise", is a random mixture of fruit juices, concocted by the owner, and tailored to the tastes and preferences of the student population. You never know what you will get when you order a Surprise from the fruit stall, but alas, this element of surprise has turned the product into a best-seller, and it is currently the most popular drink in the stall. The implication for operational efficiency is also clear - the stall owner can now cleverly re-shape the demand for the different fruits, and can deploy the available resources on hand to meet the unpredictable demand in a more efficient manner. This is a clear win-win design, from both a marketing and an operational perspective. Demand shaping, however, may not be feasible in all industries. In the event that customers have specific requirements and will not switch to substitute products, some companies have resorted to the use of a flexible product which can be customized to specific use by the customers. Figure 1 shows two examples of flexible products - the first example shows a table that can be customized to varying heights, depending on the needs of the users. The second example shows the design of a flexible book shelf, where the height of the shelving can be customized by the users depending on their needs. Note that additional features have to be built into the product itself to allow for flexible deployment. The main disadvantage of a flexible product is the associated increase in production cost.
To cope with the uncertain demand, the next best thing one can hope for is to exploit commonality in parts and service requirements to make effective use of the available resources. To facilitate rapid conversion of flexible resources and supplies to meet demands for end products, it is necessary for the company to put a flexible product/service delivery system in place to tap into the power of a flexible system. This is the approach adopted by several companies in dealing with their packaging problems. For example, several companies based in Singapore have opted to use flexible packages to economize on shipping costs and to reduce wastage in packaging. In the past, these companies used standard boxes to ship out their customers'

Figure 1: Examples of flexible products.

orders. However, the boxes were often less than half filled, even though the shipping charges were computed based on the volumetric weight (industry jargon for converting volume into shipping weight for price calculation) of the standard boxes used. To save on logistics costs, these companies have pioneered the use of flexible boxes - where the shape can be customized based on the items packed into the boxes. Figure 2 shows an example of a flexible box used by a Singapore company.

Figure 2: Example of a flexible package. This box can be customized to 4 different heights, by cutting along the edges, up to the grooves as shown on the side of the box.

This strategy has also been adopted by transport authorities in several countries to deal with

congestion problems. In the early 70s, the New Jersey Port Authority allowed the morning eastbound traffic to use a lane on the westbound direction of the Lincoln Tunnel, allowing each commuter in the eastbound traffic to save up to 20 minutes every weekday morning. This little added flexibility in the lane direction allowed the transport authority to reap annual savings of close to $4 million, based on an estimated productivity value of $2.82 per hour per worker (Olcott, O. (1973)). This concept is still in use today in many cities, but often with the lane direction reversal controlled by movable barrier systems in order to prevent head-on collisions.

Figure 3: Eastbound buses operating on a westbound lane in the morning peak hour in the Lincoln Tunnel, New Jersey, in the early 70s. Source: Olcott, O. (1973)

The phrase "process flexibility" can be broadly defined as the ease of changing the system's requirements with a relatively small increase in complexity (and rework). This is part of the broader concept of flexibility in service and manufacturing, a rapidly developing area in the academic and practitioner community over the last few decades. We refer the readers to the comprehensive article by Buzacott and Mandelbaum (2008) for a historical overview of the area, and the accompanying insights and future challenges. They presented three different ways of thinking about flexibility: prior flexibility, state flexibility, and action flexibility. Prior flexibility relates to designing flexibility into the system by increasing the variety of initial actions or decisions we can make, and state flexibility relates to designing flexibility into the system by increasing the ability to cope with uncontrollable changes and uncertainty in the environment by trying to be good under any environmental outcome. Last but not least,

action flexibility relates to the ability to respond to changes and uncertainty that are revealed over time by taking effective recourse actions. Viewed in this framework, process flexibility concerns the incorporation of the right level of state flexibility into the system, by accounting for the level of action flexibility and its impact on the cost and performance of the manufacturing and service system. Process flexibility, as a key concept for responding quickly to demand/supply uncertainties at little cost, has already changed the operational processes in several manufacturing and service industries. The automobile industry, for instance, has moved away from using focused plants (where one plant produces essentially one product) to using modern flexible plants (where one plant produces several products). The Ford Motor Company invested $485 million in two Canadian engine plants to renovate and retool them with a flexible system. It has also launched a plan to equip most of its 30-odd engine and transmission plants all over the world with flexible systems.

"... The initial investment is slightly higher, but long-term costs are lower in multiples," said Chris Bolen, manager of Ford's Windsor engine plant, which uses the flexible system to machine new three-valve-per-cylinder heads for Ford's 5.4-liter V8 engine... Ford says the system will help it meet changes in demand. "If our business was hit by a significant downsizing from V8s to V6s or V6s to (four-cylinder engines) or diesels in North America, we'll be able to react to that without years of turnaround," said Kevin Bennett, Ford director of power train manufacturing. "It's essential we be able to react to the market more rapidly than in the past."

- Mark Phelan, "Ford Speeds Changeovers in Engine Production," Knight Ridder Tribune Business News.
Washington: Nov 6.

A survey of the North American automobile industry conducted in 2004 shows that the plants of major automobile manufacturers, such as Ford and General Motors, are more flexible than those of 20 years ago (Van Biesebroeck (2004); see also Boudette (2006)). The survey shows that these flexible plants can produce many more types of cars to meet rapidly changing customer demands, while their capacities do not change very much. Suh et al. (2004) contains a case study on how flexibility can be built into the automobile business. The proposal is to have dimensional flexibility in the floor pan of the underbody of the vehicle platform. This can be achieved in various ways (trimming the floor pan for long wheelbase vehicles to meet

the requirement for short wheelbase vehicles, or welding an extension piece to the floor pan of short wheelbase vehicles to accommodate the long wheelbase vehicles).

The ability to respond quickly to changes in the environment is also important in troop deployment in the military. In military tactics, the reserve force which the commander directly controls is akin to a flexible server: it has no specific task initially in the battle plan, but can be deployed in the most effective way, depending on how the battle evolves on the ground. Of course, the effectiveness of the reserve force depends on the level of state and action flexibility with which it can be deployed in the battlefield. Unfortunately, a flexible force deployment plan comes with more battle preparation and movement co-ordination on the ground. The daunting task of the commander is thus to come up with a battle plan which can be executed on the ground, and which is flexible enough to adapt to changes in the environment. The ancient Chinese had apparently mastered the art of flexibility in troop deployment. Folklore has it that the ancient eight-element battle formation, a battle array formed by eight fighting units, had the ability to change formation so quickly - for example, from attack to defense formation, with fighting units reinforcing each other - that the enemy could not see the beginning from the end of the formation. For ease of command and control, there could only be limited ways to re-deploy the fighting units. However, by co-ordinating their actions together, the formation was able to anticipate and react to a wide range of possible enemy maneuvers. Finding the structure and logic behind these flexible deployment tactics, and thus uncovering the secrets of this ancient innovation, will be an interesting challenge for the research community. Figure 4 depicts an artist's impression of the battle formation used in ancient warfare.
We supplement the review by Buzacott and Mandelbaum (2008) by narrowing our discussion to the subject of process flexibility. In Section 2, we present a basic model for the process flexibility problem, and discuss recent results obtained on the model for a variety of performance measurements (average case, worst case, etc.). We review in Section 3 the techniques that can be used to design a sparse and yet efficient process structure, and discuss in Section 4 indices created to measure and evaluate the performance of such a structure. We conclude the paper with a list of recent applications exhibiting a similar theme - that a limited amount of flexibility, properly incorporated into the system, can reap significant benefits. We do not, however, touch on the related issue of how the capacities in the system could be configured. For this, we refer the readers to the works of Van Mieghem (1998), Bish and Wang (2004) and Bish et al. (2005).

Figure 4: A depiction of battle formation in ancient Chinese history.

For earlier surveys on related work on manufacturing flexibility, see Sethi and Sethi (1990) and Shi and Daniels (2003). While we try to be comprehensive in our review, it is inevitable that we may be unaware of other important contributions. The fault is entirely ours. Our goal in this paper is to present an overview of some of the recent theoretical results obtained for this class of problems, scattered over a series of papers. However, we have also included some new results and applications that have not appeared elsewhere. For example, the expansion index, obtained from the insight that graph connectivity is a good surrogate for the concept of process flexibility, is described and presented in this survey for the first time. Furthermore, we apply the theoretical results developed in earlier papers to study multi-stage supply chain and troop deployment problems, and obtain several new insights and numerical results for these problems.

2 Models for Process Flexibility

We use a bipartite graph to represent the flexibility structure. On the left is a set A of n product nodes, while on the right is a set B of m facility/plant nodes. A link connecting product node i to facility node j means that facility j is endowed with the capability to produce product i. Let G ⊆ A × B denote the set of all such links; that is, the edge set of the bipartite graph. Hence, each flexibility configuration can be uniquely represented by a bipartite graph G. The process flexibility problem concerns the performance of flexibility at two levels. At

the level of action flexibility, the process flexibility problem boils down to solving the following classical transportation problem on m supply and n demand nodes, with process structure G:

$$Z_G(D) = \max \sum_{i=1}^{n} \sum_{j=1}^{m} x_{ij}$$

subject to

$$\sum_{j=1}^{m} x_{ij} \le D_i, \quad i = 1, 2, \ldots, n; \qquad (1)$$

$$\sum_{i=1}^{n} x_{ij} \le C_j, \quad j = 1, 2, \ldots, m; \qquad (2)$$

$$x_{ij} \ge 0, \quad i = 1, \ldots, n, \; j = 1, \ldots, m; \qquad (3)$$

$$x_{ij} = 0, \quad (i, j) \notin G. \qquad (4)$$

The vector D = (D_1, ..., D_n) encodes the demand for each product, and C_j represents the capacity/supply at plant j. Our goal is to utilize the capacities in the most efficient manner to meet the demands, subject to the constraints imposed by the process flexibility structure G. While we have presented the action flexibility problem as a static max flow problem, we need to caution that the problem encountered on the ground could be more complicated than depicted by this simple model. The information on D may not be revealed all at the same time, and yet the deployment decisions may have to be made on the spot with incomplete information. The objective function may have to take into account the cost impact of overages and shortages, and not mere flow maximization. The actual optimization problem encountered at the operational level is more aptly represented by a multi-stage stochastic programming model, but we refrain from going into the details in this paper for ease of exposition.

At the tactical level of state flexibility, we need to design the process structure G so that the system is able to respond to changes in endogenous uncertainty in the system; i.e., we want the expected maximum flow E_D(Z_G(D)) to be as large as possible. The process flexibility problem considered in this paper can be succinctly written as the following optimization problem:

$$\max_{G \subseteq A \times B} E_D\big(Z_G(D)\big).$$

Note that in essence, the process flexibility structure determines how the system can effectively allocate its capacity to handle the random demand. An alternate measurement to

characterize the performance of the process structure, in the absence of a probability measure on the space of possible scenarios, is via a worst case approach; i.e., we want

$$\min_{D \in \mathcal{D}} Z_G(D)$$

to be as large as possible, so that the structure G will perform relatively well even under the worst scenario in the set D. This changes the process flexibility problem to the following robust optimization problem:

$$\max_{G \subseteq A \times B} \; \min_{D \in \mathcal{D}} Z_G(D).$$

For both objectives, the optimal structure is clearly the one with full flexibility, i.e., the complete bipartite graph on A and B, containing all the links. However, this level of performance comes at the expense of drastically increased operational and/or communication costs at the operational level. For the process structure to be useful in practice, the number of edges in the graph G should be as small as possible. The central theme of this paper is to use a series of examples and discussions to demonstrate that in most settings, a structure with far fewer edges may already perform as well as the fully flexible system. Such results and insights are obviously important and useful in practice.

Studies on process flexibility can be traced back to the 1980s, stemming from the then-hot topic of Flexible Manufacturing Systems (cf. Stecke (1983), Browne et al. (1984)). The focus of FMS is on the trade-off of investment in dedicated versus flexible capacities (cf. Fine and Freund (1990), Van Mieghem (1998)). The challenge there is to understand the economic value of flexible resources in the system, vis-à-vis their ability to reduce capacity investments. The observation that a partial flexibility structure can perform nearly as well as a full flexibility structure was pointed out explicitly in the seminal work of Jordan and Graves (1995). Their findings were based on a study of the GM production network. They calibrated the performance of sparse partial flexibility structures against the full flexibility structure in an extensive simulation.
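The action flexibility problem Z_G(D) above is an ordinary transportation LP, so comparisons of this kind can be reproduced with an off-the-shelf solver. Below is a minimal sketch; the 2-by-2 instance and the use of SciPy's `linprog` are our own choices for illustration, not from the paper:

```python
import numpy as np
from scipy.optimize import linprog

def max_flow(G, D, C):
    """Solve the transportation LP Z_G(D): maximize total shipment
    subject to demand caps (1), capacity caps (2) and structure G (4)."""
    edges = sorted(G)
    n, m = len(D), len(C)
    A_ub = np.zeros((n + m, len(edges)))
    for k, (i, j) in enumerate(edges):
        A_ub[i, k] = 1       # row i encodes sum_j x_ij <= D_i
        A_ub[n + j, k] = 1   # row n+j encodes sum_i x_ij <= C_j
    b_ub = np.concatenate([D, C])
    res = linprog(-np.ones(len(edges)), A_ub=A_ub, b_ub=b_ub,
                  bounds=(0, None), method="highs")
    return -res.fun

# Two products, two plants with capacity 100 each; demand (120, 60).
D = np.array([120.0, 60.0])
C = np.array([100.0, 100.0])
dedicated = {(0, 0), (1, 1)}                          # no flexibility
full = {(i, j) for i in range(2) for j in range(2)}   # full flexibility

print(max_flow(dedicated, D, C))  # 160.0: plant 0 caps product 0 at 100
print(max_flow(full, D, C))       # 180.0: all demand met
```

Even on this tiny instance, a single extra link recovers the 20 units of sales lost by the dedicated configuration.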
Surprisingly, the results showed that a partial flexibility structure, if well designed, could capture almost all the benefits of the full flexibility structure. They further proposed the use of the chaining structure (cf. Figure 5) as a good design for partial flexibility structures.

Figure 5: Flexibility Structures Represented via Bipartite Graphs. A: Full flexibility. B: Partial flexibility (a chain).

2.1 Theoretical Results: Average Performance

Note that a chain in an n by n bipartite graph has 2n (flexibility) edges, whereas a fully flexible system has n² edges. It is thus surprising that a simple chain structure can perform as well as a fully flexible system in Jordan and Graves's study using the GM data. Aksin and Karaesmen (2004) proved that the performance of a regular k-chain (a generalization of a chain, where k represents the degree of each node) is increasing concave in k. The returns from flexibility added to the system are thus diminishing. Chou et al. (2007a) demonstrated this effect more succinctly by comparing the performance of the chaining structure with the fully flexible system for an asymptotically large system. They consider the following n-plant, n-product example. Assume that each plant has a fixed capacity of C_j = C units for each j = 1, ..., n, and each product has an expected demand of D_i = C units for each i = 1, ..., n as well. When the demand is uniformly distributed between 0 and 2C, Chou et al. (2007a) showed that

$$\lim_{n \to \infty} \frac{E_D\big(Z_{Chain}(D)\big)}{E_D\big(Z_{Full}(D)\big)} = 89.6\%.$$

This implies that a simple chain structure can capture close to 90% of the value of the max flow in a fully flexible system, even when the system size is very large and the demand is uniform over a range. The performance in the case of the normal distribution is even more impressive. Assuming that µ = 3σ, Table 1 shows the expected performance of the two structures over the random demand as n varies. For small n (say n = 10), their simulation shows that

the expected sales, essentially the maximum flow, in the two systems are nearly identical: chaining already achieves most (99.39%) of the benefits of full flexibility in this case. As the system size expands, the performance of chaining deteriorates slightly, but remains at an impressive level of 97.48% for n = 40. As n approaches infinity, it can be shown that the limit tends to a value close to 96% (cf. Chou et al. (2007a)).

System Size n | Chaining C | Full Flexibility F | E_D(Z_Chain(D)) / E_D(Z_Full(D))

Table 1: Expected Sales and Chaining Efficiency for Increasing System Size (numerical entries omitted)

The result of Jordan and Graves (1995) is important and exciting because it is consistent with a widely observed phenomenon: a simple and easy-to-execute structure (e.g. a chain structure) can be as good as the most complicated one (e.g. a full flexibility structure). It is thus reasonable to believe that a systematic and effective method to design a good sparse flexible structure can be adopted in applications in many different fields. Bassamboo et al. (2008) extended the theory of chaining to dynamic processing systems, through a concept called tailored pairing. Pairing is a configuration in which every two classes of demand are linked by exactly one resource, and hence differs from the concept of chaining, where there is a circular ordering of nodes and only adjacent demand nodes are linked by a resource. They used a Brownian motion approximation to show that, subject to mild conditions on the marginal cost structure as a function of the level of flexibility, tailored pairing is optimal for a symmetric system (where average demand and supply are identical and balanced) in the heavy traffic regime; i.e., in the optimal solution there is no need to invest in servers capable of serving three or more classes of demand. The authors further postulated that the same holds even for asymmetrical systems, and supported their claims via numerical simulation.
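Chaining-efficiency comparisons of this kind are easy to replicate by Monte Carlo: draw demands, solve the transportation LP for the 2-chain, and compare with full flexibility, whose max flow is simply min(total demand, total capacity). A sketch; the instance size, trial count and SciPy usage are our own choices:

```python
import numpy as np
from scipy.optimize import linprog

def max_flow(G, D, C):
    # Transportation LP Z_G(D), constraints (1)-(4) of Section 2.
    edges = sorted(G)
    n, m = len(D), len(C)
    A = np.zeros((n + m, len(edges)))
    for k, (i, j) in enumerate(edges):
        A[i, k] = A[n + j, k] = 1
    res = linprog(-np.ones(len(edges)), A_ub=A,
                  b_ub=np.concatenate([D, C]), bounds=(0, None), method="highs")
    return -res.fun

rng = np.random.default_rng(0)
n, C_unit, trials = 8, 1.0, 300
# 2-chain: product i is linked to plants i and i+1 (mod n).
chain = {(i, i) for i in range(n)} | {(i, (i + 1) % n) for i in range(n)}
C = np.full(n, C_unit)

chain_sales, full_sales = 0.0, 0.0
for _ in range(trials):
    D = rng.uniform(0, 2 * C_unit, n)    # demand ~ U[0, 2C], mean C
    chain_sales += max_flow(chain, D, C)
    full_sales += min(D.sum(), C.sum())  # full flexibility: trivial min-cut
print(chain_sales / full_sales)  # close to 1; roughly lower-bounded by the asymptotic 89.6%
```

With normally distributed demand (µ = 3σ, truncated at zero) in place of the uniform draw, the same loop approximates the ratios reported in Table 1.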
2.2 Theoretical Results: Worst Case Performance

Chou et al. (2007b) analyzed the performance of partially flexible structures using a worst case

approach. They adopt the concept of graph expansion (Bassalygo and Pinsker (1973)), which is widely used in graph theory and computer science (Sarnak (2004)), to study the process flexibility problem in this setting. Their study reveals the intimate connection between process flexibility and graph connectivity, and shows that the graph expander structure (i.e. a class of highly connected graphs with far fewer arcs than the complete graph) works extremely well as a sparse flexibility structure. This result holds for many classes of objective functions, requiring only the mild assumption that the demand is bounded around its mean; i.e., demand is never more than a constant times its mean.

Definition 1 D_i has bounded variation of λ around its mean if D_i ≤ λE[D_i] almost surely.

It turns out that when demand does not deviate substantially away from the mean, we can use the action flexibility inherent in the process structure to effectively allocate capacities to the demands. To do so, we need to understand the notion of graph connectivity associated with every process structure.

Definition 2 A structure F is k-connected if there are at least k node-disjoint paths linking every pair of nodes in A ∪ B.

A k-chain (denoted by C_k) is a subgraph of an n by n bipartite graph in which each supply node i is linked to demand nodes i, i+1, ..., i+k−1 (modulo n). The structure C_k is clearly k-connected, with kn links. However, there are exponentially many classes of k-connected graphs with kn edges for k > 2, although the 2-chain is the only 2-connected graph with 2n edges. There is a clear trade-off between the level of connectivity and the number of edges - for higher graph connectivity, the structure needs to have more edges. There is a class of highly connected graphs, called graph expanders, which has received a lot of attention in the literature. Basically, graph expanders are graphs where every small subset of nodes is linked to a large neighborhood.
The ratio of the size of the neighborhood to the size of the subset measures the expansion capability of the graph. We define the neighborhood of a subset and the concept of a graph expander formally in the following:

Definition 3 Let F be a bipartite graph with partite sets A and B. For S ⊆ A, the neighborhood of S in F is defined to be

Γ(S) = {j ∈ B : (i, j) ∈ F for some i ∈ S}.
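For small systems, Γ(S) and the expansion ratios can be checked by brute force. The sketch below (our own construction, with n = 8 chosen arbitrarily) verifies that in a 2-chain every set of k ≤ n − 1 products reaches at least k + 1 plants, with contiguous runs of products attaining this minimum:

```python
from itertools import combinations

def neighborhood(F, S):
    """Gamma(S): the plants reachable from the product subset S under structure F."""
    return {j for (i, j) in F if i in S}

n = 8
# 2-chain: product i is linked to plants i and i+1 (mod n).
chain2 = {(i, i) for i in range(n)} | {(i, (i + 1) % n) for i in range(n)}

for k in range(1, n):
    worst = min(len(neighborhood(chain2, set(S)))
                for S in combinations(range(n), k))
    assert worst == k + 1   # worst-case expansion ratio is (k+1)/k
print("2-chain expansion verified for n =", n)
```

The same brute-force loop can be pointed at any candidate structure, which makes it a convenient sanity check when experimenting with sparse designs.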

Definition 4 Let F be a bipartite graph with partite sets A and B. The structure F is an (α, λ, Δ)-expander if

- deg(v) ≤ Δ for every v ∈ A, and
- for every small subset S ⊆ A with |S| ≤ αn, we have |Γ(S)| ≥ λ|S|.

Remark:

1. For an n × n bipartite graph which is also an (α, λ, Δ)-expander, the number of edges is at most Δn.

2. A 2-chain C_2 is clearly a (1/n, 2, 2)-expander, since for each subset of size 1, there are at least 2 neighbors. Furthermore, the degree is bounded by 2. It is also a (2/n, 1.5, 2)-expander, since for every subset S of size at most 2, |Γ(S)| ≥ 1.5|S|. It is easy to check that it is simultaneously a (k/n, (k+1)/k, 2)-expander, for all k ≤ n.

3. The reason why a graph expander works well in matching supply and demand is that it ensures that any suitably small group of product nodes is connected to a relatively large number of plants. Moreover, the notion that a long chain is better than a short chain can be cast in the same light: the expansion ratios for small subsets of product nodes in long chains are higher than those in short chains.

Chou et al. (2007b) established the following main result:

Theorem 1 Consider an n × n system, where the demand D_i has bounded variation of λ with mean µ_i = µ. Assume that each plant has capacity µ. Let F be an (α, λ, Δ)-expander, with αλ = 1 − ɛ for some ɛ > 0. Then

$$Z_F(D) \ge \min\Big( \alpha\lambda n\mu, \; \sum_{i \in A} D_i \Big) \ge (1 - \epsilon)\, Z_{Full}(D) \quad \text{for all } D.$$

Note that the ɛ-optimality performance holds for every demand scenario D, and is thus the worst case performance of the expander structure, given that the demand has bounded variation of λ. This result is considerably stronger than the average case performance of the chaining

structure. Since the 2-chain C_2 in an n by n bipartite graph is a ((n−1)/n, n/(n−1), 2)-expander, we have the following immediate corollary:

Corollary 1 Suppose that (i) the demand of each product has bounded variation of n/(n−1), (ii) with mean µ_i = µ, i = 1, ..., n, and (iii) each of the n plants has capacity µ. Then Z_{C_2}(D) = Z_{Full}(D) for all D.

Remark: We note that the truncated normal distribution is often used to model product demand in many service and manufacturing settings. When σ = µ/3, and demand is truncated at one standard deviation above the mean, then according to the above, the 2-chain is always as good as the fully flexible system when n = 4. However, as n increases, the worst case performance of a 2-chain worsens compared to the fully flexible system.

In the worst case setting, the level of flexibility needed in the structure depends on the magnitude of demand deviation above the mean (i.e. the level of uncertainty), which corresponds to the graph expansion ratio (parameter λ) in the expander structure. For the k-chain, this ratio depends on n, and hence the performance of the k-chain suffers as n increases. To find good process structures for large n, we use a different class of graph expander structures. In particular, we use expanders where the number of edges can be much smaller than the number of edges in a fully flexible system. The existence of such structures is well known and is by now folklore in the graph theory community.

Theorem 2 [Asratian et al. (1998)] For any n, λ > 1, and α < 1 with αλ < 1, there exists an (α, λ, Δ)-expander, for any

$$\Delta \ge \frac{1 + \log_2 \lambda + (\lambda + 1)\log_2 e}{-\log_2(\alpha\lambda)} + \lambda + 1. \qquad (5)$$

Note that the lower bound on the degree Δ is independent of n, and hence the number of edges in this class of graph expanders is linear in n. The implication for the process flexibility problem can be stated more succinctly as follows:

In the symmetrical system, for any given demand distribution with bounded variation λ, we can find a corresponding α with αλ ≥ 1 − ɛ, for some ɛ > 0, such that for n sufficiently large, we can always find a process structure using at most Δn edges, where Δ is given by the right hand side of (5), such that the worst case performance of the structure is at least (1 − ɛ) times that of the fully flexible system.

For more general systems (i.e. the number of product nodes and plant nodes might be different, and products follow different demand distributions), Chou et al. (2007b) proposed a generalization using the concept of a Ψ-expander, with a high Ψ (0 < Ψ ≤ 1). Suppose the demand with mean µ_i is assumed to be bounded in [λ_i(L)µ_i, λ_i(U)µ_i]. We say that the demand has bounded variation of λ_i(L) and λ_i(U) in this case.

Definition 5 A Ψ-expander in the process flexibility problem is a bipartite graph in A × B with

$$\sum_{j \in \Gamma(S)} C_j \ge \min\Big( \sum_{i \in S} \lambda_i(U)\mu_i, \; \Psi \sum_{j \in B} C_j - \sum_{i \notin S} \lambda_i(L)\mu_i \Big)$$

for all subsets S ⊆ A. The definition of a Ψ-expander partitions the subsets of A into two sets:

(i) For small subsets S such that Σ_{i∈S} λ_i(U)µ_i + Σ_{i∉S} λ_i(L)µ_i ≤ Ψ Σ_{j∈B} C_j, we have

$$\sum_{j \in \Gamma(S)} C_j \ge \sum_{i \in S} \lambda_i(U)\mu_i,$$

and hence the plants supplying the small subsets have sufficient capacity to deal with the demand arising from the small subset.

(ii) At the same time, the capacity connected to a non-small subset is also large enough, i.e.

$$\sum_{j \in \Gamma(S)} C_j \ge \Psi \sum_{j \in B} C_j - \sum_{i \notin S} \lambda_i(L)\mu_i,$$

so that at least a Ψ proportion of the total capacity is utilized in the worst case. It is thus easy to see that a structure with Ψ = 1 is as good as full flexibility.

Theorem 3 (Chou et al. (2007b)) Let F be a Ψ-expander. When D_i has bounded variation λ_i(L) and λ_i(U), then for every demand realization, we can find a solution for Z_F such that either (a) all the plants are operating below their configured capacity level (because of insufficient demand), or (b) at least a Ψ proportion of the total pre-configured capacity has been utilized.

The above theorem suggests that a Ψ-expander has the following nice property - as long as the demand for each product falls in the range [λ_i(L)µ_i, λ_i(U)µ_i], the process structure guarantees a utilization rate of 100Ψ% in the entire system!

Example: Consider the process flexibility problem with 5 plants and 5 products. The capacity at each plant is 100 units, whereas the demand for each of the 5 products is between 50 and 150, with a mean of 100. Note that we did not specify the precise structure of the demand distributions. A fully flexible system in this case contains 25 edges, whereas a 2-chain has only 10 edges. Note that the demand is always within 1.5 times its mean. Hence the 2-chain has bounded variation with λ_i(L) = 0.5 and λ_i(U) = 1.5. It can then be shown easily that the 2-chain is a 1-expander. Thus the 2-chain structure in this case has the same performance as the fully flexible system, for all demand realizations!²

2.3 Theoretical Results: Randomized Performance

The structural results in the worst case setting provide a glimpse of what good process structures look like. However, for more complicated cases (such as the non-identical and asymmetrical demand setting), such structural insights may not be readily available. Instead of focusing on the performance of a specific structure, we can also study the average performance of a class of process structures. This route of analysis allows us to examine the performance of sparse structures in other classes of design problems. To this end, we need to generalize the notion of process flexibility. Consider the problem

$$(P): \quad Z(b, \{1, \ldots, n\}) = \max\Big\{ \sum_{i=1}^{n} c_i x_i : Ax \le b; \; x_i \ge 0, \; i = 1, \ldots, n \Big\}.$$

The dual problem is given by

$$(D): \quad Z(b) = \min\Big\{ \sum_{j=1}^{m} b_j y_j : A^T y \ge c; \; y_j \ge 0, \; j = 1, \ldots, m \Big\}.$$

² One of the authors has often played a game with students in class - the students can choose demand values of any kind, as long as each value falls in the range [50, 150].
The instructor will then compare the performance of a 2-chain and a fully flexible system. Most students are amazed that the simple 2-chain always achieves the same performance as the complicated fully flexible system.
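The classroom game above is easy to replay numerically. The sketch below (our own illustration, with the 2-chain and instance parameters taken from the example) computes Z_F(D) as a max flow: a source feeds product i with capacity D_i, structure arcs are uncapacitated, and plant j drains into a sink with capacity C_j:

```python
from collections import deque

def max_flow(num_nodes, cap, s, t):
    """Edmonds-Karp max flow; cap maps directed arcs (u, v) to capacities."""
    adj = {u: set() for u in range(num_nodes)}
    residual = {}
    for (u, v), c in cap.items():
        adj[u].add(v); adj[v].add(u)
        residual[(u, v)] = residual.get((u, v), 0.0) + c
        residual.setdefault((v, u), 0.0)
    flow = 0.0
    while True:
        parent = {s: None}              # BFS for an augmenting path
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and residual[(u, v)] > 1e-9:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        path, v = [], t                 # trace the path back and augment
        while parent[v] is not None:
            path.append((parent[v], v)); v = parent[v]
        push = min(residual[e] for e in path)
        for u, v in path:
            residual[(u, v)] -= push
            residual[(v, u)] += push
        flow += push

def structure_flow(edges, demand, capacity):
    """Z_F(D): source -> products (cap D_i) -> plants (structure arcs) -> sink (cap C_j)."""
    m, n = len(demand), len(capacity)
    s, t = 0, m + n + 1                 # products are nodes 1..m, plants m+1..m+n
    cap = {(s, 1 + i): demand[i] for i in range(m)}
    cap.update({(1 + m + j, t): capacity[j] for j in range(n)})
    cap.update({(1 + i, 1 + m + j): float("inf") for i, j in edges})
    return max_flow(m + n + 2, cap, s, t)

C = [100.0] * 5
chain2 = [(i, i) for i in range(5)] + [(i, (i + 1) % 5) for i in range(5)]
full = [(i, j) for i in range(5) for j in range(5)]
D = [150.0, 150.0, 150.0, 50.0, 50.0]   # an adversarial demand draw
print(structure_flow(chain2, D, C), structure_flow(full, D, C))  # both print 500.0
```

For every demand vector in [50, 150]^5 the two flows coincide, matching the 1-expander claim; for a structure that is not a 1-expander (e.g., dedicated plants) the gap can be substantial.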

(D) is a linear programming problem with m variables and n constraints. If we sample N constraints from (D), and denote the set of constraints sampled by S, we obtain the problem

    (D(S)): Z(b, S) = min { Σ_{j=1}^m b_j y_j : A_S^T y ≥ c_S; y_j ≥ 0, j = 1,...,m }.

The dual of this problem is

    (P(S)): Z(b, S) = max { Σ_{i ∈ S} c_i x_i : A_S x_S ≤ b; x_i ≥ 0, i ∈ S }.

Let x*(b) denote an optimal solution to Z(b, {1,...,n}). Note that since b is random, x*(b) is also a random vector. We assume that problem (P) has an optimal solution x*(b) with the following property:

(Property A): x*_i(b) ≤ λ E_b(x*_i(b)) almost surely, for some constant λ > 0 (independent of n), and for all i = 1,...,n.

The above property essentially states that for each realization of b, the optimal primal solution x*(b) should not be too far above its mean value. This is similar to the bounded variation assumptions in the process flexibility problem.

Theorem 4 Suppose Property A holds for (P). Then there exists a set S with cardinality N = O(λm/ɛ), such that E_b(Z(b, S)) ≥ (1 − ɛ) E_b(Z(b)).

In the case when n >> m, S is considered a sparse set and plays the role of the sparse structure in the process flexibility problem. This theorem identifies a natural condition under which we can expect a suitably chosen sparse subset S (with O(m) variables) to perform nearly as well as a fully flexible system with n variables, even if n >> m. To construct a sparse set S with near-optimal performance, we need to understand the behavior of the (random) optimal solution x*(b). The set S in the above theorem is constructed by repeatedly sampling from the distribution of x*(b), up to O(λm/ɛ) times. We will see in later sections that for several classes of problems, the optimal solution x*(b) can be obtained in closed form. For the technical details pertaining to this result, we refer the readers to Chou et al. (2007a).

3 Construction Methods

The process flexibility problem can be modeled as a two-stage stochastic programming problem, where the first-stage decision concerns the selection of links subject to budget constraints (a 0-1 discrete problem), whereas the second-stage problem concerns finding the best recourse action to match supply with demand, so as to obtain the expected second-stage cost function arising from the optimal recourse decisions. The latter is a difficult problem, since it involves evaluating the expected maximum flow (or worst-case max flow) in a general bipartite graph. While there are numerical methods that can handle this kind of problem when the size is small, they are not widely used in practice, due to the difficulty of finding reliable distributional information on demand and accurate cost estimates. To the best of our knowledge, this algorithmic design problem has also been largely overlooked by the research community, in part due to the technical difficulties associated with it. We describe next several heuristics that can be used to construct good process structures quickly, exploiting the structural insights discussed in the previous section. We also review a recent approach to this problem using the theory of robust optimization.

3.1 Chaining Method

The chaining concept (Jordan and Graves (1995)) is arguably the most influential strategy used in practice to design good process structures. It is based on two key insights from Jordan and Graves (1995)'s landmark study: by adding a small amount of flexibility at the proper places in an otherwise rigid system, we can achieve a significant improvement in its performance; and the links should be added to the structure so as to obtain long chains. A chain refers to a path which goes through a group of distinct product and plant nodes in consecutive order, returning to the start node at the end to form a cycle.
A long chain is preferred because it has the ability to pool the plants' capacity and the products' demand together, and thus can deal with demand uncertainties more effectively than a short chain. Based on the above conceptual ideas, Jordan and Graves (1995) provided three other general guidelines for designing process structures. They advise the designer to

- try to equalize the capacity allocated to each product node in the chain;
- try to equalize the expected demand allocated to each plant node in the chain;
- construct a long circular chain visiting as many nodes as possible.

The first two guidelines aim to help the plants achieve higher capacity utilization and satisfy more demand. In the symmetric case, with each product and each plant having the same level of mean demand and capacity, we can use the three guidelines to obtain a regular 2-chain. The above guidelines were obtained and validated by extensive numerical simulation. However, these guidelines alone do not provide an implementable heuristic which can be used to design a process structure. Jordan and Graves (1995) mentioned that they had no firm guidelines for adding flexibility in more general cases. In fact, in their work, they used the guidelines they developed to obtain various process structures, and then used numerical simulation to estimate the performance (e.g., average lost sales, unused capacity at each product and plant, etc.) of each structure, to determine the best process structure. This process is tedious and time-consuming. The next method tries to address this deficiency, using a heuristic guided by the graph expansion concept.

3.2 Node Expansion Method

The previous section shows that the performance of a process structure is intimately connected to its underlying connectivity property. We can thus design a good process structure by constructing a graph with a good expansion ratio. The latter problem was recently investigated by Ghosh and Boyd (2006). However, their approach works mainly for the symmetrical case (graph expansion is, after all, a concept originating in graph theory), and does not address the issues associated with an asymmetrical supply and demand setting. The notion of graph expansion is nevertheless connected to the design guidelines popularized by Jordan and Graves (1995).
The first and second guidelines seek to equalize the capacity and expected demand allocated to each product and plant respectively. This is related to the concept of the expansion ratio for all (small) subsets containing only a single node. We extend this strategy further by fully exploiting this connection.
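As an illustration, the singleton expansion ratios can drive a simple greedy link-addition procedure. The sketch below is our own rendering, not the exact heuristic of Chou et al. (2007b); in particular, the tie-breaking and stopping rules here are assumptions:

```python
def node_expansion_heuristic(mu, C, base, budget):
    """Greedily add links to a base structure, guided by singleton expansion ratios.

    mu     : expected demand of each product
    C      : capacity of each plant
    base   : initial (rigid) set of (product, plant) links
    budget : total number of links allowed
    """
    edges = set(base)
    while len(edges) < budget:
        # product expansion ratio: connected capacity / expected demand
        pr = [(sum(C[j] for (i2, j) in edges if i2 == i) / mu[i], i)
              for i in range(len(mu))]
        i = min(pr)[1]
        # plant expansion ratio: connected expected demand / capacity
        pl = sorted((sum(mu[i2] for (i2, j2) in edges if j2 == j) / C[j], j)
                    for j in range(len(C)))
        for _, j in pl:
            if (i, j) not in edges:     # skip links that already exist
                edges.add((i, j))
                break
        else:
            break                        # product i is already fully connected
    return sorted(edges)

# start from a dedicated 3x3 structure and allow 6 links in total
print(node_expansion_heuristic([1.0] * 3, [1.0] * 3, [(i, i) for i in range(3)], 6))
```

In the symmetric case the procedure tends to give every product a second link, which is the behavior the guidelines above are after; on asymmetric instances the ratios steer capacity towards the most demand-starved products.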

The node expansion method works by checking the expansion ratio of all singletons, although the method can easily be extended to handle all subsets with up to k (k fixed, small) nodes. Starting from any rigid base structure, the method augments the structure by adding links iteratively to improve the node expansion ratio in a greedy manner: in each iteration, we add a link connecting the product node with the lowest product expansion ratio (i.e., the ratio of the total capacity of the connected plants to the product's expected demand) to the plant node with the lowest plant expansion ratio (i.e., the ratio of the total expected demand of the connected product nodes to the plant's capacity). We skip this edge if it has already been added to the structure, and move to the plant node with the next smallest expansion ratio. We repeat this procedure until the number of added links reaches the pre-determined limit. Simulation studies by Chou et al. (2007b) show that this design heuristic can generate good process structures in several applications. In fact, on the process flexibility problem encountered at GM, the structure obtained using this direct constructive approach is already as good as the structure developed by Jordan and Graves (1995) guided by extensive numerical simulation. We refer the readers to Chou et al. (2007b) for details.

3.3 Sampling Method

The sampling method builds on Theorem 4. For the process flexibility problem, the optimal solution is trivial under the full flexibility structure F. Note that in this problem, with demand D = (D_1, D_2,...,D_m) and capacity C = (C_1,...,C_n), the max flow problem Z_F(D) has up to nm variables, with only O(n + m) constraints. There are multiple optimal solutions to this simple problem, and the properties of the extreme point solutions are well known. For our purpose, however, we need a closed-form solution to the problem.
Fortunately, this is easy when the structure corresponds to the full flexibility structure F.

Lemma 1 In the process flexibility problem,

    x*_ij(D) = D_i C_j / max{ Σ_{i=1}^m D_i , Σ_{j=1}^n C_j },   i = 1,...,m, j = 1,...,n,

is an optimal solution to Z_F(D) under the full flexibility structure F. Furthermore,

    Z_F(D) = min{ Σ_{i=1}^m D_i , Σ_{j=1}^n C_j }.

Using this solution, we can construct a (random) process structure by sampling arc (i, j) with probability proportional to x*_ij(D). The sampling heuristic basically has two stages. In the first stage, the sampling probability for each link (i, j) (i = 1,...,m, j = 1,...,n) is estimated by calculating the empirical average of the flow on arc (i, j); this is used in the case when x*_ij(D) does not have an easy closed-form expression. In the second stage, structures are created by selecting links using the estimated sampling probabilities, and the structure with the best performance is ultimately selected. Note that the sampling method uses numerical simulation to obtain the performance of the structures sampled, and in a way this is identical to the approach used by Jordan and Graves (1995). The advantage of the sampling method is that it is systematic, and can be applied to a wide variety of other problems, ranging from capacity pooling networks (Chou et al. (2007a)) to transshipment network design (Lien et al. (2005)). The disadvantage, on the other hand, is that the sampling method cannot ensure that every structure sampled will be good, and an additional evaluation step (e.g., through simulation) is needed to identify a good structure. Furthermore, although the method works theoretically, we do not have a good qualitative understanding of the features inherent in near-optimal sparse structures.

3.4 Robust Optimization

The process flexibility problem can be recast as a two-stage stochastic maximum flow problem as follows:

    max_x E_P[ Q(ξ, x) ]
    s.t. Σ_i x_i = K, x_i ∈ {0, 1} for all i,

where K represents the maximum number of links in the process structure, x denotes the design decision variables, and ξ denotes the random demand parameters in the problem, with distribution P.

The recourse function is given by:

    Q(ξ, x) = Z_{G(x)}(D),

where ξ = D, and e ∈ G(x) if and only if x_e = 1. When the probability measure on the scenario space is not explicitly given, we can instead maximize the worst-case performance of the design over an uncertainty set U, to obtain a robust two-stage maximum flow problem:

    max_x min_{ξ ∈ U} Q(ξ, x)
    s.t. Σ_i x_i = K, x_i ∈ {0, 1} for all i.

To solve this robust optimization problem, the challenge is to find a nice uncertainty set U, so that the problem

    max_x t
    s.t. Σ_i x_i = K, x_i ∈ {0, 1} for all i,
         t ≤ Z_{G(x)}(ξ) for all ξ ∈ U,

can be recast as a tractable convex optimization problem, even if the set U has infinitely many scenarios. Unfortunately, when U represents the set of demand scenarios with bounded variation around the means, the above robust formulation cannot be solved in polynomial time unless P = NP. We refer the readers to Atamturk and Zhang (2007) for details, and for some special cases in which the robust two-stage network design problem can be solved in polynomial time. The currently available approach to the robust process flexibility problem is the cutting plane method, which is not computationally efficient.

4 Measurement and Evaluation

The research on the evaluation of process flexibility has so far focused on creating indices to rank process flexibility structures in terms of the level of flexibility inherent in the structure. This line of pursuit complements the constructive approaches, and seeks ways to allow managers to compare different process structures quickly, without the need to evaluate their performance in simulation, when cost parameters and/or distributional information on the uncertain parameters are lacking. The computation of the indices uses minimal information on the uncertain demand parameters (mostly average values). These indices are thus usually easy to compute, and can

effectively rank the structures in terms of performance, though they cannot give an absolute value of the performance of the different process structures.

4.1 JG Index

Jordan and Graves (1995) developed a probabilistic index to measure the performance of a given structure. For any subset S of demand nodes, they focus on the probability that the unsatisfied demand in that structure exceeds that of the corresponding full flexibility structure, i.e.,

    P[ Σ_{j ∈ S} D_j − Σ_{i ∈ Γ(S)} C_i > max( 0, Σ_j D_j − Σ_i C_i ) ],

where the sums in the max run over all demand and plant nodes. The largest such probability among all subsets is used as a flexibility index to compare structures. A good flexibility structure should have a low index, since a more flexible structure should deal with demand uncertainty more effectively, and thus the unfilled demand of the structure should be as close to that of the full flexibility structure as possible. However, the JG index is usually very hard to compute.

4.2 Structural Flexibility Index

The index developed by Iravani et al. (2005) is arguably a milestone in the development of a measure for process flexibility. In this study, a suitably defined structural flexibility matrix (SF matrix) M was proposed to calibrate the performance of a process structure. An entry (i, j) in M, denoted by M(i, j), represents the maximum number of non-overlapping routes from demand node i to demand node j, whereas M(i, i) is the number of arcs connected to demand node i (its degree). The largest eigenvalue and the mean of the entries of the SF matrix M are used as two alternative indices to determine the level of flexibility in a process structure. SF indices are much easier to compute and work better than probabilistic indices in many examples (cf. Iravani et al. (2005)), although the computation of each entry in the matrix requires solving a maximum flow routine.

4.3 Expansion Index

We propose an alternative index to measure the performance of a process structure, using the observation that a graph with higher connectivity tends to be more flexible, which has been theoretically verified in the balanced and identical case. Let N denote the number of nodes (including both supply and demand nodes), and L the number of links in the structure. The expansion index is defined as the second smallest eigenvalue of the Laplacian matrix L = TT^T, where T is an N × L matrix. For link l connecting node i ∈ A (with mean µ_i) and node j ∈ B (with capacity S_j), the corresponding entries in column l of T are

    T_il = 1/√(µ_i S_j),  T_jl = −1/√(µ_i S_j),  and T_kl = 0 for all k ≠ i, j.

The index is developed using a well-known observation in graph theory that the second smallest eigenvalue of L, i.e., λ_2(L), is a good surrogate for measuring the connectivity of the underlying graph (see Fiedler (1973), Ghosh and Boyd (2006)) in the case when µ_i = S_j = 1 for all i, j. As shown in the earlier section, a graph with a good expansion ratio (high λ_2(L)) is highly connected and is more flexible in matching supply and demand. λ_2(L) can thus be used as an alternative index to rank process structures. The index λ_2(L), as compared to the SF index M, has the key advantage that it has been thoroughly studied in the literature.

4.4 Numerical Comparison

These indices have been tested in the following numerical experiments. The first experiment ranks the performance of two different graphs, the Levi graph and the regular 3-chain (cf. Figure 6). Both graphs are regular with degree 3, and hence have an equal number of edges. The Levi graph has a slightly better expansion ratio for small subsets up to order 3. In fact, it can be shown that each subset of order 3 on one side has at least 5 neighbors, whereas it is easy to find subsets of order 3 in the 3-chain with only 4 neighbors.
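The expansion index is straightforward to compute numerically. The sketch below is our own illustration: it takes the column entries of T to be ±1/√(µ_i S_j) (our reading of the definition in Section 4.3, so that L reduces to the ordinary graph Laplacian when µ_i = S_j = 1), and compares a 2-chain with full flexibility on a symmetric 5×5 system:

```python
import numpy as np

def expansion_index(edges, mu, S):
    """lambda_2 of L = T T^T, with T_il = 1/sqrt(mu_i S_j), T_jl = -1/sqrt(mu_i S_j)."""
    m, n = len(mu), len(S)
    T = np.zeros((m + n, len(edges)))
    for l, (i, j) in enumerate(edges):
        w = 1.0 / np.sqrt(mu[i] * S[j])
        T[i, l], T[m + j, l] = w, -w
    eig = np.linalg.eigvalsh(T @ T.T)   # eigenvalues in ascending order
    return eig[1]                       # eig[0] is ~0 for a connected graph

mu, S = [1.0] * 5, [1.0] * 5
chain2 = [(i, i) for i in range(5)] + [(i, (i + 1) % 5) for i in range(5)]
full = [(i, j) for i in range(5) for j in range(5)]
print(expansion_index(chain2, mu, S))   # ~0.382, the Fiedler value of a 10-cycle
print(expansion_index(full, mu, S))     # 5.0, the Fiedler value of K_{5,5}
```

As expected, the denser, better-connected structure receives the higher index.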
As shown in Table 2, λ_2(Levi) (0.55) is higher than λ_2(Regular) (0.05) in the symmetric case when all demand means and capacity levels are identical. This indicates that the Levi graph could be a better process structure than the 3-chain. This is consistent with our simulation

experiments: the Levi graph can indeed support a slightly higher amount of flow in the process flexibility problem, compared to the 3-chain, for a variety of demand distributions.

Figure 6: Levi graph and a regular chain with degree 3.

We have also evaluated the performance of the expansion indices and SF indices, using two representative examples studied in Iravani et al. (2005). The first example is a group of structures with random demand µ = (1.5, 1, 0.5, 0.5, 1, 1.5) and fixed capacity S = (1, 1, 1, 1, 1, 1) (cf. Figure 7). The second example is a group of structures with random demand µ = (1, 1, 1, 1, 1, 1, 1, 1) and fixed capacity S = (1, 1, 1, 1, 1, 1, 1, 1), as shown in Figure 8. We use E(Z^e_F(D)), the expected excess flow in structure F, as the benchmark to evaluate the performance of structure F. E(Z^e_F(D)) is closely related to the max-flow objective E(Z_F(D)), because the excess flow Z^e_F(D) is simply Σ_{i=1}^n D_i − Z_F(D), the amount of unmet demand. E(Z^e_F(D)) is a better benchmark for this experiment because it is more sensitive and does not scale as much as the max-flow objective when the problem size changes. We approximate E(Z^e_F(D)) by sampling 200 demand scenarios, each with demand D_i uniformly distributed on (0, 2µ_i). In both cases, the ranking obtained from the expansion index is consistent with the ranking given by (the empirical average of) E(Z^e_F(D)). The ranking given by the SF index, on the other

Figure 7: SF Group 1: structures with demand µ = (1.5, 1, 0.5, 0.5, 1, 1.5).

Figure 8: SF Group 2: structures with demand µ = (1, 1, 1, 1, 1, 1, 1, 1).

hand, is slightly different from (the empirical average of) E(Z^e_F(D)). Note that a lower value of E(Z^e_F(D)) should ideally correspond to a higher index value. Unfortunately, the SF method errs on the performance of structures 1-1 and 1-2, and on 2-4 vis-à-vis the rest. The simulation results indeed suggest that both the expansion index and the SF index are easy-to-compute indices that can be used to rank process structures rather effectively.

Table 2: Comparisons among flexibility indices

5 Applications

The central theme in this paper is the observation that a little flexibility can go a long way in enhancing the performance of the system. We have discussed the impact of this phenomenon on the process flexibility problem in the earlier sections. This insight has also been observed in numerous other settings. In the rest of this section, we review some of the key results obtained in other related areas, and discuss some new applications.

5.1 Multi-Stage Supply Chain

Graves and Tomlin (2003) extended Jordan and Graves's work to multi-product, multi-stage supply chains, where each product needs to flow through several stages in the supply chain. They proposed a supply chain flexibility measure g, where a higher g indicates higher flexibility. Unfortunately, they stopped short of offering a method to design a flexible supply chain network.

The results established in the earlier sections can be used to establish a much stronger result concerning the performance of sparse supply chain structures. We also give a glimpse of the performance of the sampling method, using this example as an illustration. Consider the following supply chain design problem (see Figure 9): there are n_1 products, n_2 plants, and n_3 suppliers. In the full flexibility scenario, each product can be produced at any plant, using material sourced from any of the suppliers. We assume that each unit of product consumes a unit of material from a supplier and a unit of capacity at a plant. We assume further that the production capacities at the plants are C_j, j = 1,...,n_2, and the suppliers have limited amounts of material, with capacities B_k, k = 1,...,n_3. The demand for each product is random and denoted by the random variable D_i, i = 1,...,n_1.

Figure 9: A supply chain flexibility structure.

In the full flexibility scenario, it is easy to see that the expected sales is given by

    E_D[ min( Σ_{i=1}^{n_1} D_i , Σ_{j=1}^{n_2} C_j , Σ_{k=1}^{n_3} B_k ) ].
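This expression is attained by the product-form flows x_ijk = D_i C_j B_k / max{(Σ_i D_i)(Σ_j C_j), (Σ_i D_i)(Σ_k B_k), (Σ_j C_j)(Σ_k B_k)} used in this section. A quick numerical check (our own sketch, with arbitrary instance sizes) confirms both feasibility and optimality of this closed form:

```python
import random

def closed_form_flows(D, C, B):
    """x_ijk = D_i C_j B_k / max{sum(D)sum(C), sum(D)sum(B), sum(C)sum(B)}."""
    sD, sC, sB = sum(D), sum(C), sum(B)
    denom = max(sD * sC, sD * sB, sC * sB)
    return [[[D[i] * C[j] * B[k] / denom for k in range(len(B))]
             for j in range(len(C))] for i in range(len(D))]

def check(D, C, B, tol=1e-9):
    x = closed_form_flows(D, C, B)
    total = sum(sum(sum(row) for row in plane) for plane in x)
    # feasibility of the demand, plant and supplier constraints
    ok_D = all(sum(x[i][j][k] for j in range(len(C)) for k in range(len(B)))
               <= D[i] + tol for i in range(len(D)))
    ok_C = all(sum(x[i][j][k] for i in range(len(D)) for k in range(len(B)))
               <= C[j] + tol for j in range(len(C)))
    ok_B = all(sum(x[i][j][k] for i in range(len(D)) for j in range(len(C)))
               <= B[k] + tol for k in range(len(B)))
    # optimality: total flow equals min(sum D, sum C, sum B)
    return ok_D and ok_C and ok_B and abs(total - min(sum(D), sum(C), sum(B))) < 1e-6

random.seed(1)
D = [random.uniform(5, 15) for _ in range(9)]
C = [random.uniform(8, 20) for _ in range(7)]
B = [random.uniform(10, 25) for _ in range(5)]
print(check(D, C, B))  # True
```

Because the denominator is the product of the two largest aggregate quantities, the total flow collapses to the smallest of the three sums, exactly the full-flexibility sales.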

The above problem can be formulated as the following set packing problem:

    Z(D) = max Σ_{i=1}^{n_1} Σ_{j=1}^{n_2} Σ_{k=1}^{n_3} x_ijk
    s.t.  Σ_{j=1}^{n_2} Σ_{k=1}^{n_3} x_ijk ≤ D_i,  i = 1, 2,...,n_1;   (6)
          Σ_{i=1}^{n_1} Σ_{k=1}^{n_3} x_ijk ≤ C_j,  j = 1, 2,...,n_2;   (7)
          Σ_{i=1}^{n_1} Σ_{j=1}^{n_2} x_ijk ≤ B_k,  k = 1, 2,...,n_3;   (8)
          x_ijk ≥ 0 for all i, j, k.   (9)

For each realization of the demand D, it is easy to see that there is an optimal solution given by

    x*_ijk(D) = D_i C_j B_k / max{ (Σ_i D_i)(Σ_j C_j), (Σ_i D_i)(Σ_k B_k), (Σ_j C_j)(Σ_k B_k) }.

Let S = {(i, j, k) : material from supplier k is used to produce product i at plant j} denote the supply chain configuration. We have an analogous result for this problem setting.

Theorem 5 Suppose the demand distribution satisfies

    x*_ijk(D) ≤ λ E_D[ x*_ijk(D) ]  almost surely,

for some λ > 0. Then there exists a sparse supply chain configuration S with cardinality |S| = O(λ(n_1 + n_2 + n_3)/ɛ), such that the expected demand met by the sparse supply chain system is at least (1 − ɛ) E(Z(D)).

We can use the sampling method to obtain good sparse supply chain structures. Consider a supply chain with 9 products, 7 plants and 5 suppliers. Demand for each product is normally distributed. The expected demands are shown in Figure 10. The standard deviation is 40% of the expected demand. The products can be divided into 3 subgroups, and demands of products in the same subgroup are correlated. The correlation coefficients are 0.2 pairwise in subgroup 1

(products 1 to 4), −0.2 pairwise in subgroup 2 (products 5 to 7), and 0.1 in subgroup 3 (products 8 and 9). There are no correlations between the demands of products in different subgroups. Supplies from the suppliers are also normally distributed. For each supplier, the standard deviation is 40% of the expected supply (shown in Figure 10). Each plant is able to produce any product. The capacity of each plant is fixed (see Figure 10). In the full flexibility scenario, a plant can produce any product using the material supplied by any supplier.

Figure 10: A supply chain network obtained from the sampling approach.

We conduct a simulation study to test whether there exists a partially flexible supply chain network capturing almost all the benefits of the full flexibility system. We simulate 100 scenarios of demands for each retailer and 100 scenarios of supplies for each supplier. We use the number of paths in the network to denote the degree of flexibility. For each degree of flexibility (N from 10 to 35) we generate 100 structures using the sampling method and return the structure with the best empirical performance. Figure 11 shows the effect of a higher degree of flexibility on the expected sales. The figure shows that the marginal contribution of every additional path is in general diminishing in its returns on expected sales. When 10 paths are selected, the expected total sales is already 85%

of the expected sales under the full flexibility system. After the number of paths is increased to 19, the total sales is a whopping 99% of the expected sales in the full flexibility system (with a total of 315 paths).

Figure 11: Expected satisfied demand as flexibility increases.

5.2 Military Deployment and Transshipment

Inspired by the defense-in-depth strategy devised by Emperor Constantine (Constantine the Great), ReVelle and Rosing (2000) studied the following problem in troop deployment: each region in the empire must be protected by one or more mobile field armies (FAs) to throw back invading enemies. A region is secured if one or more FAs are stationed in it. It is securable if an FA can reach the region in a single step (i.e., there is a route linking the region where the FA is stationed to it). However, an FA can be deployed from one region to an adjacent region only when there is at least one other FA to help launch it, i.e., the FA must come from a region with at least two FAs stationed in it. This restriction is much like the island-hopping strategy used by General MacArthur in World War II in the Pacific.

The puzzle confronting Emperor Constantine concerns the positioning of 4 FAs to protect the 8 regions in his empire, as shown in Figure 12. He chose to position two FAs at Rome, and two at his new capital Constantinople, leaving the outlying region of Britain vulnerable to enemy attack. By focusing on the troop deployment problem in the event of war in one of the regions, ReVelle and Rosing (2000) solved the above puzzle by formulating the problem as

an integer program. In this case, all the regions in the empire can be protected by stationing one FA in Britain, one in Asia Minor, and two in Rome.

Figure 12: The empire of Constantine.

Note that the above deployment, however, is not securable in the event that two or more wars break out at the same time. For instance, this deployment could not secure against a joint outbreak of wars in any two of the five regions Gaul, Iberia, North Africa, Egypt and Constantinople. A slightly better deployment is to station two FAs at Iberia, and another two FAs at Egypt. This secures the regions against up to two wars, except when the wars occur at Britain and Gaul, or at Constantinople and Asia Minor. This deployment is thus more resilient against the joint outbreak of two wars. Unfortunately, it is politically unacceptable, as no troops would be stationed at the capital city Rome. In general, finding the best deployment securing against outbreaks of wars in up to k regions (for k ≥ 2) is a challenging problem. The minimum number of FAs needed to secure the regions will largely depend on the network structure for troop re-deployment. In general, if the network is dense (with many links joining different regions), or has one region connecting to many different regions, then the number of FAs needed will be low.

In this section, we consider an analogous military deployment problem. Consider a military mission, where n strategic locations need to be defended against possible enemy invasion. The army has Q_i units of troops in location i. Unfortunately, the enemy's mission cannot be predicted, and the number of units deployed by the enemy to attack location i is denoted by D_i. One way to strengthen the defense network is to have reinforcement troops, where units

in location i may be deployed to location j, if the troops can be trained to rush from i to j within a stipulated time. Of course, it would be ideal to have many reinforcement paths, as that means the whole force can be pooled together at the right place to deal with the enemy's invasion. However, due to the limited time for deployment, each unit in location i can only be trained to reinforce a limited number of other locations. The challenge is to design a reinforcement network to defend against the enormous number of the enemy's possible courses of action.

This problem is similar to the transshipment problem studied in the literature, although the latter focuses mainly on the optimal inventory policy and optimal order quantity Q_i for each retailer (for problems with two retailers, see Tagaras and Cohen (1992); for problems with many identical retailers, see Robinson (1990)). These papers all assumed complete grouping, i.e., a retailer could transship its products to any other retailer. Only a few papers discuss how to design transshipment networks. Lien et al. (2005) studied the impact of the transshipment network structure. They compared the performance of different network configurations: no transshipment, complete grouping, partial grouping, unidirectional chain and bidirectional chain (see Figure 13). Similar to the findings in Jordan and Graves (1995), they showed that sparse transshipment network structures can capture almost all the benefits of complete grouping. They also indicated that the chaining structure, which is also a kind of sparse structure, outperforms other sparse structures.

The troop deployment (and transshipment) problem can be reduced to a variant of the process flexibility problem, where there are n plants and n products. Each plant i has capacity (Q_i − D_i)^+ (the leftover at retailer i), which can be used to meet the demand of other products. Each product has demand (D_i − Q_i)^+ (the unfilled demand at retailer i).
Note that in this case, both capacity and demand are random parameters in our problem, and (Q_i − D_i)^+ (D_i − Q_i)^+ = 0. From Theorem 4 and the analysis therein, the existence of a sparse support structure for the troop deployment problem is guaranteed by the following condition:

    x*_{i,j}(D) = (D_i − Q_i)^+ (Q_j − D_j)^+ / max{ Σ_{i=1}^n (D_i − Q_i)^+ , Σ_{j=1}^n (Q_j − D_j)^+ }
                ≤ λ E_D[ (D_i − Q_i)^+ (Q_j − D_j)^+ / max{ Σ_{i=1}^n (D_i − Q_i)^+ , Σ_{j=1}^n (Q_j − D_j)^+ } ]

almost surely, for some λ > 0 and for all i, j.

Figure 13: Different kinds of transshipment network structures.

Example: When the D_i are i.i.d. and take values in {0, 2} with equal probability, and Q_i = 1 for all i = 1,...,n, then (D_i − Q_i)^+ and (Q_i − D_i)^+ are Bernoulli variables with equal probability. Note that E((D_i − Q_i)^+) = E((Q_i − D_i)^+) = 1/2. Furthermore,

    Σ_{i=1}^n (D_i − Q_i)^+ + Σ_{j=1}^n (Q_j − D_j)^+ = Σ_{i=1}^n |D_i − Q_i| = n.

Hence

    x*_{i,j}(D) = (D_i − Q_i)^+ (Q_j − D_j)^+ / max{ Σ_i (D_i − Q_i)^+ , Σ_j (Q_j − D_j)^+ }
                ≤ (2/n) (D_i − Q_i)^+ (Q_j − D_j)^+
                ≤ (8/n) E[ (D_i − Q_i)^+ (Q_j − D_j)^+ ]
                ≤ 8 E_D[ (D_i − Q_i)^+ (Q_j − D_j)^+ / max{ Σ_i (D_i − Q_i)^+ , Σ_j (Q_j − D_j)^+ } ].

Property A thus holds for this example (with λ = 8).
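The key facts behind this example, the identity Σ(D_i − Q_i)^+ + Σ(Q_i − D_i)^+ = n and the worst-case arc flow bound x*_{i,j} ≤ 2/n that drives the constant λ = 8, are easy to confirm by simulation. A minimal sketch (our own illustration):

```python
import random

random.seed(0)
n = 20
for _ in range(500):
    D = [random.choice([0, 2]) for _ in range(n)]
    Q = [1] * n
    excess   = [max(d - q, 0) for d, q in zip(D, Q)]   # unfilled demand
    leftover = [max(q - d, 0) for d, q in zip(D, Q)]   # spare capacity
    assert sum(excess) + sum(leftover) == n            # |D_i - Q_i| = 1 always
    denom = max(sum(excess), sum(leftover))            # the two sums add to n,
    assert denom >= n / 2                              # so the max is >= n/2
    # worst-case flow on any arc (i, j) is therefore at most 2/n
    x_max = max(e * l / denom for e in excess for l in leftover)
    assert x_max <= 2 / n + 1e-12
print("identity and 2/n bound verified on 500 draws")
```

The same loop, with empirical averages in place of the exact expectations, is how the sampling probabilities would be estimated when no closed form is available.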

Remarks: The sampling approach in our analysis uses the value

    E_D[ (D_i − Q_i)^+ (Q_j − D_j)^+ / max{ Σ_i (D_i − Q_i)^+ , Σ_j (Q_j − D_j)^+ } ]

to obtain the arc sampling probability. This approach can gainfully employ additional information on the covariance structure of D_i and D_j, and on the distributions of the total excess and unfilled demand, Σ_{i=1}^n (D_i − Q_i)^+ and Σ_{j=1}^n (Q_j − D_j)^+, to obtain reliable sampling probabilities.

There is a combinatorial analogue to the troop deployment problem. Suppose we distribute 2n units of troops uniformly over 2n nodes, with each location defended by exactly one unit. Suppose also that a location is impenetrable only if 2 units defend it. If the enemy can attack up to n different locations, how should we design the reinforcement network?

We color a node red if the enemy attacks that location, and blue otherwise. The problem can then be reduced to the random allocation of n red and n blue balls uniformly over the nodes of the network. Let c(i) denote the color assigned to node i, and let e(G) denote the edge set of G. We say that M ⊆ e(G) is a colored matching if it is a matching in G with M ⊆ {(i, j) : c(i) ≠ c(j), (i, j) ∈ e(G)}. Let m(G) denote the cardinality of a maximum colored matching in G. Thus m(G) represents the number of locations that can be defended in the network. Note that m(G) ≤ n for all realizations of the color distribution, and E(m(G)) = n when e(G) = K(2n), the complete graph on 2n nodes. Theorem 4 and Property A show that the cardinality of the edge set e(G) can be reduced much further, while sacrificing only a little in the value of E(m(G)).

Theorem 6 For all ɛ > 0, there exists n(ɛ) > 0 such that for all n ≥ n(ɛ), there exists a graph G_n with 2n nodes and O(n) edges, such that n ≥ E(m(G_n)) ≥ (1 − ɛ)n.

Hence a sparse yet near-optimal reinforcement network can be obtained with only a small loss of defendable locations, regardless of the enemy's course of action!

5.3 Sequencing with limited flexibility

Lahmar et al. (2003) considered the following sequencing problem in an automotive assembly line: cars leaving the body shop on a moving line have to be resequenced prior to entering the paint shop, in order to minimize the changeover cost at the paint shop. Given an initial ordering of jobs, they proposed a dynamic program to find the minimum cost permutation of the sequence, such that each position is shifted no more than K_1 positions to the right, and no more than K_2 positions to the left. The values K_1 and K_2 reflect the limited buffer space available in the production plant, and hence the level of flexibility within the plant. Re-sequencing is needed to minimize, say, the changeover cost at the next station. A precise analytical measurement of the value of flexibility, however, is difficult to obtain, because the complexity of the DP-based algorithm depends on the values of K_1 and K_2. Nevertheless, the numerical results in this paper are quite convincing: the effect of flexibility diminishes rapidly, and most of the benefits accrue at small values of K_1 and K_2.

5.4 Call Center Staffing

Wallace and Whitt (2004) explore the use of chaining in call center staffing and skill assignment. In a typical call center, the types of calls, and the skills involved in servicing these calls, vary, and it is not possible to train every agent to handle all calls. The authors showed that with appropriate skill chaining, and in the scenario where the duration of service depends on neither the call type nor the agent serving it, a simple routing policy, together with proper skill chaining, can deliver near-optimal performance, even when service level constraints (e.g., a service level guarantee for type k calls, or bounds on the blocking probability) are taken into consideration. This paper shows that it may be more worthwhile to pay attention to cross-training than to invest in complicated call routing software.
The proper staffing levels are then identified via simulation-based optimization.

5.5 Load Balancing

The concept of limited flexibility also has an important application to load balancing in stochastic network routing (cf. Mitzenmacher (1996)). This application follows from the

following interesting observation: Suppose n balls are randomly inserted into n bins, with each bin chosen with probability 1/n. What is the expected number of balls in the bin with the maximum load? It is not difficult to show that the maximum load is Theta(log n / log log n) with high probability. Suppose we modify the process in the following way: the balls are inserted into the bins sequentially, and each ball picks two bins at random and is inserted into the one with the smaller load at that time. It turns out that this simple modification reduces the peak load drastically, to O(log log n) with high probability! Having even more flexibility does not help much: if we allow each ball to pick K bins at random, the peak load is only reduced to O(log log n / log K), for any K >= 2, i.e., flexibility beyond two choices reduces the peak load by only a constant factor.

5.6 Other Applications

The flexibility strategy has been shown to be rather effective in various other areas, such as supply chain planning (Bish and Wang (2004)), queueing (Benjaafar (2002), Gurumurthi and Benjaafar (2004)), revenue management (Gallego and Phillips (2004)), scheduling (Daniels and Mazzola (1994), Daniels et al. (1996), and Daniels et al. (2004)), and flexible work force scheduling (Hopp et al. (2004), Wallace and Whitt (2004), Brusco and Johns (1998)). For instance, Hopp et al. (2004) observed similar results in their study of a work force scheduling problem in a CONWIP (constant work-in-process) queueing system. By comparing the performance of cherry-picking and skill-chaining cross-training strategies, they observed that skill chaining, which is a form of the chaining strategy, outperforms the others. They also showed that a chain with low degree (the number of tasks a worker can handle) is able to capture the bulk of the contribution of a chain with high degree.
This result may have ramifications for appointment system design. Instead of allowing the patient to pick her appointment day, the clinic may well benefit from balancing the load across different days by asking the patient to come in on a less congested day. Giving each patient just one extra choice may well be enough to reduce the peak load by a dramatic amount!
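The two-choice effect described in Section 5.5 is easy to reproduce by simulation; a minimal sketch (the values of n and the trial count are arbitrary):

```python
import random

def max_load(n_balls, n_bins, choices, rng):
    """Insert balls sequentially; each ball samples `choices` bins uniformly
    at random and goes into the currently least-loaded of them."""
    load = [0] * n_bins
    for _ in range(n_balls):
        picks = [rng.randrange(n_bins) for _ in range(choices)]
        best = min(picks, key=load.__getitem__)
        load[best] += 1
    return max(load)

rng = random.Random(0)
n, trials = 10000, 20
one = sum(max_load(n, n, 1, rng) for _ in range(trials)) / trials
two = sum(max_load(n, n, 2, rng) for _ in range(trials)) / trials
print(one, two)   # two choices give a dramatically smaller peak load
```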

6 Conclusion

We have described in this paper several settings where partial flexibility can be gainfully employed to reap the maximum benefit. We have also discussed its intimate connection with graph connectivity problems, and have prescribed a condition under which partial flexibility is almost as effective as the full flexibility system. To wrap up the review, we highlight some caveats to the adoption of the partial flexibility strategy in practice.

6.1 Partially flexible structures have higher variance

Some authors have cautioned that partial flexibility has its limitations. Muriel et al. (2001) showed that a surgery planning system (e.g., in a hospital) with a limited flexibility structure could lead to more frequent rescheduling and larger variability in resource utilization. Bish et al. (2005) also indicated that in a make-to-order environment, partial flexibility could introduce variability upstream in the supply chain, leading to higher inventory cost, greater production variability and more complicated management requirements. In other words, although a partially flexible structure may attain nearly the same expected performance as the fully flexible structure, its (random) optimal solution may exhibit more variability than the optimal solution of a fully flexible system.

We illustrate this phenomenon using the simple process flexibility problem discussed earlier. Consider Lemma 1, where the optimal flow for the full flexibility system is characterized by

    x_ij(D) = D_i C_j / max{ sum_{i=1}^m D_i, sum_{j=1}^n C_j },

and the total max flow is simply min( sum_{i=1}^m D_i, sum_{j=1}^n C_j ).

Consider the case when the network is balanced (n = m) with identical mean demand and capacity at each node. In this case, the dedicated network (with each plant focusing on one product) has total maximum flow sum_{i=1}^n min(D_i, C_i).
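Both the Lemma's allocation and the variance comparison between the fully flexible and dedicated systems are easy to check numerically. A sketch (the instance data and the negatively correlated uniform demand pairs are illustrative assumptions, not taken from the paper):

```python
import random

def full_flex_flow(D, C):
    """Allocation x_ij = D_i * C_j / max(sum D, sum C) from the Lemma;
    returns (flow matrix, total flow = min(sum D, sum C))."""
    denom = max(sum(D), sum(C))
    x = [[d * c / denom for c in C] for d in D]
    return x, sum(map(sum, x))

# Check the allocation on an arbitrary instance.
D, C = [3.0, 1.0, 2.0], [2.0, 2.0, 1.0]
x, total = full_flex_flow(D, C)
assert abs(total - min(sum(D), sum(C))) < 1e-9                   # total = min(sum D, sum C)
assert all(sum(row) <= d + 1e-9 for row, d in zip(x, D))         # demand-side feasible
assert all(sum(col) <= c + 1e-9 for col, c in zip(zip(*x), C))   # capacity-side feasible

# Variance comparison: negatively correlated demands with sum(D) == sum(C).
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((v - m) ** 2 for v in xs) / len(xs)

rng = random.Random(0)
n, trials = 10, 4000
cap = [1.0] * n
full, dedicated = [], []
for _ in range(trials):
    half = [rng.uniform(0, 2) for _ in range(n // 2)]
    dem = half + [2 - d for d in half]       # pairs (d, 2-d): sum(dem) == n == sum(cap)
    full.append(min(sum(dem), sum(cap)))     # fully flexible max flow: (nearly) constant
    dedicated.append(sum(min(d, c) for d, c in zip(dem, cap)))
assert variance(full) < variance(dedicated)  # full flexibility: lower variance here
print(variance(full), variance(dedicated))
```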

Although the fully flexible system has a higher expected max flow, it can also have a smaller variance, since

    var( min( sum_{i=1}^m D_i, sum_{j=1}^n C_j ) ) <= sum_{i=1}^n var( min(D_i, C_i) )

for many classes of demand distributions. For instance, when demand is negatively correlated, so that sum_{i=1}^m D_i = sum_{j=1}^n C_j, the full flexibility system has a higher expected maximum flow and lower variance than the dedicated system. Adding partial flexibility into the system will not reduce the variability of the max flow to the level attained by the full flexibility system.

6.2 Coordination in a partially flexible structure is hard

In our evaluation of the partial flexibility system so far, we have implicitly assumed that there is a central planning agency which dictates the optimal flow of supplies to match demands, with complete information on demand. However, when demand signals are released in real time and are not synchronized, and when supply deployment decisions must be made as demand arrives, the performance of the partial flexibility system can be far from that of the full flexibility system. The latter does not suffer from the coordination problem, as supplies can be shipped to any demand destination. Hence the performance of a fully flexible system depends only on the total demand and total supply, and is independent of the real-time deployment decisions. This is unfortunately not the case for a partial flexibility system.

We encountered this issue while working on a bread delivery problem. A group of bakeries have agreed to donate their unsold bread at the end of each day to several old folks' homes in the area. See Figure 4 for the locations of the bakeries and the homes. A group of volunteers, recruited under the Food From The Heart program, delivers the leftover bread each night. For ease of operations, each volunteer is in charge of one route, from a bakery to an assigned home.
The route is pre-determined and remains the same each night (for ease of control), and does not change with the amount of leftover bread at each bakery. To minimize the mismatch between supply and demand, it would be ideal for a volunteer to be able to deliver bread from one bakery to more than one home. While it is not practical to have the flexibility to deliver to all homes, it is conceivable that each volunteer could be put in charge of two routes, deciding which home to send the leftover bread to based on the supply information each night. The added flexibility in the operation will

Figure 4: Locations of Bakeries and Homes

allow the system to adjust the supply to each home appropriately and reduce the amount of food wastage. Unfortunately, it is not easy to exploit this strategy at the operational level, because the bakeries close at different times, and the food must be handed over just before the shops close for the night. Due to funding limitations, only a bare-bones information system has been installed to facilitate communication between the volunteers and the manager of the food delivery program. Currently, the volunteers can only communicate with a central server via SMS messages. It is thus impossible to implement a centralized planning system to coordinate the delivery operations and to exploit fully the advantages offered by a partial flexibility system.

This problem is also pertinent in the troops deployment problem, where the reinforcement may have to be activated based on the partial evolution of the battle on the ground. The problem is further exacerbated by the fact that communication channels and situation reports may not be reliable in actual combat. In such settings, coordinating the flow of goods (or forces) through a partially flexible system is far more challenging.

6.3 Concluding Remarks

In summary, we have presented in this paper an overview of recent analytical results for the process flexibility problem. We show that the empirical observation that a little flexibility can enhance system performance significantly can be justified in a stylized model, based on a maximum flow formulation within a two-stage stochastic program. We have
