Instituto de Engenharia de Sistemas e Computadores de Coimbra Institute of Systems Engineering and Computers INESC  Coimbra


Humberto Rocha, Brígida da Costa Ferreira, Joana Matos Dias, Maria do Carmo Lopes

Towards efficient transition from optimized to delivery fluence maps in inverse planning of radiotherapy design

No ISSN:

Instituto de Engenharia de Sistemas e Computadores de Coimbra (INESC - Coimbra), Rua Antero de Quental, 199; Coimbra; Portugal
Towards efficient transition from optimized to delivery fluence maps in inverse planning of radiotherapy design

H. Rocha, J. M. Dias, B. C. Ferreira, M. C. Lopes

June 21, 2010

Abstract. The intensity modulated radiation therapy (IMRT) treatment planning problem is usually divided into three smaller problems that are solved sequentially: the geometry problem, the intensity problem, and the realization problem. Most of the published research addresses each of these problems separately, and there exist many models and algorithms that solve each of them satisfactorily. However, the three problems also need to be well linked. While the linkage between the geometry problem and the intensity problem is straightforward, the linkage between the intensity problem and the realization problem is anything but simple, may lead to significant deterioration of plan quality, and is rarely, if ever, documented in publications. The goal of this research report is to explain how the transition from the intensity problem to the realization problem is typically done and, using a clinical example of a head and neck cancer case, to present numerical evidence of the resulting deterioration of plan quality. Research directions to improve the discretization of an optimal continuous fluence map are highlighted.

Key words. Radiotherapy, IMRT, Fluence map optimization, Delivery problem.

INESC-Coimbra, Coimbra, Portugal. Faculdade de Economia, Universidade de Coimbra, Coimbra, Portugal. I3N, Departamento de Física, Universidade de Aveiro, Aveiro, Portugal. Serviço de Física Médica, IPOC-FG, EPE, Coimbra, Portugal.
1 Introduction

The goal of radiation therapy is to deliver a dose of radiation to the cancerous region that sterilizes the tumor while minimizing the damage to the surrounding healthy organs and tissues. Radiation therapy exploits the fact that cancerous cells, focused on fast reproduction, are less able than healthy cells to repair themselves when damaged by radiation. Therefore, the goal of the treatment is to deliver enough radiation to kill the cancerous cells, but not so much that it jeopardizes the survival of healthy cells. The continuous development of new treatment machines keeps improving the accuracy of, and the control over, radiation delivery. There are two types of radiation treatment planning: forward planning and inverse planning. Forward planning is a trial-and-error approach in which the dose distribution is calculated for a fixed set of parameters (beams and fluences). This manual process is repeated until a suitable treatment is found for some set of parameters. It is time consuming, patience is a key factor, and there is no guarantee of producing high-quality treatment plans. Inverse planning, as the name suggests, is the reverse of forward planning: for a prescribed treatment plan, a corresponding set of parameters (beams and fluences) is algorithmically computed so as to fulfil the prescribed doses and restrictions. Inverse treatment planning allows the modeling of highly complex treatment planning problems, and optimization plays a fundamental role in the success of this procedure. An important type of inverse treatment planning is intensity modulated radiation therapy (IMRT), where the radiation beam is modulated by a multileaf collimator (see Fig. 1(a)). Multileaf collimators (MLC) enable the transformation of the beam into a grid of smaller beamlets of independent intensities (see Fig. 1(b)). Despite the illustration of Fig. 1(b), beamlets do not exist physically.
Their effect is generated by the movement of the leaves of the MLC in Fig. 1(a), which block part of the beam during portions of the delivery time. The MLC has movable leaves on both sides that can be positioned at any beamlet grid boundary. An MLC can operate in two distinct ways: dynamic collimation or multiple static collimation. In the first case, the leaves move continuously during irradiation. In the second case, the step-and-shoot mode, the leaves are set to open a desired aperture during each segment of the delivery, and radiation is on for a specific fluence time or intensity. This procedure generates a discrete set (the set of chosen
Figure 1: Illustration of a multileaf collimator (with 9 pairs of leaves) (a) and illustration of a beamlet intensity map (9 × 9) (b).

beam angles) of intensity maps like the one in Fig. 1(b). Here, we will consider multiple static collimation. A common way to solve inverse planning IMRT optimization problems is to use a beamlet-based approach. This approach leads to a large-scale programming problem with thousands of variables and hundreds of thousands of constraints. The quality of the plan, which depends on the programming models and the corresponding solution methods, determines the clinical treatment effect. Due to the complexity of the whole optimization problem, the treatment planning is often divided into three smaller problems that can be solved separately: the geometry problem, the intensity problem, and the realization problem. The geometry problem consists of finding the minimum number of beams, and the corresponding directions, that satisfy the treatment goals, using optimization algorithms [6, 8, 13]. In clinical practice, most of the time, the number of beams is assumed to be defined a priori by the treatment planner, and the beam directions are still selected manually by the treatment planner, who relies mostly on experience. After deciding which beam angles should be used, a patient will be treated using an optimal plan obtained by solving the intensity (or fluence map) problem, i.e., the problem of determining the optimal beamlet weights for the fixed beam angles. Many mathematical optimization models and algorithms have been proposed for the intensity problem, including linear models (e.g. [14, 15]), mixed integer linear models (e.g. [10, 12]), nonlinear models (e.g. [18]), and multiobjective models (e.g. [16]). After an acceptable set of intensity maps is produced, one must find a suitable way for
delivery (the realization problem). Typically, the beamlet intensities are discretized over a range of values (e.g. 0 to 10), and one of the many existing techniques ([1, 2, 14, 17]) is used to construct the apertures and intensities that approximately match the previously determined intensity maps. However, reproducing the optimized intensity maps efficiently, i.e., minimizing the radiation exposure time, is a challenging optimization problem. Moreover, one needs an efficient way for MLC devices to produce the exact same optimized intensity profiles. Due to leaf collision issues, leaf perturbation of adjacent beamlet intensities, tongue-and-groove constraints, etc., the intensity maps actually delivered may be substantially different from the optimized ones. These problems have been tackled (e.g. [9]) and are still an active field of research. Most of the published research addresses each of the above problems separately, and there exist many models and algorithms that solve each of them satisfactorily. However, the three problems also need to be well linked. While the linkage between the geometry problem and the intensity problem is straightforward, the linkage between the intensity problem and the realization problem is anything but simple, may lead to significant deterioration of plan quality, and is rarely, if ever, documented in publications. From the previous paragraph, and from most of the literature, one gets the idea that the deterioration of plan quality after obtaining optimal fluence maps is only, or mainly, caused by the difficulties of segmentation during the realization problem, related to the complexity of the fluence map or to physical restrictions of the MLC. That is not true. One of the main reasons for the deterioration of plan quality during the realization problem, as will be detailed here, is the simplistic linkage between the intensity problem and the realization problem.
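The two steps just described, discretizing a continuous fluence map into intensity levels and decomposing the resulting integer map into deliverable apertures, can be sketched in a few lines. The fluence values below are made up, and the greedy one-interval-per-row segmentation is only one of many possible leaf-sequencing schemes; it ignores the MLC limitations mentioned above:

```python
import numpy as np

def discretize(W_real, levels):
    """Round each beamlet intensity to one of `levels` equispaced levels,
    as commonly done before segmentation (step width = max / levels)."""
    step = W_real.max() / levels
    W_int = np.rint(W_real / step).astype(int)  # integer level indices
    return step, W_int                          # delivered map = step * W_int

def segment(W_int):
    """Greedy decomposition of an integer map into (alpha, B) pairs, where
    each binary shape matrix B opens one contiguous interval per row
    (a basic MLC leaf-pair constraint)."""
    W = W_int.copy()
    shapes = []
    while W.any():
        B = np.zeros_like(W)
        for i in range(W.shape[0]):
            pos = np.flatnonzero(W[i])
            if pos.size:                        # open the first positive run
                l = r = pos[0]
                while r < W.shape[1] and W[i, r] > 0:
                    r += 1
                B[i, l:r] = 1
        alpha = int(W[B == 1].min())            # largest common exposure time
        shapes.append((alpha, B))
        W -= alpha * B
    return shapes

W_real = np.array([[1.2, 7.9, 7.6], [3.9, 4.1, 0.2]])  # made-up fluences
step, W_int = discretize(W_real, levels=6)
shapes = segment(W_int)
recon = sum(a * B for a, B in shapes)
assert (recon == W_int).all()                   # apertures reproduce the map
```

Each loop iteration drives at least one open beamlet to zero, so the procedure terminates; the delivered (continuous-valued) map is `step * recon`, which differs from `W_real` exactly by the rounding error discussed above.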
The goal of this research report is to explain how the transition from the intensity problem to the realization problem is typically done and, using a clinical example of a head and neck cancer case, to present numerical evidence of the resulting deterioration of plan quality. Research directions to improve the discretization of an optimal continuous fluence map will be highlighted.
2 Transition from the intensity to the realization problem

In order to better understand what is at stake when linking the intensity problem and the realization problem, let us detail what we have at the end of the intensity problem and what we need to do in the realization problem. The outcome of the intensity problem is a set of optimal fluence maps (one for each fixed beam) that can be represented by real matrices of m × n beamlet weights (the intensity assigned to each beamlet), i.e., there are m leaf pairs and for each leaf there are n + 1 possible positions. These matrices, the solutions of the intensity problem, are not feasible with respect to the treatment parameters, so they must be transformed into feasible hardware settings a posteriori, with a degradation of plan quality. In order to convert an optimal fluence map into a set of MLC segments, in a process called segmentation, the real matrices must be transformed into integer matrices by discretizing each beamlet intensity (matrix entry) over a range of values. This discretization is one of the main causes of deterioration of plan quality. An illustration of the usual way this transition is done is presented next. Let us assume that an optimal fluence map is represented by the real matrix:

W^r = (1)

The ideal would be to deliver the intensities of the above real matrix in a continuous manner, see Fig. 2(a). However, that is not feasible for the existing MLCs. Typically, these beamlet intensities are discretized over a range of values, 0 to the number of levels. Let us consider 6 levels. A common procedure is to determine the level width by dividing the maximum intensity by the number of levels. For this example, the level width would be 2.3542. By dividing each beamlet intensity by 2.3542 and then rounding, we determine the
Figure 2: Ideal continuous fluence (a) and discrete fluence (b).

level for each beamlet. This is equivalent to distributing the beamlet intensities according to Table 1:

Level | Level intensity | Beamlet intensity range
0     | 0.0000          | [0.0000; 1.1771)
1     | 2.3542          | [1.1771; 3.5313)
2     | 4.7084          | [3.5313; 5.8854)
3     | 7.0626          | [5.8854; 8.2396)
4     | 9.4168          | [8.2396; 10.5938)
5     | 11.7710         | [10.5938; 12.9480)
6     | 14.1252         | [12.9480; 14.1252]

Table 1: Beamlet distribution to the corresponding intensity level.

This discretization leads to the fluence map of Fig. 2(b) and can be represented by the following matrix of beamlet weights:
W^l = α W^i, with α = 2.3542. (2)

Note that even though the number of levels is 6, the beamlets were assigned to only 3 different levels (1, 3, and 6), leading to larger gaps between levels. Note also that this discretization, performed without any criterion, may assign very different intensities to the same level and similar intensities to different levels: two quite different beamlet intensities may end up in the same level, while two nearly equal intensities may end up in adjacent levels with different delivered intensities. Since the weights of the different beamlets are different, and MLC devices produce the same radiation intensity for all the exposed beamlets (when the beam is on), beamlet variation can be achieved by changing the beam aperture and superimposing the corresponding number of segments. The problem of choosing the leaf positions and corresponding apertures to consider for delivery is equivalent to finding a decomposition of W^i into K binary matrices B_k, called shape matrices, such that W^i = Σ_{k=1}^{K} α_k B_k, where α_k is the intensity
value (exposure time) for each aperture and the B_k are 0-1 matrices in which a 0-entry means that the radiation is blocked and a 1-entry means that the radiation goes through. More than one decomposition can be found for a given fluence matrix. For the integer matrix W^i in (2), a possible decomposition is W^i = B_1 + 2 B_2 + 3 B_3, which is equivalent to the superimposition of the apertures of Fig. 3. In the previous illustration of a segmentation procedure, many issues inherent to the MLC's physical limitations were ignored, including leaf collision issues, leaf perturbation of adjacent beamlet intensities, tongue-and-groove constraints, etc. These problems have been tackled (e.g. [9]), but the intensity maps actually delivered may still differ from the theoretical ones. However, the plan quality degradation originated by these issues (possible changes of decimals in each beamlet intensity) is limited when compared to the degradation caused by rounding the optimal beamlet intensities (possible changes of units in each beamlet intensity). By a simple inspection of matrix W^r in Eq. 1 and matrix W^l in Eq. 2, one sees that linking the intensity problem and the delivery problem by
doing a simple rounding leads to large differences in the intensity of the beamlets. Increasing or reducing a beamlet intensity without any criterion should be avoided, since it jeopardizes the whole optimization effort.

Figure 3: Sequence of apertures and intensities for a decomposition of W^i in (2).

3 Numerical results

A clinical example of a head and neck case is used to verify the deterioration caused by the rounding of the optimal fluence maps. In general, the head and neck region is a complex area to treat with radiotherapy due to the large number of sensitive organs in this region (e.g. eyes, mandible, larynx, oral cavity, etc.). For simplicity, in this study, the organs at risk (OARs) used for treatment optimization were limited to the spinal cord, the brainstem and the parotid glands. The tumor to be treated plus some safety margins is called the planning target volume (PTV). For the head and neck case under study, it was separated into two parts: PTV left and PTV right (see Figure 4). The prescribed doses for all the structures considered in the optimization are presented in Table 2. In order to facilitate convenient access, visualization and analysis of patient treatment planning data, the computational tools developed within Matlab [11] and CERR [7] (computational environment for radiotherapy research) were used as the main software platform to embody our optimization research and provide the necessary dosimetry data to perform optimization in IMRT.
Figure 4: Structures considered in the IMRT optimization, visualized in CERR.

Structure     | Mean Dose | Max Dose | Prescribed Dose | Priority
Spinal cord   | -         | 45 Gy    | -               | 1
Brainstem     | -         | 54 Gy    | -               | 1
Left parotid  | 26 Gy     | -        | -               | 3
Right parotid | 26 Gy     | -        | -               | 3
PTV left      | -         | -        | 59.4 Gy         | 2
PTV right     | -         | -        | 50.4 Gy         | 2
Body          | -         | 70 Gy    | -               | -

Table 2: Prescribed doses for all the structures considered in the IMRT optimization.

In order to perform IMRT optimization on this case, two models were used: a linear model and a nonlinear model. Our tests were performed on a 2.66 GHz Intel Core Duo PC with 3 GB RAM, using CERR and Matlab (R2007a). The dose was computed using CERR's pencil beam algorithm (QIB) with seven equispaced coplanar beams, at angles of 0, 51, 103, 154, 206, 257 and 309 degrees, and with a 0 degree collimator angle. In order to solve the nonlinear model we used the Matlab routine quadprog, which for large-scale problems uses a reflective trust-region algorithm. To address the linear problem
Table 3: Beamlet distribution to the corresponding intensity level for 7 levels.

Table 4: Beamlet distribution to the corresponding intensity level for 5 levels.

we used one of the most effective commercial tools for solving large-scale linear programs, Cplex [4]: a barrier algorithm (the baropt solver of Cplex 10.0) was used to tackle our linear problem. In order to assess the degree of plan quality deterioration, the results obtained for the optimal fluence maps were compared with the fluence maps obtained after rounding the optimal intensities using 7 levels and 5 levels. Tables 3 and 4 present the beamlet intensity range for each intensity level. By decreasing the number of levels, the segmentation problem is simplified, resulting in a more efficient delivery. However, by decreasing the number of levels, the beamlet intensity range increases, potentially leading to a more pronounced deterioration of the results. In the best case scenario, for both numbers of levels, there are no differences between the optimal
Figure 5: Ideal theoretical fluence (a) and delivered fluence (b) for a beam, using Konrad.

intensities and the rounded intensities. However, in the worst case scenario, the difference between the optimal and the rounded intensity of a beamlet can reach half of the level width: for 5 levels that difference is 1.8. Since the number of beamlets is 1907 for this head and neck case, the total rounding error can be as large as 1907 times that difference. It is important to remark that this rounding procedure is the usual procedure of many treatment planning systems, such as Konrad. Fig. 5 shows the optimal fluence for the beam corresponding to 0 degrees using Konrad, together with the delivered fluence after rounding using 7 levels. The quality of the results can be perceived considering a variety of metrics and can change from patient to patient. Typically, results are judged by their cumulative dose-volume histogram (DVH) and by analyzing isodose curves, i.e., the level curves of equal dose per slice. An ideal DVH for the tumor would present 100% volume for all dose values ranging from zero to the prescribed dose value and then drop immediately to zero, indicating that the whole target volume is treated exactly as prescribed. Ideally, the curves for the organs at risk would instead drop immediately to zero, meaning that no volume receives radiation. In Fig. 6, DVH curves for the PTVs and parotids are presented for the optimal fluences obtained with the linear and the nonlinear models and for the rounded optimal intensities using 5 and 7 levels. Another commonly used metric is the dose received by 95% of the volume of the PTV (D95). Typically, D95 should be at least 95% of the prescribed dose.
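The DVH and D95 metrics just defined can be computed directly from the per-voxel doses of a structure. The voxel doses and dose axis below are hypothetical, purely to illustrate the definitions:

```python
import numpy as np

def dvh(doses, dose_axis):
    """Cumulative DVH: percent of the structure volume receiving at
    least each dose value on the axis."""
    doses = np.asarray(doses, dtype=float)
    return np.array([100.0 * (doses >= d).mean() for d in dose_axis])

def d95(doses):
    """Dose received by at least 95% of the structure volume
    (the 5th percentile of the voxel doses)."""
    return np.percentile(doses, 5)

ptv = np.array([58.1, 59.0, 59.6, 60.2, 59.3, 58.8])  # hypothetical voxel doses (Gy)
axis = np.array([0.0, 55.0, 59.0, 61.0])
curve = dvh(ptv, axis)  # starts at 100% and drops to 0% past the maximum dose
```

For a real PTV, the curve would be evaluated on a fine dose grid; the closer `curve` stays to 100% up to the prescribed dose (and the closer `d95(ptv)` is to that dose), the better the plan.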
Figure 6: DVHs comparing the ideal optimal fluences with the 7-level and 5-level rounded fluences: right PTV and parotid using the nonlinear model (a), left PTV and parotid using the nonlinear model (b), right PTV and parotid using the linear model (c), and left PTV and parotid using the linear model (d).

D95 is represented in Fig. 6 with an asterisk. By observing Fig. 6, some of the conclusions that can be drawn include: deterioration of the results can be observed in the transition from the optimal fluence maps to the rounded ones; that deterioration is aggravated when fewer levels are considered; and that deterioration is worse for the linear model than for the nonlinear model. The first two conclusions were expected and are a corollary of what was said previously. The last conclusion is probably the most interesting, and the explanation for it might be
the fact that nonlinear models produce smoother solutions than linear models. Note that no segmentation was done, so the deterioration of the results is exclusively caused by the rounding of the optimal fluence maps. How can this issue be addressed?

4 Direct approaches and research directions

Apparently, the problem of the degradation of plan quality caused by rounding the optimal fluence maps has not been tackled before, since no mention of it can be found in the literature, and the usual way to make the transition from the intensity problem to the delivery problem is to round (or truncate) the optimal fluences. However, probably to address this problem indirectly, by circumventing it, and also to address the MLC physical limitations, time limitations, etc., that make it impossible to deliver a plan for a given optimal fluence map, at least within a clinically acceptable time, some approaches choose to optimize apertures and fluences directly, instead of solving the realization problem after the intensity problem. By solving both problems at once, one has the guarantee that solutions are always feasible for delivery, without any post-processing. The column generation approach [12, 14] belongs to this group of approaches, since the generated columns can be restricted to correspond to feasible hardware settings. An advantage of the column generation approach over other IMRT optimization approaches is that the complexity of the treatment plan can be controlled through the number of columns. This approach allows us to investigate the nontrivial trade-off between plan quality and treatment complexity. Another approach that belongs to this group is the direct aperture optimization (DAO) scheme ([17]). DAO and other variants, namely direct machine parameter optimization (DMPO), are integrated in the latest generation of planning systems.
DAO can be viewed and explained as an optimization procedure similar to the post-optimization of the segmentation of the rounded fluence map represented by matrix W^l in Eq. 2. After the segmentation of W^l we obtain
W^l = 2.3542 B_1 + 4.7084 B_2 + 7.0626 B_3.

Let us assume that, instead of the previous decomposition of W^l, we want to optimize the α's in W^l = α_1 B_1 + α_2 B_2 + α_3 B_3, where the B_i are the binary matrices of the previous decomposition, representing the apertures of Fig. 3. This procedure corresponds to
reoptimizing the time each aperture is on. DAO is similar to this post-optimization, except that the apertures to be considered are chosen from a very large set of possible apertures. For this choice, the DAO method needs a heuristic algorithm with no polynomial computational complexity guarantee. With DAO there is no need for conversion, filtering, weight optimization or other kinds of post-processing. Moreover, there is control over the number of segments and hence over the delivery time of the plan. However, the choice of the segments to be considered in the optimization is questionable. The key factor of DAO is not the optimization of the aperture times but the selection of the segments, and that selection is a direct, trial-and-error, random attempt. Moreover, for hard cases, which are exactly the ones inverse planning is suited for, it is questionable whether DAO can obtain good results. These doubts seem to be shared by the developers of the latest generation of planning systems, since the former option remains available. Therefore, it continues to be of the utmost interest to improve the transition from the intensity problem to the realization problem. One of the approaches to facilitate the transition from the intensity problem to the realization problem, as well as to reduce the overall delivery time, is to reduce the complexity of the fluence maps. However, it has to be noted that there is a limit to the degree to which the complexity of the plan can be reduced before severely affecting plan quality, as demonstrated by Craft et al. [5], who investigated the trade-off between treatment plan quality and the required number of segments. DAO is one of the attempts to reduce the complexity of the plan, but it is not the only one. Attempts to decrease the fluence map complexity in traditional IMRT using smoothing techniques have been reported, and the impact of map fluctuations on the leaf sequencing has been illustrated in the literature.
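The aperture-weight re-optimization described above, fixing the binary shape matrices and re-optimizing the exposure times α_k against a target map, amounts to a small least-squares problem. The target map and shape matrices below are made up, and this is only an illustrative sketch, not the DAO algorithm itself:

```python
import numpy as np

# Target fluence map and two fixed, hypothetical apertures, each opening
# one contiguous interval per row (as an MLC leaf pair would).
W_target = np.array([[2.0, 5.0, 3.0],
                     [0.0, 5.0, 5.0]])
B1 = np.array([[1.0, 1.0, 1.0],
               [0.0, 1.0, 1.0]])
B2 = np.array([[0.0, 1.0, 0.0],
               [0.0, 1.0, 1.0]])

# Each shape matrix becomes one column of A; solve min ||A a - w|| for
# the exposure times a.  Here the unconstrained optimum happens to be
# nonnegative; in general nonnegativity must be enforced (e.g. with a
# nonnegative least-squares solver).
A = np.column_stack([B1.ravel(), B2.ravel()])
alphas = np.linalg.lstsq(A, W_target.ravel(), rcond=None)[0]
assert (alphas >= 0).all()

W_delivered = alphas[0] * B1 + alphas[1] * B2  # best map these apertures give
```

The residual between `W_delivered` and `W_target` shows how much quality is lost once the aperture shapes are fixed: no choice of exposure times can recover a target map that the chosen shapes cannot represent.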
Intuitively, if the fluence maps have fewer fluctuations, the subsequent map realization based on MLC leaf sequencing becomes easier. Moreover, complicated fluence maps can lead to a high number of segments, which is not suitable for treatment because of the limits on delivery time. Therefore, a procedure is needed to reduce the final number of segments while keeping the quality of the plan. A common strategy is to apply filters after obtaining the fluence maps, as an attempt to remove the noise. Many map-smoothing optimization models have been proposed in recent years, intended to balance the dose goals and the smoothness of the maps. The Konrad filter averages the fluence intensity of a beamlet with the ones immediately to its left and right. However, this
filter strategy has the same deteriorating effect as the rounding of the intensities, since both procedures are performed without taking the physician's preferences into account, and the resulting dose distribution may not be desirable. The ideal approach would be to obtain, as the result of an optimization process, a smooth fluence map with each fluence intensity as close to an intensity level as possible, so that rounding errors are minimized. The second goal is more difficult to achieve, but models incorporating the number of levels could be developed in order to minimize the distance to the intensity levels during optimization. The first goal, of obtaining a smoother fluence map as the result of an optimization process, has been studied (see e.g. [3]). Apparently, there is a conflict between solving the beamlet-weight problem to optimality and keeping the degradation of plan quality small in the realization problem. In practice, approximate solutions to the IMRT optimization problems suffice; there are numerous non-negligible uncertainties and sources of error in the treatment planning process that make the search for truly optimal solutions unwarranted. In particular, a near-optimal solution that corresponds to a simple treatment plan that can be delivered efficiently and with high accuracy is often preferred to the complex optimal solution. In order to illustrate the importance of not over-optimizing the beamlet-weight problem, the previous head and neck clinical example was used with the nonlinear model. We ran the nonlinear model for 10 iterations, 20 iterations, 50 iterations, 100 iterations, and to optimality (237 iterations).

Table 5: Optimization history (number of iterations, function value, and function value after leveling).

We note, in Table 5, that after 50 iterations the objective function value is already very close to the optimal one. Even for 20 iterations, most of the decrease in the objective function value is already obtained.
Figure 7: Ideal fluence for the initial beamlets (a), and after 10 iterations (b), 20 iterations (c), 50 iterations (d), 100 iterations (e), and 237 iterations (f), for a beam, using the quadratic model.
A GRASPKNAPSACK HYBRID FOR A NURSESCHEDULING PROBLEM MELISSA D. GOODMAN 1, KATHRYN A. DOWSLAND 1,2,3 AND JONATHAN M. THOMPSON 1* 1 School of Mathematics, Cardiff University, Cardiff, UK 2 Gower Optimal
More information4.1 Learning algorithms for neural networks
4 Perceptron Learning 4.1 Learning algorithms for neural networks In the two preceding chapters we discussed two closely related models, McCulloch Pitts units and perceptrons, but the question of how to
More informationC3P: ContextAware Crowdsourced Cloud Privacy
C3P: ContextAware Crowdsourced Cloud Privacy Hamza Harkous, Rameez Rahman, and Karl Aberer École Polytechnique Fédérale de Lausanne (EPFL) hamza.harkous@epfl.ch, rrameez@gmail.com, karl.aberer@epfl.ch
More informationOf Threats and Costs: A GameTheoretic Approach to Security Risk Management
Of Threats and Costs: A GameTheoretic Approach to Security Risk Management Patrick Maillé, Peter Reichl, and Bruno Tuffin 1 Introduction Telecommunication networks are becoming ubiquitous in our society,
More informationSequential ModelBased Optimization for General Algorithm Configuration (extended version)
Sequential ModelBased Optimization for General Algorithm Configuration (extended version) Frank Hutter, Holger H. Hoos and Kevin LeytonBrown University of British Columbia, 2366 Main Mall, Vancouver
More informationCuckoo Filter: Practically Better Than Bloom
Cuckoo Filter: Practically Better Than Bloom Bin Fan, David G. Andersen, Michael Kaminsky, Michael D. Mitzenmacher Carnegie Mellon University, Intel Labs, Harvard University {binfan,dga}@cs.cmu.edu, michael.e.kaminsky@intel.com,
More information2 Basic Concepts and Techniques of Cluster Analysis
The Challenges of Clustering High Dimensional Data * Michael Steinbach, Levent Ertöz, and Vipin Kumar Abstract Cluster analysis divides data into groups (clusters) for the purposes of summarization or
More information