Cloud Auto-Scaling with Deadline and Budget Constraints

Preliminary version. Final version appears in Proceedings of the 11th ACM/IEEE International Conference on Grid Computing (Grid 2010). Oct 25-28, 2010. Brussels, Belgium.

Cloud Auto-Scaling with Deadline and Budget Constraints

Ming Mao, Jie Li, Marty Humphrey
Department of Computer Science, University of Virginia
Charlottesville, VA, USA 22904
{ming, jl3yh,

Abstract - Clouds have become an attractive computing platform which offers on-demand computing power and storage capacity. Their dynamic scalability enables users to quickly scale up and scale down the underlying infrastructure in response to business volume, performance desires and other dynamic behaviors. However, challenges arise when considering the non-deterministic acquisition time of computing instances, multiple VM instance types, unique cloud billing models and user budget constraints. Planning enough computing resources for the desired performance at lower cost, in a way that also automatically adapts to workload changes, is not a trivial problem. In this paper, we present a cloud auto-scaling mechanism to automatically scale computing instances based on workload information and performance desires. Our mechanism schedules VM instance startup and shut-down activities. It enables cloud applications to finish submitted jobs within the deadline by controlling the underlying instance numbers, and reduces user cost by choosing appropriate instance types. We have implemented our mechanism on the Windows Azure platform, and evaluated it using both simulations and a real scientific cloud application. Results show that our cloud auto-scaling mechanism can meet user-specified performance goals at lower cost.

Keywords: cloud computing; auto-scaling; dynamic scalability; integer programming

I. INTRODUCTION

Clouds have become an attractive computing platform which offers on-demand computing power and storage capacity. Their dynamic scalability enables users to scale up and scale down the underlying infrastructure in response to business volume, performance desires and other dynamic behaviors.
To offload cloud administrators' burden and automate scaling activities, cloud computing platforms have also offered mechanisms to automatically scale VM capacity up and down based on user-defined policies, such as AWS auto-scaling [1]. Using auto-scaling, users can define triggers by specifying performance metrics and thresholds. Whenever the observed performance metric is above or below the threshold, a predefined number of instances will be added to or removed from the application. For example, a user can define a trigger like "Add 2 instances when CPU usage is above 60% for 5 minutes." Such automation largely enhances the dynamic scalability benefits of the cloud. It transparently adds more resources to handle increasing workload and shuts down unnecessary machines to save cost. In this way, users do not have to worry about capacity planning. The underlying resource capacity can adapt to the application's real-time workload. However, challenges arise when people look deeper into these mechanisms. In cloud auto-scaling mechanisms, performance metrics normally include CPU utilization, disk operations, bandwidth usage, etc. Such infrastructure-level performance metrics are good indicators of system utilization, but they cannot clearly reflect the quality of service a cloud application is providing, or tell whether the performance meets the user's expectation. Choosing an appropriate performance metric and finding a precise threshold is not a straightforward task, and the situation becomes more complicated if the workload pattern is continuously changing. Moreover, considering individual utilization information only may not be robust at scale [9]. For example, a cluster going from 1 to 2 instances increases capacity by 100%, while going from 10 to 11 instances increases capacity by only 10%. Current simple auto-scaling mechanisms normally ignore such non-constant effects when adding a fixed number of resources. Another factor such auto-scaling mechanisms overlook is the time lag to boot a VM instance. Though instance acquisition requests can be made at any time, the instances are not immediately available to users.
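The non-constant effect of fixed-step scaling can be made concrete with a short sketch (our illustration, not part of the paper's implementation; the numbers mirror the example above):

```python
def relative_capacity_gain(current_instances, added_instances):
    """Fractional capacity increase from adding a fixed number of
    instances to a homogeneous cluster."""
    return added_instances / current_instances

# A fixed "+1 instance" trigger doubles a 1-node cluster,
# but barely changes a 10-node cluster:
gain_small = relative_capacity_gain(1, 1)    # +100% capacity
gain_large = relative_capacity_gain(10, 1)   # +10% capacity
```

A trigger that always adds the same count therefore over- or under-provisions depending on the current cluster size, which is the motivation for proportional thresholding in [9].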
Such instance startup lag typically involves finding the right spot for the requested instances in the cloud data center, downloading the specified OS image, booting the virtual machine, finishing network setup, etc. Based on our experiences and research [5], it could take as long as 10 min to start an instance in Windows Azure, and such startup lag can change over time. In other words, it is very likely that users will request instances too late if they do not consider the instance startup time. Cost is also an issue worth careful consideration when using the cloud. Cloud computing instances are charged by the hour, and a fraction of an hour is counted as a whole hour. Therefore, it can be a waste of money to shut machines down before a whole hour of operation. In addition to the full-hour billing principle, clouds now usually offer various instance types, such as high-CPU and high-I/O instances. Choosing appropriate instance types based on the application workload can further save money and improve performance. We believe cloud scaling activities can be done better by considering different instance types rather than just manipulating instance numbers. In this paper, we present a cloud dynamic scaling mechanism which automatically scales the underlying cloud infrastructure up and down to accommodate changing workload, based on an application-level performance metric - the job deadline. During the scaling activities, the

mechanism tries to form a cheap VM startup plan by choosing appropriate instance types, which can save more cost compared to considering only one instance type. The rest of this paper is organized as follows. Section II introduces the related work. Section III identifies cloud scaling characteristics and describes the application performance model. Section IV formalizes the problem and details our implementation architecture on the Windows Azure platform. Section V evaluates our mechanism using both simulations and a real scientific application. Section VI concludes the paper and describes future work.

II. RELATED WORK

There have been a number of works on dynamic resource provisioning in virtualized computing environments [9][10][12][4]. Feedback control theory has been applied in these works to create autonomic resource management systems. In [9][10], a target range is proposed to solve the control stability issue. Further, [9] focuses on control system design. It points out that resizing instances is a coarse-grained actuator when applying control theory in the cloud environment, and proposes proportional thresholding to fix the non-constant effect problem. These works use infrastructure-level performance metrics and mainly focus on applying control theory in the cloud environment. They do not consider various VM types or the total running cost. In [8], dynamic scaling is explored for cloud web applications. The authors consider web-server-specific scaling indicators, such as the number of current users and the number of current connections. The work uses simple triggers and thresholds to determine instance numbers, and likewise does not consider VM type information or budget constraints. In [4], the authors consider extending computing capacity using cloud instances and compare the incurred cost of different policies. Particularly in cloud computing, dynamic scalability becomes more attractive and practical because of the unlimited resource pool.
Most cloud providers offer cloud management APIs that enable users to control their purchased computing infrastructure programmatically, but few of them directly offer a complete solution for automatic scaling activities in the cloud. The Amazon web service auto-scaling service is one of them. AWS auto-scaling is a mechanism to automatically scale virtual machine instances up and down based on user-defined triggers [1]. Triggers describe the thresholds of an observed performance metric, which include CPU utilization, network usage and disk operations. Whenever the monitored metric is above the upper limit, a predefined number of instances will be started, and when it is below the lower limit, a predefined number of instances will be shut down. Another work worth mentioning here is RightScale [3]. It works as a broker between users and cloud providers by providing unified interfaces. Users can interact with multiple cloud providers on one screen. The nicely designed user interface, highly customized OS images and many predefined utility scripts enable users to deploy and manage their cloud applications quickly and conveniently. For dynamic scaling, they borrow the idea of triggers and thresholds but broadly extend the scaling indicator choices. In addition to system utilization metrics, they further support some popular middleware performance metrics, such as MySQL connections, Apache HTTP server requests and DNS queries. However, these scaling indicators may not be able to support all application types, and not all of them can directly reflect quality-of-service requirements. Also, they do not consider cost explicitly. To the best of our knowledge, our work is the first auto-scaling mechanism which addresses both performance and budget constraints in the cloud.

III. CLOUD SCALING

A. Cloud Scaling Characteristics and Analysis

As a computing platform, clouds have distinct characteristics compared to utility computing and grid computing. We have identified the following characteristics which can largely affect the way people use cloud platforms, especially in cloud scaling activities.

Unlimited resources, limited budget.
Clouds offer users unlimited computing power and storage capacity. Though by default the resource capacity is capped at some number, e.g., 20 computing units per account in Windows Azure, such a usage cap is not a hard constraint. Cloud providers allow users to negotiate for more resources. Unlimited resources enable applications to scale to extremely large sizes. On the other hand, these unlimited resources are not free. Every cycle used and byte transferred is going to appear on the bill. A budget cap is a necessary constraint for users to consider when they deploy applications in clouds. Therefore, a cloud auto-scaling mechanism should explicitly consider user budget constraints when acquiring resources.

Non-ignorable VM instance acquisition time. Though cloud instance acquisition requests can be made at any time and computing power can be scaled up to extremely large sizes, it does not mean the cloud scales fast. Based on our previous experiences and research [5], it could take around 10 or more minutes from an instance acquisition request until the instance is ready to use. Moreover, such instance startup lag can keep changing over time. On the other side, VM shutdown time is quite stable, around 2-3 minutes in Windows Azure. This implies that users have to consider two issues in cloud dynamic scaling activities. First, count in the computing power of pending instances. If an instance is in pending status, it is going to be ready soon. Ignoring pending instances may result in booting more instances than necessary, and therefore wasted money. Second, count how long a pending instance has been acquired and how much longer it needs to become ready to use. If the startup time delay can be well observed and predicted, the application administrator can acquire machines in advance and prepare early for workload surges.

Full-hour billing model. The pay-as-you-go billing model is attractive, because it saves money when users shut down machines. However, VM instances are always billed by the hour. Fractional consumption of an instance-hour is counted as a full hour.
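The full-hour billing rule, and the shutdown policy it implies, can be captured in a few lines. This is an illustrative sketch (the function names and the 5-minute margin are ours, not from the paper's implementation):

```python
import math

def billed_hours(minutes_used):
    """Full-hour billing: any fraction of an instance-hour is
    charged as a whole hour."""
    return max(1, math.ceil(minutes_used / 60))

def near_hour_boundary(minutes_running, margin=5):
    """Shutdown policy sketch: only consider stopping an instance
    within `margin` minutes of a full hour of operation, since
    stopping earlier forfeits capacity that is already paid for."""
    return (60 - minutes_running % 60) <= margin
```

For example, a 10-minute session and a 60-minute session each cost one instance-hour, and starting and stopping twice within an hour costs two.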
In other words, 10-minute and 60-minute usage are both billed as 1 hour of usage, and if an instance is started and shut down twice in an hour, the user will be charged for two instance-hours. The shutdown time therefore can greatly affect cloud cost. If cloud auto-scaling

mechanisms do not consider this factor, they can easily be tricked by fluctuating workloads. Therefore, a reasonable policy is that whenever an instance is started, it is better to shut it down only when it approaches a full hour of operation.

Multiple instance types. Instead of offering one one-size-fits-all instance type, clouds now normally offer various instance types for users to choose from. Users can start different types of instances based on their applications and performance requirements. For example, EC2 instances are grouped into three families: standard, high-CPU and high-memory. Standard instances are suitable for general-purpose applications. High-CPU instances are well suited for computing-intensive applications, like image processing. High-memory instances are more suitable for I/O-intensive applications, like database systems and memory caching applications. One important point is that instances are charged differently, and not necessarily in proportion to their computing power. For example, in EC2, c1.medium costs twice as much as m1.small, but it offers 5 times more compute power than m1.small. Thus, for computing-heavy jobs it is cheaper to use c1.medium instead of the least expensive m1.small. Therefore, users need to choose instance types wisely. Choosing cost-effective instance types can both improve performance and save cost.

B. Cloud Application Performance Model

In this paper, we consider the problem of controlling cloud application performance by automatically manipulating the running instance types and instance numbers. Instead of using infrastructure-level performance metrics, we target an application-level performance metric: the response time of a submitted job. We believe a direct performance metric can better reflect users' performance requirements, and can therefore better instruct cloud scaling mechanisms toward precise VM scheduling. At the same time, we introduce cost as the other goal in our cloud scaling mechanism. Our problem statement is how to enable cloud applications to finish all submitted jobs before a user-specified deadline with as little money as possible.
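The c1.medium vs. m1.small comparison above reduces to cost per unit of compute. A quick sketch using the EC2-era figures quoted in the text (these prices and compute-power ratios are the paper's examples, not current ones):

```python
# (price $/hour, relative compute power) per the figures quoted above
m1_small = (0.085, 1)
c1_medium = (0.17, 5)

def cost_per_compute_unit(price, units):
    """Hourly price divided by relative compute power."""
    return price / units

small_rate = cost_per_compute_unit(*m1_small)    # $/unit-hour on m1.small
medium_rate = cost_per_compute_unit(*c1_medium)  # $/unit-hour on c1.medium
```

Twice the price but five times the power means c1.medium is the cheaper way to buy cycles for compute-heavy jobs, which is exactly why the choice of instance type belongs in the scaling decision.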
To keep the cloud application performance model general and simple, we consider a single-queue model as shown in Fig. 1. We also make the following assumptions. Workload consists of non-dependent jobs submitted to the job queue. Users do not have knowledge about the incoming workload in advance. Jobs are served in FCFS manner and are fairly distributed among the running instances. Every instance can process only a single job at a time. All jobs have the same performance goal, e.g., a 1-hour response time deadline (from submission to finish). The deadline can be changed dynamically. VM instance acquisition requests can be made at any time, but it may take a while for a newly requested pending instance to be ready to use. We call this time the VM startup delay. There can be different classes of jobs, such as computing-intensive jobs and I/O-intensive jobs. A job class may have different processing times on different instance types. For example, a computing-intensive job runs faster on high-CPU machines than on high-I/O machines. The job queue is large enough to hold all unprocessed jobs, and its performance scales well with an increasing number of instances.

Figure 1. Cloud application performance model

IV. SOLUTION & ARCHITECTURE

Based on the problem description in the previous section, we formalize the problem in this section and present our implementation architecture in Windows Azure.

A. Solution

One of the key insights to this problem is that, to finish all submitted jobs before the deadline, the auto-scaling mechanism needs to ensure that the computing power of all acquired VM instances is large enough to handle the workload. We summarize the key variables in Table I.

TABLE I. KEY VARIABLES USED IN CLOUD PERFORMANCE MODEL

  J_i        the i-th job class
  n_i        the number of J_i jobs submitted in the queue
  V_j        the j-th VM type
  I_k        the k-th instance (running or pending)
  c_j        the cost per hour of VM type V_j
  d_j        the average startup delay of VM type V_j
  s_k        the time already spent in pending status by I_k
  t_{i,j}    the average processing time of running job J_i on V_j
  D          deadline (e.g., 1 hour)
  C          budget constraint (dollars/hour)
  W          workload - jobs that need to be finished
  P          computing power - jobs that can be finished

Using the above notation, we define the system workload as a vector W. For each job class J_i, there are n_i submitted jobs:

  W = ( J_i, n_i )

The computing power of instance I_k can be represented as a vector P_k. The idea is to calculate how many jobs of each class can be finished before the deadline on instance I_k. We use the ratio between the deadline and the individual completion time (assuming all the jobs are finished by that instance) to approximate the number of jobs that can be finished.

For a running instance I_k:

  P_k = ( J_i,  D * n_i / sum_i' ( n_i' * t_{i',type(I_k)} ) )

For an instance whose status is pending, its computing power can be represented as follows, where s_k is the time already spent starting the instance:

  P_k = ( J_i,  (D - (d_{type(I_k)} - s_k)) * n_i / sum_i' ( n_i' * t_{i',type(I_k)} ) )

Therefore, the total computing power of the current instances can be represented as P = sum_k P_k. Clearly, if W > P, we need to start more instances P' (the prime marks new instances) to handle the increased workload. The problem becomes finding a VM instance combination plan in which

  P' >= W - P

At the same time, we also want to minimize the cost we spend on these newly added instances:

  Min( sum_I' c_{type(I')} )

In cases where the budget is insufficient, the idea is to generate as much computing power as possible within the budget constraint:

  Max( P' )   subject to   sum_I' c_{type(I')} <= C - sum_I c_{type(I)}

When an instance I_k is approaching a full hour of operation, we need to decide whether or not to shut the machine down. In this case, we can calculate the computing power without instance I_k and compare it with the workload. If the computing power is still large enough to handle the workload, we can remove the instance:

  P - P_k >= W

To better explain the problem, we can go through a simple example. Assume we have three job classes (J_1, J_2, J_3) and three VM types (V_1, V_2, V_3). Currently, the workload in the system is [600, 600, 600] and there are two running instances, I_1 and I_2. Our goal is to find a VM type combination [n'_1, n'_2, n'_3] whose computing power is greater than or equal to the target computing power and whose cost is minimal among all possible VM type combinations:

  P' >= W - P_{I_1} - P_{I_2}

  Min( c_1 * n'_1 + c_2 * n'_2 + c_3 * n'_3 )

  where  c_1 * n'_1 + c_2 * n'_2 + c_3 * n'_3 + c_{type(I_1)} + c_{type(I_2)} <= C

From the above analysis, our cloud auto-scaling mechanism reduces to several integer programming problems. We try to minimize the cost or maximize the computing power under either computing power constraints or budget constraints.
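To make the formulation concrete, the sketch below enumerates candidate instance-type combinations and keeps the cheapest one whose added computing power covers the deficit W - P. It is a brute-force stand-in for the integer-programming solver (the actual implementation uses Microsoft Solver Foundation); the processing times, prices and deficit are illustrative values modeled on the simulation parameters, not measured data:

```python
from itertools import product
from math import inf

# t[i][j]: processing time (s) of job class J_i on VM type V_j
# c[j]:    hourly cost of VM type V_j;  D: deadline (s)
t = [[300, 210, 210],   # mixed jobs
     [300, 75, 300],    # computing-intensive jobs
     [300, 300, 75]]    # I/O-intensive jobs
c = [0.085, 0.17, 0.17]
D = 3600

def power_per_instance(j, n):
    """Jobs of each class one V_j instance finishes before the deadline,
    assuming it processes the queue mix n proportionally - the
    D * n_i / sum(n_i' * t_{i'}) ratio from the formulation above."""
    total = sum(n[i] * t[i][j] for i in range(len(n)))
    return [D * n[i] / total for i in range(len(n))]

def cheapest_plan(deficit, max_per_type=10):
    """Exhaustively search combinations [n'_1, n'_2, n'_3] and return
    the min-cost plan whose power covers the workload deficit."""
    best_plan, best_cost = None, inf
    for plan in product(range(max_per_type + 1), repeat=len(c)):
        power = [0.0] * len(deficit)
        for j, count in enumerate(plan):
            if count:
                per = power_per_instance(j, deficit)
                for i in range(len(deficit)):
                    power[i] += count * per[i]
        if all(power[i] >= deficit[i] for i in range(len(deficit))):
            cost = sum(plan[j] * c[j] for j in range(len(c)))
            if cost < best_cost:
                best_plan, best_cost = plan, cost
    return best_plan, best_cost
```

A real deployment would call an IP solver rather than enumerate, and would use the pending-instance variant (D - (d - s)) for machines still booting; the structure of the search is the same.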
There are quite a few standard approaches to solving integer programming problems, such as cutting-plane and branch-and-bound methods [13][14]. We will not duplicate the details here. In addition to determining the number and type of VM instances, there are some other cases, like admission control and deadline miss handling, which are also interesting to consider in cloud auto-scaling mechanisms. However, our intention is not to create a hard real-time cloud system in which all job deadlines are guaranteed; we focus on automatic resource provisioning based on both performance goals and budget constraints. Deadline is just the metric we choose, because it can better reflect users' performance desires. Therefore, in real practice we believe these are more like policy questions. Users can choose their own policies based on their applications. For example, to maintain service availability and basic computing power, users can decide on a minimum number of running instances. In other words, even when there is no workload, a cloud application will always have at least 1 running instance. For admission control, when there is insufficient budget, the auto-scaling mechanism can either accept the job and try to run with the maximum computing power within the user budget constraint, or simply deny the job. In either case, users may want to get a notification from the mechanism. For deadline miss handling, users can either leave it alone or allow the auto-scaling mechanism to acquire as many instances as possible to speed up the remaining processing. In our implementation, we have implemented these policies and let users configure which policy is most appropriate for their cases; users are allowed to implement their own policies as well.

B. Architecture

We have designed and implemented our cloud auto-scaling mechanism in Windows Azure [3]. Figure 2 shows the architecture of our implementation. The implementation includes four components: performance monitor, history repository, auto-scaling decider and VM manager.
The performance monitor observes the current workload in the system, collects actual job processing time and arrival pattern information, and updates the history repository. The VM manager works as the adapter between our auto-scaling mechanism and cloud providers. It monitors all pending and ready VM instances, and updates the history repository with the actual startup times of different VM types. Moreover, it executes the VM startup plan generated by the auto-scaling decider and directly invokes the cloud provider's resource provisioning APIs - in our case, the Windows Azure management API. Our intention is that the VM manager hides all cloud provider details and can be easily replaced with other cloud adapters. Such information hiding enhances the reusability and

customizability of our implementation when working with different cloud providers. The history repository contains two data structures. One is the configuration file, which includes the application deadline, budget constraint, monitor execution interval information, etc. As shown in Fig. 2, application administrators can dynamically control the behavior of the cloud auto-scaling mechanism by changing the configuration file. The other data structure is the historical data table, which records the historical job processing time and arrival pattern information provided by the performance monitor, and the instance startup delay information provided by the VM manager. By maintaining historical data, the repository improves the precision of the input parameters and also helps the decider prepare early for possible workload surges. The decider is the core of our cloud auto-scaling mechanism. Relying on real-time workload and VM status information from the performance monitor and VM manager, as well as configuration parameters and historical records from the history repository, it solves the integer programming problem we formalized in the previous section and generates a VM startup plan for the VM manager to execute. The VM startup plan can be empty, because the workload may be well handled by existing instances, or it can contain instance type and number pairs to notify the VM manager to acquire enough computing power. In our current implementation, we use Microsoft Solver Foundation [11] to solve the integer programming problem. Instance acquisition actions are initiated by the decider. After every sleep interval, it invokes the logic to determine the VM startup plan. On the other side, instance release actions are initiated by the VM manager, because it monitors which instances are approaching a full hour of operation and could be potential shut-down targets. But it has to ask the decider whether the remaining computing power is large enough to handle the workload. We have published our current implementation as a library and plugged it into the MODIS application [7]. The evaluation of our mechanism in this real scientific application can be found in the next section.
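The decider/VM-manager interaction described above can be outlined as a periodic control loop. All class and method names below are our own sketch, not the published library's API (the actual implementation is a Windows Azure library using Microsoft Solver Foundation):

```python
class Decider:
    """Sketch of the periodic scaling decision: compare workload against
    the computing power of running + pending instances, and hand a
    min-cost startup plan to the VM manager when there is a deficit."""

    def __init__(self, monitor, vm_manager, planner):
        self.monitor = monitor        # reports outstanding jobs per class
        self.vm_manager = vm_manager  # executes startup plans via cloud APIs
        self.planner = planner        # solves the integer program

    def run_once(self):
        workload = self.monitor.current_workload()
        power = self.vm_manager.total_computing_power()  # running + pending
        deficit = [max(0.0, w - p) for w, p in zip(workload, power)]
        if any(deficit):
            plan = self.planner(deficit)  # e.g., cheapest covering combination
            self.vm_manager.execute(plan)
        return deficit


# Minimal stubs demonstrating one decision cycle:
class StubMonitor:
    def current_workload(self):
        return [600, 600, 600]

class StubManager:
    def __init__(self):
        self.executed = []
    def total_computing_power(self):
        return [500, 650, 400]
    def execute(self, plan):
        self.executed.append(plan)

manager = StubManager()
decider = Decider(StubMonitor(), manager, planner=lambda d: ("plan-for", d))
deficit = decider.run_once()
```

Counting pending instances in `total_computing_power` reflects the paper's point that ignoring machines already booting leads to over-acquisition.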
Figure 2. Architecture of cloud auto-scaling in Azure

V. EVALUATION

In this section, we evaluate our mechanism using both simulations and a real scientific application (MODIS) running in Windows Azure. Through the simulation framework, we can easily control the input parameters, such as workload pattern and job processing time, which helps identify the key factors in our mechanism. Moreover, using simulation greatly reduces the evaluation time and cost. The scientific application tests our mechanism's performance in a real environment. In our evaluation, we simulated three types of jobs: mixed, computing-intensive and I/O-intensive. At the same time, we simulated three types of machines: General, High-CPU and High-I/O. We summarize the simulation parameters in Table II. The simulation data is derived from the pricing tables and instance descriptions of EC2. For example, in EC2, a c1.medium instance costs twice as much as m1.small, but it offers 5 times more compute power than m1.small [1]. In our case, we assume mixed jobs are half computation and half I/O. The speedup factor of the more powerful machines is 4-5. Job arrivals for each class average 300 jobs/hour with STD 50 jobs/hour.

TABLE II. AVERAGE PROCESSING TIME

  Instance type (cost, startup delay)    Mixed               Computing-Intensive   I/O-Intensive
  General ($0.085/hour, delay 600s)      Avg 300s, STD 50s   Avg 300s, STD 50s     Avg 300s, STD 50s
  High-CPU ($0.17/hour, delay 720s)      Avg 210s, STD 25s   Avg 75s, STD 15s      Avg 300s, STD 50s
  High-IO ($0.17/hour, delay 720s)       Avg 210s, STD 25s   Avg 300s, STD 50s     Avg 75s, STD 15s

A. Deadline

For the deadline performance goal, we consider two cases.

1) Stable workload with changing deadline. We generate the workload using Table II and plot the job response time in Fig. 3. Every data point in the graph reflects the job response time over a 5-minute interval, and we record the average, minimum and maximum response time for all jobs finished in that interval. The deadline is first set to 3600s, then changed to 5400s, and finally switched back.
The purpose is to evaluate our mechanism's reaction to dynamic changes in user performance requirements. Fig. 3 shows that more than 95% of jobs are finished within the deadline, and most of the misses happen at the second deadline change. This is mainly because our auto-scaling mechanism runs every 5 minutes, and VM instances can only be ready 10-12 minutes after the acquisition requests. Besides, we also calculate the instantaneous instance utilization rate. Job processing is considered utilized, while all other cases, such as pending and idling, are considered unutilized. The high utilization rate (average 94%) shows that our mechanism does not aggressively acquire instances to guarantee the deadline; 6% of the time is spent on VM startups.

2) Changing workload with fixed deadline. In this test, we fix the deadline at 3600s and create three workload peaks. The base workload is 300 mixed jobs per hour. The first workload peak adds another 300 mixed jobs per hour. The second peak adds 300 computing-intensive jobs per hour, and the third one adds 300 I/O-intensive jobs per hour. The purpose of this

test is to evaluate our mechanism's reaction to suddenly increasing workload and job type changes. Such workload patterns are commonly seen in large-volume data processing applications, in which data computation and analysis are performed in the daytime, and data backups and movements are performed at night and on holidays. From Fig. 4, we can see that the deadline goal is well met for all three workload peaks. When the workload goes back to normal, the instances over-acquired during peak moments quickly reduce job response time. As more and more unnecessary instances are shut down (approaching a full hour of operation), the response time goes back to average.

Figure 3. Stable workload with changing deadline

Figure 4. Changing workload with fixed deadline

B. Cost

Using the same evaluation as for changing workload with fixed deadline, we compare the cost of using different types of VM instances. The VM type combinations are illustrated in Table III. Fig. 5 shows the comparison result.

TABLE III. INSTANCE TYPE

  VM Types                                   Total Cost ($)   % more than optimal
  Choice #1: General                         98.52            43%
  Choice #2: High-CPU                        -                87%
  Choice #3: High-IO                         -                88%
  Choice #4: General, High-CPU, High-IO      78.62            14%
  Optimal:   General, High-CPU, High-IO      68.85            -

To evaluate the performance of our mechanism, in addition to the four choices, we also calculate the possible optimal cost for the same workload and compare our solution with it. The optimal solution can be obtained because we know the workload in advance and we assume we can always put a job on the most cost-effective machine, e.g., put computing-intensive jobs on High-CPU instances for processing. From Fig. 5, we can see that by considering all available instance types (Choice #4), our mechanism can adapt to the workload changes and choose cost-effective instances.
In this way, the real-time cost is always close to the optimal cost. On the other side, General instances always perform around average for all three workload peaks, while High-CPU and High-IO can only save cost on their preferred workload surges. Fig. 6 shows the accumulated cost. Choice #4 incurs 14% more cost than the optimal solution, saves 20% compared to the General instance choice, and saves 45% compared to High-CPU and High-IO. Because of symmetry, High-CPU and High-IO instances end up with almost the same cost. General instances have a lower cost on average; therefore, in the long run, they outperform the High-CPU and High-IO cases. By choosing appropriate instance types, Choice #4 incurs less cost in all three workload peaks, like the optimal solution; hence, it outperforms all the other cases. There are two reasons why our solution cannot make the optimal decision. First, the auto-scaling decider does not know the future workload and can only make decisions locally. Second, it cannot control which running instance processes a given job.

Figure 5. Instantaneous cost of changing workload & fixed deadline

Figure 6. Accumulated cost of changing workload & fixed deadline

C. MODIS

In addition to simulations, we have also applied our approach to a real scientific cloud application - MODIS [7]. MODIS is a cloud application built on the Windows Azure platform for large-volume biophysical data processing. It integrates data from ground-based sensors with Moderate Resolution Imaging Spectroradiometer satellite data. It is now used by the biometeorology lab at UC Berkeley. We first introduce the MODIS workload and some of the configuration parameters applied. The MODIS workload can be understood in the following way: 20XX indicates the year, Terra and Aqua represent satellite images, and (x-y) represents the period from day x to day y. For all our tests, we use all 15 available tile images in the MODIS system for a single day's data processing. For example, Terra 2004 (10-12) means processing all 15 tiles of Terra images from Jan 10th to Jan 12th, 2004. This implies that in total 45 (15 x 3) jobs are submitted at once. In our evaluation, we find the actual job processing times range from 10 sec to 13 min with an average of 5 min, and jobs are processed most cost-effectively on small instance types. We set the performance monitor interval to 1 min, the decider interval to 5 min, the initial average VM delay to 15 min, and we only notify users when a deadline is missed. In the MODIS evaluation, we run both moderate-scale (up to 20 instances) and large-scale (up to 90 instances) tests. In the moderate-scale evaluation, two test cases are randomly selected. One is Terra 2004 (10-12) and the other is Aqua 2008 (30-32). We record the test results in Table IV, including both performance and instance hours consumed (i.e., cost). The table shows that the 2- and 3-hour deadline goals are better met than the 1-hour deadline for the same workloads. After investigating the VM instance startup history, we find this is largely because the instance startup delay exceeded our expectations. For example, in the 1-hour deadline tests, the average startup delay is around 22 minutes. Some instances even took 50 minutes to be ready. There is little time left for our mechanism to react in such cases.
By contrast, in the longer-deadline tests our mechanism acquired fewer instances, so the results are less affected by startup delay variance. In both test cases, the theoretical computing power needed is 4 instance hours (as if all jobs were processed by a single instance). All tests actually acquired more than this, e.g., 9 or 10 instance hours for the 1-hour deadline test cases. This is caused by making up for VM startup delay and by the imprecision of the initial job processing time configuration. With longer deadlines, such over-acquisition is corrected, because fewer instances are acquired and the job processing time is also updated from the history table. Therefore, the longer-deadline test cases also incur less cost.

TABLE IV. MODIS MODERATE SCALE EVALUATION

Workload (theoretical cost)                    | 1-hour deadline               | 2-hour deadline              | 3-hour deadline
Terra 2004 (10-12), 45 jobs (4 C.H.* or $0.48) | 18 min late, 9 C.H. or $1.08  | 8 min early, 6 C.H. or $0.72 | 2 min early, 5 C.H. or $0.60
Aqua 2008 (30-32), 45 jobs (4 C.H. or $0.48)   | 15 min late, 10 C.H. or $1.20 | 2 min early, 7 C.H. or $0.84 | 29 min early, 5 C.H. or $0.60
* C.H. = computing hour; 1 C.H. = $0.12 in Windows Azure

For the large-scale (up to 90 instances) MODIS evaluations, we performed two tests and recorded the results in Table V. As in the moderate-scale evaluations, the longer-deadline tests show better results; again, unexpected VM startup delay is the dominating factor. We find that Windows Azure has longer VM startup delays, with larger variance, when acquiring large numbers of instances. For example, in the Terra & Aqua 2006 (1-75) 2-hour deadline test, the average VM startup delay is 40 minutes, and one instance was still not ready 2 hours later. For the 2006 (1-150) 2-hour deadline test, our decider calculation shows 95 instances are needed, which is beyond our resource limit; this job is successfully identified and denied admission.

TABLE V. MODIS LARGE SCALE EVALUATION

Workload (theoretical cost)                               | 2-hour deadline                | 4-hour deadline
Terra & Aqua 2006 (1-75), 1125 jobs (93 C.H. or $11.16)   | 2 min late, 170 C.H. or $20.40 | 6 min early, 132 C.H. or $15.84
Terra & Aqua 2006 (1-150), 2250 jobs (185 C.H. or $22.20) | Admission denied               | 22 min early, 243 C.H. or $29.16

To better demonstrate the working details of our mechanism, we present the instance acquisition and release information for the test case Terra & Aqua 2006 (1-75) with a 4-hour deadline in Fig. 7. This test includes 1125 jobs in total, submitted at time 0. As shown in the figure, after around 4 minutes the decider started 34 instances (instances 1-34) to handle the workload. The real instance acquisition time took much longer than configured; therefore, around 1.5 hours later, the decider started another 6 instances (instances 35-40) to make up for the unexpected startup delay. As they approached 2 full hours of operation, these 6 instances were shut down due to the decreased workload. After all jobs finished, instances 1 to 34 were shut down as they approached 4 hours of operation; at that point, only instance 0 was kept alive to maintain service availability. In this case, the theoretical job processing time needed is 93 hours; the real instance hours consumed are 132, with 36 hours spent on VM startup. Both the moderate- and large-scale tests show that longer deadlines yield better performance and incur less cost, because longer-deadline tests are less affected by VM startup delay and have more chances to use the updated job processing times.

Figure 7. Instance acquisition and release (instance number vs. time in hours; instance states: Acquiring, Ready, Released)
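The dollar figures in Tables IV and V follow mechanically from per-instance-hour billing. A sketch, assuming the flat $0.12-per-hour small-instance rate quoted in Table IV:

```python
RATE_PER_COMPUTING_HOUR = 0.12  # Windows Azure small instance, per Table IV

def billed_cost(computing_hours: int) -> float:
    """Cost in dollars for a number of billed computing hours (C.H.)."""
    return round(computing_hours * RATE_PER_COMPUTING_HOUR, 2)

print(billed_cost(93))   # theoretical minimum for Terra & Aqua 2006 (1-75): 11.16
print(billed_cost(132))  # actually consumed in the 4-hour deadline test: 15.84
```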
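The release pattern visible in Fig. 7, where instances run through their paid hour and are shut down only as they approach a full-hour boundary, can be sketched as follows. This is our simplified illustration of the decider's shutdown rule; the names and the 5-minute margin are assumptions, not values from the paper:

```python
def should_release(uptime_minutes: int, still_needed: bool,
                   margin_minutes: int = 5) -> bool:
    """Shut an instance down only when it is no longer needed AND it is
    about to cross into the next billed hour; otherwise keep it running,
    since the current hour is already paid for."""
    minutes_into_current_hour = uptime_minutes % 60
    near_hour_boundary = minutes_into_current_hour >= 60 - margin_minutes
    return (not still_needed) and near_hour_boundary

print(should_release(115, still_needed=False))  # idle, near 2 full hours -> True
print(should_release(75, still_needed=False))   # idle, but mid-hour (paid) -> False
print(should_release(115, still_needed=True))   # workload remains -> False
```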

VI. CONCLUSION & FUTURE WORK

In this paper, we present a mechanism to dynamically scale cloud computing instances based on deadline and budget information. The mechanism automatically scales VM instances up and down by considering two aspects of a cloud application: performance and budget. From the performance perspective, our cloud auto-scaling mechanism enables cloud applications to finish all submitted jobs within the desired deadline by acquiring enough VM instances. From the cost perspective, it reduces user cost by acquiring the appropriate instance types, which cost less money, and by shutting down unnecessary instances as they approach full-hour operation. We formulated instance startup plan generation as an optimization problem and used integer programming to solve it. We have designed and implemented our mechanism on the Windows Azure platform and evaluated it using both simulations and a real scientific application, MODIS. Evaluation results show that our mechanism can provision enough instances to meet user deadline performance goals. Even under dynamic deadline changes or sudden workload surges, it adapts well to these external behaviors: more than 90% of submitted jobs meet the deadline. In our solution, integer programming is used to identify the most cost-effective instance types based on the job composition of the incoming workload, and therefore our approach incurs less cost than fixed instance type choices. The cost comparison shows that choosing the appropriate instance type saves 20%-45% compared to fixed instance types, while incurring 15% more than the optimal cost. The MODIS evaluation shows that VM startup delay plays quite an important role in cloud auto-scaling mechanisms. Long, unexpected VM startup delays not only affect performance but can also dominate the utilization rate, and therefore the cost, especially for short-deadline cases. Workload and job processing time are also very important factors in our mechanism, because they directly determine the number and type of provisioned instances.
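As a single-instance-type special case of the integer-programming formulation mentioned above, the minimum instance count for a deadline can be sketched as below. This is illustrative only; the parameter names and the fixed startup-delay discount are our assumptions, not the paper's exact model:

```python
import math

def instances_needed(remaining_job_minutes: float,
                     minutes_to_deadline: float,
                     expected_startup_delay: float = 15) -> int:
    """Fewest identical instances that can drain the queued work before
    the deadline, after discounting the expected VM startup delay."""
    usable_minutes = minutes_to_deadline - expected_startup_delay
    if usable_minutes <= 0:
        raise ValueError("deadline unreachable after startup delay")
    return math.ceil(remaining_job_minutes / usable_minutes)

# 45 jobs x 5 min average = 225 job-minutes against a 1-hour deadline
print(instances_needed(225, 60))  # -> 5
```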
We use a history repository to improve their precision in our implementation. In the future, one extension of our work is to support job-class-level deadlines and to extend the cloud application performance model to multi-tier architectures. By considering each job class individually and controlling its execution instances, better performance can be achieved by running jobs on the most cost-effective instance types, saving more money than fair job distribution. Currently, we are trying to use multiple queues to submit jobs by class. In a multi-tier application environment, the amount of resources needed to achieve the QoS goals might differ at each tier and may also depend on the availability of resources in other tiers. In both cases, a global view of the application is needed to generate optimized resource provisioning plans. Second, besides on-demand pay-as-you-go instances, clouds now offer other instance types as well, such as spot instances and reserved instances. Spot instances cost around 1/3 of regular instance prices; e.g., the average price of an m1.small spot instance is 3 cents an hour, while the same type of on-demand instance costs 8.5 cents an hour. The lower cost comes from the fact that cloud providers can automatically shut down users' spot instances if the spot price rises above the predefined bid price. Reserved instances are even cheaper in the long run, in exchange for a contract fee paid in advance. Complexity is added if cloud auto-scaling considers these cheaper instances, because in our experience spot instances take even longer and less deterministic time to start. The auto-scaling controller needs to consider all these factors to make a VM instance scheduling decision. To maintain service availability, reserved instances can be treated as the always-running instances. The other direction we are working on is workflow execution in the cloud. In this paper, we model the workload as jobs submitted to a queue. The cost-saving VM startup plan can only be considered over an interval rather than globally, because users can never know the future workload in advance.
In a workflow context, however, it is different: users can foresee all the jobs and their dependencies, so a globally optimized VM startup plan can be generated. Besides, data movement costs could make it an even more interesting problem. We also consider extending our evaluations to other real applications, such as well-known Internet workload traces, to see how our mechanism works in different workload contexts.

REFERENCES
[1] AWS auto-scaling.
[2] Windows Azure.
[3] RightScale.
[4] M. Assuncao et al., "Evaluating the Cost-Benefit of Using Cloud Computing to Extend the Capacity of Clusters," 18th ACM International Symposium on High Performance Distributed Computing (HPDC 2009).
[5] Z. Hill, J. Li, M. Mao, A. Ruiz-Alvarez, and M. Humphrey, "Early Observations on the Performance of Windows Azure," 1st Workshop on Scientific Cloud Computing, 2010.
[6] R. Doyle, J. Chase, O. Asad, W. Jin, and A. Vahdat, "Model-Based Resource Provisioning in a Web Service Utility," in Proceedings of the USENIX Symposium on Internet Technologies and Systems, 2003.
[7] J. Li, D. Agarwal, M. Humphrey, C. van Ingen, K. Jackson, and Y. Ryu, "eScience in the Cloud: A MODIS Satellite Data Reprojection and Reduction Pipeline in the Windows Azure Platform," IPDPS, 2010.
[8] Trieu C. Chieu, Ajay Mohindra, Alexei A. Karve, and Alla Segal, "Dynamic Scaling of Web Applications in a Virtualized Cloud Computing Environment," ICEBE 2009.
[9] H. Lim, S. Babu, J. Chase, and S. Parekh, "Automated Control in Cloud Computing: Challenges and Opportunities," in 1st Workshop on Automated Control for Datacenters and Clouds, June 2009.
[10] P. Padala, K. Shin, X. Zhu, M. Uysal, Z. Wang, S. Singhal, A. Merchant, and K. Salem, "Adaptive Control of Virtualized Resources in Utility Computing Environments," EuroSys, 2007.
[11] Microsoft Solver Foundation.
[12] B. Urgaonkar, P. Shenoy, A. Chandra, and P. Goyal, "Dynamic Provisioning of Multi-Tier Internet Applications," ICAC, 2005.
[13] B. Rountree, D. Lowenthal, S. Funk, V. Freeh, B. de Supinski, and M. Schulz, "Bounding Energy Consumption in Large-Scale MPI Programs," SC 2007, November 10-16, 2007.
[14] V. Swaminathan and K. Chakrabarty, "Real-Time Task Scheduling for Energy-Aware Embedded Systems," IEEE Real-Time Systems Symposium, November 2000.


More information

A New Quality of Service Metric for Hard/Soft Real-Time Applications

A New Quality of Service Metric for Hard/Soft Real-Time Applications A New Qualty of Servce Metrc for Hard/Soft Real-Tme Applcatons Shaoxong Hua and Gang Qu Electrcal and Computer Engneerng Department and Insttute of Advanced Computer Study Unversty of Maryland, College

More information

Lecture 2: Single Layer Perceptrons Kevin Swingler

Lecture 2: Single Layer Perceptrons Kevin Swingler Lecture 2: Sngle Layer Perceptrons Kevn Sngler kms@cs.str.ac.uk Recap: McCulloch-Ptts Neuron Ths vastly smplfed model of real neurons s also knon as a Threshold Logc Unt: W 2 A Y 3 n W n. A set of synapses

More information

Network Aware Load-Balancing via Parallel VM Migration for Data Centers

Network Aware Load-Balancing via Parallel VM Migration for Data Centers Network Aware Load-Balancng va Parallel VM Mgraton for Data Centers Kun-Tng Chen 2, Chen Chen 12, Po-Hsang Wang 2 1 Informaton Technology Servce Center, 2 Department of Computer Scence Natonal Chao Tung

More information

DBA-VM: Dynamic Bandwidth Allocator for Virtual Machines

DBA-VM: Dynamic Bandwidth Allocator for Virtual Machines DBA-VM: Dynamc Bandwdth Allocator for Vrtual Machnes Ahmed Amamou, Manel Bourguba, Kamel Haddadou and Guy Pujolle LIP6, Perre & Mare Cure Unversty, 4 Place Jusseu 755 Pars, France Gand SAS, 65 Boulevard

More information

SMART: Scalable, Bandwidth-Aware Monitoring of Continuous Aggregation Queries

SMART: Scalable, Bandwidth-Aware Monitoring of Continuous Aggregation Queries : Scalable, Bandwdth-Aware Montorng of Contnuous Aggregaton Queres Navendu Jan, Praveen Yalagandula, Mke Dahln, and Yn Zhang Unversty of Texas at Austn HP Labs ABSTRACT We present, a scalable, bandwdth-aware

More information

Enterprise Master Patient Index

Enterprise Master Patient Index Enterprse Master Patent Index Healthcare data are captured n many dfferent settngs such as hosptals, clncs, labs, and physcan offces. Accordng to a report by the CDC, patents n the Unted States made an

More information

An MILP model for planning of batch plants operating in a campaign-mode

An MILP model for planning of batch plants operating in a campaign-mode An MILP model for plannng of batch plants operatng n a campagn-mode Yanna Fumero Insttuto de Desarrollo y Dseño CONICET UTN yfumero@santafe-concet.gov.ar Gabrela Corsano Insttuto de Desarrollo y Dseño

More information

Preventive Maintenance and Replacement Scheduling: Models and Algorithms

Preventive Maintenance and Replacement Scheduling: Models and Algorithms Preventve Mantenance and Replacement Schedulng: Models and Algorthms By Kamran S. Moghaddam B.S. Unversty of Tehran 200 M.S. Tehran Polytechnc 2003 A Dssertaton Proposal Submtted to the Faculty of the

More information

The Greedy Method. Introduction. 0/1 Knapsack Problem

The Greedy Method. Introduction. 0/1 Knapsack Problem The Greedy Method Introducton We have completed data structures. We now are gong to look at algorthm desgn methods. Often we are lookng at optmzaton problems whose performance s exponental. For an optmzaton

More information

The purpose of this benchmark was to compare the performance of

The purpose of this benchmark was to compare the performance of Ruby on Rals Database Benchmark: Clustrx and MySQL by Clayton Cole and Nel Harkns The purpose of ths benchmark was to compare the performance of Ruby on Rals usng MySQL, an open-source database commonly

More information

Sciences Shenyang, Shenyang, China.

Sciences Shenyang, Shenyang, China. Advanced Materals Research Vols. 314-316 (2011) pp 1315-1320 (2011) Trans Tech Publcatons, Swtzerland do:10.4028/www.scentfc.net/amr.314-316.1315 Solvng the Two-Obectve Shop Schedulng Problem n MTO Manufacturng

More information

BUSINESS PROCESS PERFORMANCE MANAGEMENT USING BAYESIAN BELIEF NETWORK. 0688, dskim@ssu.ac.kr

BUSINESS PROCESS PERFORMANCE MANAGEMENT USING BAYESIAN BELIEF NETWORK. 0688, dskim@ssu.ac.kr Proceedngs of the 41st Internatonal Conference on Computers & Industral Engneerng BUSINESS PROCESS PERFORMANCE MANAGEMENT USING BAYESIAN BELIEF NETWORK Yeong-bn Mn 1, Yongwoo Shn 2, Km Jeehong 1, Dongsoo

More information

2. SYSTEM MODEL. the SLA (unlike the only other related mechanism [15] we can compare it is never able to meet the SLA).

2. SYSTEM MODEL. the SLA (unlike the only other related mechanism [15] we can compare it is never able to meet the SLA). Managng Server Energy and Operatonal Costs n Hostng Centers Yyu Chen Dept. of IE Penn State Unversty Unversty Park, PA 16802 yzc107@psu.edu Anand Svasubramanam Dept. of CSE Penn State Unversty Unversty

More information

Checkng and Testng in Nokia RMS Process

Checkng and Testng in Nokia RMS Process An Integrated Schedulng Mechansm for Fault-Tolerant Modular Avoncs Systems Yann-Hang Lee Mohamed Youns Jeff Zhou CISE Department Unversty of Florda Ganesvlle, FL 326 yhlee@cse.ufl.edu Advanced System Technology

More information

Performance Analysis of Energy Consumption of Smartphone Running Mobile Hotspot Application

Performance Analysis of Energy Consumption of Smartphone Running Mobile Hotspot Application Internatonal Journal of mart Grd and lean Energy Performance Analyss of Energy onsumpton of martphone Runnng Moble Hotspot Applcaton Yun on hung a chool of Electronc Engneerng, oongsl Unversty, 511 angdo-dong,

More information

Optimal Map Reduce Job Capacity Allocation in Cloud Systems

Optimal Map Reduce Job Capacity Allocation in Cloud Systems Optmal Map Reduce Job Capacty Allocaton n Cloud Systems Marzeh Malemajd Sharf Unversty of Technology, Iran malemajd@ce.sharf.edu Danlo Ardagna Poltecnco d Mlano, Italy danlo.ardagna@polm.t Mchele Cavotta

More information

Dynamic Pricing for Smart Grid with Reinforcement Learning

Dynamic Pricing for Smart Grid with Reinforcement Learning Dynamc Prcng for Smart Grd wth Renforcement Learnng Byung-Gook Km, Yu Zhang, Mhaela van der Schaar, and Jang-Won Lee Samsung Electroncs, Suwon, Korea Department of Electrcal Engneerng, UCLA, Los Angeles,

More information

To manage leave, meeting institutional requirements and treating individual staff members fairly and consistently.

To manage leave, meeting institutional requirements and treating individual staff members fairly and consistently. Corporate Polces & Procedures Human Resources - Document CPP216 Leave Management Frst Produced: Current Verson: Past Revsons: Revew Cycle: Apples From: 09/09/09 26/10/12 09/09/09 3 years Immedately Authorsaton:

More information

Credit Limit Optimization (CLO) for Credit Cards

Credit Limit Optimization (CLO) for Credit Cards Credt Lmt Optmzaton (CLO) for Credt Cards Vay S. Desa CSCC IX, Ednburgh September 8, 2005 Copyrght 2003, SAS Insttute Inc. All rghts reserved. SAS Propretary Agenda Background Tradtonal approaches to credt

More information

Performance Analysis and Comparison of QoS Provisioning Mechanisms for CBR Traffic in Noisy IEEE 802.11e WLANs Environments

Performance Analysis and Comparison of QoS Provisioning Mechanisms for CBR Traffic in Noisy IEEE 802.11e WLANs Environments Tamkang Journal of Scence and Engneerng, Vol. 12, No. 2, pp. 143149 (2008) 143 Performance Analyss and Comparson of QoS Provsonng Mechansms for CBR Traffc n Nosy IEEE 802.11e WLANs Envronments Der-Junn

More information

VoIP Playout Buffer Adjustment using Adaptive Estimation of Network Delays

VoIP Playout Buffer Adjustment using Adaptive Estimation of Network Delays VoIP Playout Buffer Adjustment usng Adaptve Estmaton of Network Delays Mroslaw Narbutt and Lam Murphy* Department of Computer Scence Unversty College Dubln, Belfeld, Dubln, IRELAND Abstract The poor qualty

More information

Lecture 3: Force of Interest, Real Interest Rate, Annuity

Lecture 3: Force of Interest, Real Interest Rate, Annuity Lecture 3: Force of Interest, Real Interest Rate, Annuty Goals: Study contnuous compoundng and force of nterest Dscuss real nterest rate Learn annuty-mmedate, and ts present value Study annuty-due, and

More information

An Integrated Dynamic Resource Scheduling Framework in On-Demand Clouds *

An Integrated Dynamic Resource Scheduling Framework in On-Demand Clouds * JOURNAL OF INFORMATION SCIENCE AND ENGINEERING 30, 1537-1552 (2014) An Integrated Dynamc Resource Schedulng Framework n On-Demand Clouds * College of Computer Scence and Technology Zhejang Unversty Hangzhou,

More information

Causal, Explanatory Forecasting. Analysis. Regression Analysis. Simple Linear Regression. Which is Independent? Forecasting

Causal, Explanatory Forecasting. Analysis. Regression Analysis. Simple Linear Regression. Which is Independent? Forecasting Causal, Explanatory Forecastng Assumes cause-and-effect relatonshp between system nputs and ts output Forecastng wth Regresson Analyss Rchard S. Barr Inputs System Cause + Effect Relatonshp The job of

More information

An Interest-Oriented Network Evolution Mechanism for Online Communities

An Interest-Oriented Network Evolution Mechanism for Online Communities An Interest-Orented Network Evoluton Mechansm for Onlne Communtes Cahong Sun and Xaopng Yang School of Informaton, Renmn Unversty of Chna, Bejng 100872, P.R. Chna {chsun,yang}@ruc.edu.cn Abstract. Onlne

More information

Calculating the high frequency transmission line parameters of power cables

Calculating the high frequency transmission line parameters of power cables < ' Calculatng the hgh frequency transmsson lne parameters of power cables Authors: Dr. John Dcknson, Laboratory Servces Manager, N 0 RW E B Communcatons Mr. Peter J. Ncholson, Project Assgnment Manager,

More information

A Parallel Architecture for Stateful Intrusion Detection in High Traffic Networks

A Parallel Architecture for Stateful Intrusion Detection in High Traffic Networks A Parallel Archtecture for Stateful Intruson Detecton n Hgh Traffc Networks Mchele Colajann Mrco Marchett Dpartmento d Ingegnera dell Informazone Unversty of Modena {colajann, marchett.mrco}@unmore.t Abstract

More information

Many e-tailers providing attended home delivery, especially e-grocers, offer narrow delivery time slots to

Many e-tailers providing attended home delivery, especially e-grocers, offer narrow delivery time slots to Vol. 45, No. 3, August 2011, pp. 435 449 ssn 0041-1655 essn 1526-5447 11 4503 0435 do 10.1287/trsc.1100.0346 2011 INFORMS Tme Slot Management n Attended Home Delvery Nels Agatz Department of Decson and

More information

A hybrid global optimization algorithm based on parallel chaos optimization and outlook algorithm

A hybrid global optimization algorithm based on parallel chaos optimization and outlook algorithm Avalable onlne www.ocpr.com Journal of Chemcal and Pharmaceutcal Research, 2014, 6(7):1884-1889 Research Artcle ISSN : 0975-7384 CODEN(USA) : JCPRC5 A hybrd global optmzaton algorthm based on parallel

More information