Scalable Hierarchical Multitask Learning Algorithms for Conversion Optimization in Display Advertising


Amr Ahmed (Google), Abhimanyu Das (Microsoft Research), Alexander J. Smola (Carnegie Mellon University and Google)

ABSTRACT
Many estimation tasks come in groups and hierarchies of related problems. In this paper we propose a hierarchical model and a scalable algorithm to perform inference for multitask learning. It infers task correlation and subtask structure in a joint sparse setting. Implementation is achieved by a distributed subgradient oracle and the successive application of prox-operators pertaining to groups and subgroups of variables. We apply this algorithm to conversion optimization in display advertising. Experimental results on over 1TB of data with up to 1 billion observations and 1 million attributes show that the algorithm provides significantly better prediction accuracy while simultaneously being efficiently scalable via distributed parameter synchronization.

Categories and Subject Descriptors: G.3 [Mathematics of Computing]: Probability and Statistics; I.2.6 [Artificial Intelligence]: Learning

1. INTRODUCTION
In many cases data inference problems do not arise in isolation. That is, we usually encounter a range of related problems, and there is considerable benefit in solving them jointly. This insight has been exploited repeatedly and has led to algorithms commonly known as multitask learning techniques [4, 22, 12, 7, 18]. Applications, e.g. to massively multitasked spam filtering [20], show its practical importance. The key idea is that by solving related tasks we are able to learn more about an individual task. In this paper we study the problem of conversion maximization in display advertising. That is, we focus on maximizing the occurrence of commercially relevant actions such as purchases, account creation, mailing list signups, etc. This involves estimating a user's propensity to perform such actions and identifying generally susceptible populations of users.
The challenge here is that we have both a broad range of different advertisers and also a range of subtasks (views, clicks, conversions) that we wish to maximize. As is to be expected in computational advertising, the amount of data can be quite significant. Moreover, the data is not necessarily homogeneous. Tasks have wildly varying sizes, commensurate with the financial stake of the advertisers and the popularity of their product. Likewise, attributes are sparse and many occur only in a small number of contexts. This requires an effective inference approach. Our work builds on well-known multiple kernel learning [19] and collaborative filtering techniques, namely that of effectively imposing a hyperprior on the regularization term. In doing so it is possible to cast multitask learning as a nontrivial convex optimization problem. See e.g. [23] for details. This strategy is then combined with a hierarchical model over task- and subtask-specific parameters. Furthermore, we impose structured sparsity along the lines of [5]. To solve the problem in practice we rely on a distributed subgradient oracle. Load-balancing is achieved by using consistent hashing for task distribution over processors and distributed variable aggregation to mitigate the latency and task restarts otherwise required in Hadoop MapReduce.

(Work done while the authors were at Yahoo! Research. WSDM'14, February 24-28, 2014, New York, New York, USA. Copyright 2014 ACM.)
That is, we use the variable distribution of [13] for storage. Subsequently we invoke a sequence of prox operators [5] to synchronize efficiently between local and global penalties. To summarize, our contributions are the following:

- We formulate the joint conversion, click and unattributed-conversion modeling problem in behavioral targeting as a large-scale hierarchical multitask learning problem and show that the convex multitask learning approach of [23] can be adapted to this setting.
- We design an efficient distributed implementation of the above algorithm that scales to Terascale data.
- Using a real-world, web-scale display advertising targeting data set and two smaller public datasets, we show the ability of our algorithm to significantly improve on the baseline modeling performance obtained by traditional single-task inference systems.

1.1 Challenges in Conversion Maximization
Recent trends in behavioral targeting and display advertising emphasize the importance of commercially relevant actions. That is, rather than user clicks, advertisers aim to maximize the number of conversions they receive. Conversions stand for purchases, account creation, or any other relevant action that an advertiser sees as particularly desirable. To obtain good performance, publishers tend to instrument

their websites with embedded code which allows third parties to capture user transactions and generate user segments that are of high value for a particular advertiser [1]. These segments primarily contain users that are actually inclined to perform a transaction, as opposed to a casual visit to the web site through an accidental click. Conversions on an advertiser's web site are either attributed to their corresponding display ads based on advertiser-specific rules, such as the amount of time elapsed between the conversion time and the time that the ad was last shown to the user, or they are unattributed if they cannot be tied to a specific display ad. Past work [3] has shown the superiority of targeting platforms maximizing for attributed conversions over traditional solutions maximizing for clicks. For conversion-oriented behavioral targeting the traditional approach has been to only consider attributed conversions. The corresponding inference problem for each advertising campaign is then solved independently, for instance by fitting a Support Vector Machine or Logistic Regression model. This generates separate models based on user data for each campaign. However, a typical behavioral targeting platform optimizing for attributed conversions, henceforth simply referred to as conversions, faces two core issues:

- There are large volumes of user histories that need to be processed in a periodic fashion in order to perform inference over attributed conversions for a large number of ad campaigns. Processing the activities of billions of users on a daily basis imposes many challenges, such as how to build user profiles in an efficient way and how to optimize multiple campaigns at the same time.
- When optimizing for each campaign separately we are likely to do poorly for infrequent campaigns. We therefore need to design algorithms that can deal with sparseness of attributed conversions in many campaigns.
The absence of a sufficient amount of labeled data for the inference tasks creates a major bottleneck against achieving tangible targeting performance improvements.

1.2 Multitask Learning
For each advertising campaign, we can formulate several other related inference tasks apart from conversion modeling: we can attempt to infer the likelihood of a click (which is typically a prerequisite for conversion) and to model the likelihood of unattributed conversions (the latter helps to identify similar users). These different inference problems of each campaign are likely to be correlated. Hence modeling them jointly should improve estimation performance. Additionally, it is quite likely that there are significant correlations between inference tasks across different advertising campaigns. For example, if there exist advertising campaigns corresponding to different brands of cars, the conversion or click models for all these campaigns might be quite similar to each other. Thus, performing joint inference over all these potentially correlated tasks might lead to better performance than solving the inference tasks separately. This is the basic premise of multitask learning. The key difference to conventional multitask learning is that in our case there exists a hierarchy between tasks. That is, we expect that all tasks for a given campaign (e.g. selling car insurance for a particular company) have related specificity in terms of their user demographic. Hence it is only reasonable that the sets of features and preferences are shared between them. It is to be expected that joint feature selection should improve the performance of each estimator. Strictly speaking, we have two somewhat related goals: one is to do well on all tasks, i.e. click, conversion, and unattributed conversion estimation. This is a symmetric setting where the goal is to use the task correlations to simultaneously improve the prediction performance of all the tasks.
The other goal is to perform well on conversion prediction while using the remaining data as side information. This is an asymmetric setting. While both goals are rather related, they are subtly different in their performance criteria and in terms of the estimation problem. In this paper, we formulate two different hierarchical multitask models for these settings: a hierarchical model for the symmetric setting, and an attachment model for the asymmetric setting. In the hierarchical model, we first define an inter-campaign correlation matrix on a root-level set of feature weights for each campaign. This is then used to derive feature weights for its conversions, clicks and unattributed conversions locally. In the attachment model, the inter-campaign correlation matrix is applied directly on the feature weights for the conversion model of each campaign. There is ample literature covering the subject of multitask learning. However, in the context of our behavioral targeting problem, there are two objectives that a multitask learning algorithm should satisfy: it should be easily distributable and scale to thousands of campaigns and millions of features; and it should extend to a multi-level task hierarchy. This makes the setting rather nontrivial in terms of efficient inference.

1.3 Approach
In this paper, we use a convex formulation approach for multitask learning. Its basic idea is described in several variants, e.g. in the context of multitask learning [23], matrix inference [16] and multiple kernel learning [14]. Essentially, one imposes a penalty over the covariance matrices governing correlation between attribute vectors. This way we can ensure that primarily similar attribute sets and related attribute vectors are chosen. In terms of conversion maximization this means that we use an inter-campaign covariance matrix to model the relationships between the various campaigns. Moreover, intra-campaign covariance matrices are used to model the relationships between clicks, conversions and unattributed conversions of each campaign.
A matrix-variate normal prior is imposed on these covariance matrices. A joint optimization objective can be formulated for all the tasks by obtaining a maximum likelihood estimate of the covariance matrices and the per-task feature weights. As we will show later, this objective is convex. It can be solved in a distributed fashion using proximal subgradient methods, such as the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) [6]. For both models, we use a scalable alternating subspace descent method for simultaneous inference of both the task correlation matrices and the feature weights. A key tool for achieving scalability will be to use a cluster of machines as a distributed subgradient oracle [17]. Since iterative thresholding algorithms like FISTA require a significant number of gradient computations (e.g. [6] report 100 steps), our platform must preserve state and data locality between iterations. This makes it unsuitable for a naive Hadoop MapReduce implementation. Instead, we employ a consistent-hashing based synchronization algorithm.

Figure 1: Top: Standard multitask learning using a matrix-variate distribution. Observations X_c for campaign c receive labels Y_c. These are assigned using campaign-specific weight vectors w_c. The latter are exchangeable but not independent, hence jointly drawn from some distribution p(W|Ω). Bottom: Hierarchical multitask learning. After drawing task-specific parameters Z from an exchangeable but not independent distribution, we draw subtask-specific parameters w_cs using a joint parameter Θ and Z. The rest remains unchanged.

We apply our distributed multitask learning framework to the conversion modeling problem described in [1] and we show how our system can improve the AUC performance significantly when compared to individual conversion modeling of each ad campaign.

2. MULTITASK LEARNING
2.1 Notation
We now cast the problem of campaign-specific estimation as a multitask learning problem. That is, we treat each campaign c as a task. In each such case we observe covariates (patterns) x_i and our goal is to infer labels y_i. For simplicity we assume that x_i ∈ X = R^d consists of d-dimensional vectors and moreover that y_i ∈ Y are either binary, Y = {±1}, for classification, or real-valued, Y = R, for regression. At a later stage (in Section 3) we will assume that each campaign contains a number of subtasks s. The table below gives an overview of the symbols used:

X      domain of observations (usually X = R^d)
Y      domain of labels (usually Y = {±1} or Y = R)
c      campaign index (c ∈ {1 ... m})
s      sub-campaign index (s ∈ {1 ... n})
j      observation index (j ∈ {1 ... m_c} or j ∈ {1 ... m_cs})
x      observation (x_cj or x_csj)
X_c    set of observations for campaign c
y      label (y_cj or y_csj)
Y_c    set of labels for campaign c
w      parameter vectors (w_c or w_cs)
W      stacked parameter vectors W = {... w_c ...}
z_c    parameter vector for the top level of the hierarchy
Z      stacked parameter vectors Z = {... z_c ...}

Figure 1 captures the formal structure of the multitask learning problem.
To capture the interaction between covariates x_cj, campaigns c, and associated labels y_cj, e.g. whether a particular user converted on an ad in a particular campaign on a particular occasion, we consider the issue of estimating y|x,c for a large range of campaigns simultaneously. We denote by m the total number of campaigns and by m_c the number of observations for campaign c. Formally, we consider sets of covariates and labels indexed by a campaign c, denoted by X_c = {x_c1, ..., x_cm_c} ⊆ X and Y_c = {y_c1, ..., y_cm_c} ⊆ Y. Here each pair (x_cj, y_cj) is drawn from some distribution p(x, y|c) = p(y|x, c) p(x|c) of covariates and labels respectively. Finally, we denote by csi the combination of task, subtask, and coordinate, and by a dot the entire vector in the associated dimension. E.g. w_{c·i} denotes the vector over all subtasks s associated with c for coordinate i.

2.2 Objective
The inference problem is expressed either as one of risk minimization, whenever we want to find a classifier which makes a small number of mistakes, or as one of maximizing the data likelihood. In the latter case we want to find parameters W = {w_1, ..., w_m} which maximize the label likelihood

  p(Y|X, W) = ∏_{c=1}^m p(Y_c|X_c, w_c) = ∏_{c=1}^m ∏_{j=1}^{m_c} p(y_cj|x_cj, w_c)   (1)

Choices for p(y_cj|x_cj, w_c) are e.g.

  p(y_cj|x_cj, w_c) = (2πσ²)^{-1/2} exp( -(y_cj - ⟨x_cj, w_c⟩)² / (2σ²) )   (2)
  p(y_cj|x_cj, w_c) = 1 / (1 + exp(-y_cj ⟨x_cj, w_c⟩))   (3)

for regression and classification respectively. A naive maximization of the conditional response likelihood p(Y|X, W) leads to overfitting unless the model complexity is kept very small or a suitable prior is used. Multitask learning models aim to address this problem by imposing a suitable prior distribution p(W) on W which favors simple models and which exploits correlation between tasks. Consequently, instead of maximizing (1) one aims to find the Maximum-a-Posteriori (MAP) estimate of W via

  maximize_W  p(W) ∏_{c=1}^m p(Y_c|X_c, w_c)   (4)

The challenge is now to define models of p(W) that are both computationally tractable and statistically meaningful.
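The likelihood choices (2) and (3) translate directly into negative log-likelihood terms used throughout the rest of the paper. The following is a minimal NumPy sketch; the function names are ours, not the paper's:

```python
import numpy as np

def gaussian_nll(y, X, w, sigma=1.0):
    """Negative log-likelihood of (2): y_j ~ N(<x_j, w>, sigma^2)."""
    r = y - X @ w
    n = len(y)
    return 0.5 * n * np.log(2 * np.pi * sigma**2) + (r @ r) / (2 * sigma**2)

def logistic_nll(y, X, w):
    """Negative log-likelihood of (3): p(y|x,w) = 1/(1+exp(-y <x,w>)), y in {-1,+1}."""
    margins = y * (X @ w)
    # log(1 + exp(-margin)) computed stably via logaddexp
    return np.sum(np.logaddexp(0.0, -margins))
```

In the multitask objective, one such term is summed per campaign (and later per subtask), which is what makes the data-dependent part of the objective trivially separable across machines.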
This hierarchical modeling imperative leads to multitask learning.

2.3 Multitask Prior
Our working assumption is that W is drawn from a matrix-variate distribution in such a way as to exploit correlations between the tasks, such as assuming that the tasks are concentrated in a lower-dimensional subspace. This is a reasonable assumption since there is no inherent order in which the tasks are laid out. One option is to choose a normal distribution as follows:

  W ~ N(0, (1/d) Ω)  or equivalently  w_{·i} ~ N(0, Ω)   (5)

for all coordinates i. The log-likelihood of W given Ω is then, up to constants,

  log p(W|Ω) = -1/2 [ tr W Ω^{-1} W^⊤ + d log |Ω| ] + const.   (6)

It is straightforward to modify this by including a conjugate Wishart hyperprior on Ω. Unfortunately, the outcome is concave in W. An alternative is to replace the log-barrier arising from a conjugate prior on Ω by a trace constraint and a positive semidefiniteness constraint. That is, we replace log |Ω| by Ω ⪰ 0 and tr Ω = 1. This is used, e.g., in [23]. Such a modification leaves the eigenspace of the W-dependent part of the optimization problem unchanged. This leads to the following alternative:

  minimize_{W,Ω}  -∑_c log p(Y_c|X_c, w_c) + λ tr W Ω^{-1} W^⊤   (7a)
  subject to  Ω ⪰ 0 and tr Ω = 1   (7b)

The above formulation is convex in both W and Ω and can be solved using an efficient algorithm based on alternating subspace descent: for fixed Ω, minimize (7) with respect to W; subsequently, for fixed W, find the minimizer with respect to Ω. A simple constrained optimization argument shows that the latter is given by

  Ω̂ = (W^⊤ W)^{1/2} / tr (W^⊤ W)^{1/2}   (8)

This approach forms the baseline relative to which we will compare our proposed method.

3. HIERARCHICAL MULTITASK LEARNING
While the flat models presented in Section 2 can learn the correlation structure between tasks, they are not so easily amenable to distributed optimization because of the quadratic dependency between all the tasks. Fortunately, many large-scale multitask problems possess a hierarchical structure that allows us to decompose them into tasks and subtasks. For example, in display advertising each advertiser can be regarded as a task (a campaign) within which we can define three subtasks as follows:

- Conversion prediction: estimate if the user will convert, i.e. perform a commercially relevant action, on the current display ad.
- Click prediction: predict if the user will click on the currently displayed ad.
- Unattributed conversion: historical data of users who converted on previous advertisements of the advertiser.

We use cs to index the subtask. That is, rather than c we now use the tuple cs to index task and associated subtask, such as (Coca Cola, clicks).
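The closed-form update (8) only needs a symmetric matrix square root of the task Gram matrix, followed by trace normalization. A minimal NumPy sketch of this step (function name ours):

```python
import numpy as np

def omega_update(W):
    """Closed-form minimizer (8): Omega = (W^T W)^{1/2} / tr (W^T W)^{1/2}.

    W is d x m with one column per task, so W^T W is the m x m task Gram matrix.
    """
    G = W.T @ W
    # symmetric PSD square root via eigendecomposition
    lam, U = np.linalg.eigh(G)
    S = (U * np.sqrt(np.clip(lam, 0.0, None))) @ U.T
    return S / np.trace(S)
```

Normalizing by the trace enforces the constraint tr Ω = 1 in (7b); positive semidefiniteness is automatic since S is a PSD square root.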
All remaining notation is unchanged relative to the previous section. In a nutshell we have two options for dealing with the hierarchical structure: firstly, we estimate the joint model for all tasks, subtasks and all campaigns. A second strategy is to solve the model for the primary subtask of conversion estimation exclusively and to use the associated (secondary) subtasks only as side information. We will refer to the former as the hierarchical model and to the latter as the attachment model. The key difference is in the following assumption:

Hierarchical Model: We assume that for each task group c there exists some parameter vector z_c, with Z = {z_1, ..., z_m}, that specifies preferences per task:

  Z ~ N(0, (1/d) Ω)  or equivalently  z_{·i} ~ N(0, Ω)   (9)

Moreover, within each task, the distribution over subtasks is given by

  w_{c·i} ~ N(1 z_{ci}, Θ_c).   (10)

This assumes that correlations within subgroups are decoupled.

Attachment Model: Denote by s = 1 the primary subtask (conversion estimation). Instead of using z_c as an intermediary we couple the models directly via w_{c1} and use a hierarchical model on the remaining parameters. This amounts to w_{·1i} ~ N(0, Ω) and w_{c·i} ~ N(1 w_{c1i}, Θ_c) for s > 1.

Figure 2: Top: Hierarchical dependency structure over parameter vectors for multitask learning. The intermediate parameter vector z_c encapsulates commonalities per task. Bottom: Attachment model. Here the conversion-specific parameters are directly coupled. For simplicity of the diagram we omitted Θ in both cases.

The diagram in Figure 2 describes the difference between both approaches for a rather simplistic structure of 4 tasks (in reality we may have millions of such tasks). As previously discussed in Section 2, we again resort to a reformulation that uses a trace constraint and positive semidefiniteness rather than the log-barrier to restrict Ω and Θ.
That is, instead of

  -log p(W, Z|Ω, Θ)   (11)
    = 1/2 ∑_c [ ∑_i (w_{c·i} - 1 z_{ci})^⊤ Θ_c^{-1} (w_{c·i} - 1 z_{ci}) + m_s log |Θ_c| ]
    + 1/2 [ ∑_i z_{·i}^⊤ Ω^{-1} z_{·i} + m log |Ω| ] + const.

for the hierarchical model, and an analogous expression for the attachment model, we use the following objectives L(W, Z, Ω, Θ):

  L_hier(W, Z, Ω, Θ) = 1/2 ∑_c ∑_i (w_{c·i} - 1 z_{ci})^⊤ Θ_c^{-1} (w_{c·i} - 1 z_{ci}) + 1/2 ∑_i z_{·i}^⊤ Ω^{-1} z_{·i}   (12)
  subject to Θ_c ⪰ 0 and tr Θ_c = 1 and Ω ⪰ 0 and tr Ω = 1

for the hierarchical model. Moreover, for the attachment model:

  L_attach(W, Ω, Θ) = 1/2 ∑_c ∑_i (w_{c·i} - 1 w_{c1i})^⊤ Θ_c^{-1} (w_{c·i} - 1 w_{c1i}) + 1/2 ∑_i w_{·1i}^⊤ Ω^{-1} w_{·1i}   (13)

subject to the same constraints as for L_hier. The only real difference is that we eliminated z and instead attach the model to w_{c1} directly. In either case this detaches the subtasks from the problem of joint task inference.

3.1 Structured Sparsity
A second aspect of multitask learning is to use structured sparsity [5] to select relevant variables for an entire block of terms jointly, rather than eliminating terms for each task individually. This is achieved by adding a mixed norm on the parameters W and Z to the optimization problem. We need some more notation first: the ℓ_p norm of a vector x ∈ R^d is ‖x‖_p^p := ∑_i |x_i|^p for p < ∞, and ‖x‖_∞ := max_i |x_i|. Moreover, the mixed norm of a matrix X, where we apply sparsity row-wise, is defined for p, q ≥ 1 via

  ‖X‖_{p,q} := ‖ (‖X_1‖_p, ..., ‖X_d‖_p) ‖_q   (14)

where X_i denotes the i-th row of X. Of particular interest is the ‖X‖_{2,1} norm, which attempts to eliminate entire rows of X at a time. Finally, we use the abbreviation ‖X‖_1 := ‖X‖_{1,1} to denote the sum over absolute values in X. This leads to the following sparsity penalties for the hierarchical and attachment models respectively:

  S_hier(W, Z) = λ_1 ‖Z‖_1 + λ_2 ‖Z‖_{2,1} + λ_1 ‖W‖_1 + λ_2 ‖W‖_{2,1}   (15)
  S_attach(W) = λ_1 ‖W‖_1 + λ_2 ‖W_1‖_{2,1} + λ_2 ‖W‖_{2,1}   (16)

The coefficients λ_1 and λ_2 govern the trade-off between generic sparsity and group sparsity. That is, for λ_2 = 0 there will be no correlation in sparsity patterns beyond what is obtained from the data. For λ_1 = 0 we can assume that whenever any given w_csi ≠ 0, all related w_cs'i will not vanish either.

3.2 Optimization Problems
We conclude this section by stating the two optimization problems that we will solve subsequently.
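The mixed norm (14) and the penalties (15)-(16) are simple to evaluate; a small NumPy sketch (names ours, not the paper's):

```python
import numpy as np

def mixed_norm(X, p=2, q=1):
    """||X||_{p,q} as in (14): the q-norm of the vector of row-wise p-norms."""
    row_norms = np.linalg.norm(X, ord=p, axis=1)
    return np.linalg.norm(row_norms, ord=q)

def sparsity_penalty(W, lam1, lam2):
    """lam1 * ||W||_1 + lam2 * ||W||_{2,1}, the recurring block of (15)-(16)."""
    return lam1 * np.abs(W).sum() + lam2 * mixed_norm(W, p=2, q=1)
```

With rows indexed by attributes, the ‖·‖_{2,1} term drives whole attribute rows to zero at once, which is exactly the joint feature selection effect described above.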
The key ingredients are a likelihood function log p(Y|X, W), which depends on the specific problem to solve, a simplified multitask learning penalty as defined by L_hier(W, Z, Ω, Θ) and L_attach(W, Ω, Θ) respectively, and a sparsity penalty as in S_hier(W, Z) and S_attach(W). We have the following for the hierarchical multitask model:

  minimize_{W,Z,Ω,Θ}  -∑_csj log p(y_csj|x_csj, w_cs) + 1/2 ∑_i z_{·i}^⊤ Ω^{-1} z_{·i}
    + 1/2 ∑_c ∑_i (w_{c·i} - 1 z_{ci})^⊤ Θ_c^{-1} (w_{c·i} - 1 z_{ci})
    + λ_1 ‖Z‖_1 + λ_2 ‖Z‖_{2,1} + λ_1 ‖W‖_1 + λ_2 ‖W‖_{2,1}   (17a)
  subject to Ω, Θ_c ⪰ 0 and tr Ω = tr Θ_c = 1.   (17b)

Moreover, the attachment multitask model yields:

  minimize_{W,Ω,Θ}  -∑_csj log p(y_csj|x_csj, w_cs) + 1/2 ∑_i w_{·1i}^⊤ Ω^{-1} w_{·1i}
    + 1/2 ∑_c ∑_i (w_{c·i} - 1 w_{c1i})^⊤ Θ_c^{-1} (w_{c·i} - 1 w_{c1i})
    + λ_1 ‖W‖_1 + λ_2 ‖W_1‖_{2,1} + λ_2 ‖W‖_{2,1}   (18a)
  subject to Ω, Θ_c ⪰ 0 and tr Ω = tr Θ_c = 1.   (18b)

4. INFERENCE
The optimization problems (17) and (18) are jointly convex in (W, Z, Ω, Θ) and (W, Ω, Θ) respectively. For practical optimization we resort to a Gauss-Southwell [15] approach of minimizing blocks of parameters at a time. In practice this means that we alternate between minimizing with respect to (W, Z) and (Ω, Θ). This is known to converge to the globally optimal solution (albeit slowly on occasion). Issues of problem distribution and parallelization will be discussed in the next section.

4.1 Covariance Updates
Assume that we are given W, Z. In this case we may find optimal values for the psd matrices Ω, Θ using the derivation in (8) as follows:

  Ω = (tr O)^{-1} O   (19)
  where O = (Z^⊤ Z)^{1/2}                       (hierarchical)
        O = [ ∑_i w_{·1i} w_{·1i}^⊤ ]^{1/2}     (attachment)

Likewise, for Θ_c we have the updates

  Θ_c = (tr T_c)^{-1} T_c   (20)
  where T_c = [ ∑_i (w_{c·i} - 1 z_{ci})(w_{c·i} - 1 z_{ci})^⊤ ]^{1/2}   (hierarchical)
        T_c = [ ∑_i w_{c·i} w_{c·i}^⊤ ]^{1/2}                            (attachment)

This means that we can compute T_c with access only to the subtask-specific parameters of a given campaign c. Hence, as long as it is possible to have all such data available on a single machine, we need not communicate the lower level of the hierarchy outside the machine.
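The per-campaign update (20) mirrors (8) and touches only that campaign's parameters, which is what makes it local to one worker. A sketch of the hierarchical variant (names and array layout are our illustrative choices):

```python
import numpy as np

def psd_sqrt(M):
    """Symmetric PSD square root via eigendecomposition."""
    lam, U = np.linalg.eigh(M)
    return (U * np.sqrt(np.clip(lam, 0.0, None))) @ U.T

def theta_update_hier(W_c, z_c):
    """Per-campaign update (20), hierarchical variant.

    W_c: (n_subtasks x d) subtask weight rows for campaign c; z_c: (d,) task-level weights.
    T_c = [sum_i (w_{c.i} - 1 z_{ci})(w_{c.i} - 1 z_{ci})^T]^{1/2};  Theta_c = T_c / tr T_c.
    """
    D = W_c - z_c[None, :]   # column i of D is w_{c.i} - 1 z_{ci}
    T = psd_sqrt(D @ D.T)    # D D^T accumulates the outer products over coordinates i
    return T / np.trace(T)
```

Since W_c and z_c for one campaign fit on one machine, this runs without any cross-worker communication, as noted in the text.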

4.2 Optimization with Sparsity Penalty
Next we need to discuss update steps in terms of W and Z. Recall that we imposed a mixed-norm penalty on both terms such that we obtain group sparsity. Our strategy borrows from [5] and [10, Proposition 1]. Recall the structure of the penalties imposed by ‖·‖_{2,1} and ‖·‖_1. They constitute a hierarchy over nonzero terms in W and Z respectively: the (2,1)-norm attempts to zero out the entire set of contributions for a given coordinate, and the 1-norm ensures that even if we use an attribute, we only use it sparingly. Given Ω, Θ the remainder of the problem is a convex unconstrained optimization problem. Among the algorithms recently proposed are those with a structure resembling FISTA (Fast Iterative Shrinkage-Thresholding Algorithm) [6]. In it one interleaves a gradient descent step with respect to the convex differentiable part of the objective with a thresholding step with respect to the sparsity penalty. That is, for the problem

  minimize_a  f(a) + λ Ω[a]   (21)

one performs the following steps (after initializing a_0):

  b_{t+1} := a_t - η_t ∇_a f(a_t)   (22)
  a_{t+1} := argmin_a  (1/(2η_t)) ‖a - b_{t+1}‖² + λ Ω[a]   (23)

Here (22) is essentially a gradient descent step in f. The step (23) is commonly referred to as a prox-operator. The step size η_t is chosen such that η_t^{-1} majorizes the Lipschitz constant of ∇_a f(a). We discuss computing gradients with respect to the objective in Section 4.3 for the hierarchical model (the attachment model follows similarly). In this context we mean by F[Z, W] either the first two lines of (17) or of (18), with Ω and Θ fixed at this point. For now note that in our case Ω[a] decomposes into penalties applied per task. That is

  Ω[W, Z] = ∑_c [ λ_1 ( ‖z_c‖_1 + ∑_s ‖w_cs‖_1 ) + λ_2 ( ‖z_c‖_2 + ∑_s ‖w_cs‖_2 ) ]   (24)

Solving (23) can thus be carried out for each task and for each z_c and w_cs individually; hence it is amenable to easy distribution.
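Because the penalty in (24) is a sum of an ℓ_1 and an ℓ_2 term per block, the prox step (23) reduces to soft-thresholding followed by group shrinkage. A minimal sketch (names ours), following [10, Proposition 1]:

```python
import numpy as np

def prox_l1(v, t):
    """Soft-thresholding: prox of t * ||.||_1, applied coordinate-wise."""
    return np.sign(v) * np.maximum(0.0, np.abs(v) - t)

def prox_l2(v, t):
    """Group shrinkage: prox of t * ||.||_2; zeroes the whole vector when it is small."""
    n = np.linalg.norm(v)
    return np.zeros_like(v) if n == 0.0 else (v / n) * max(0.0, n - t)

def prox_sparse_group(v, eta, lam1, lam2):
    """Exact prox of eta * (lam1 ||v||_1 + lam2 ||v||_2) via the composition
    prox_l2(prox_l1(.)), which is exact for this hierarchical penalty."""
    return prox_l2(prox_l1(v, eta * lam1), eta * lam2)
```

Applied after the gradient step (22), this realizes one full iteration of the FISTA-style scheme for a single block z_c or w_cs.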
Using [10, Proposition 1] one can see that performing successive prox operations, first with respect to the ℓ_1 norm and subsequently with respect to the ℓ_2 norm, leads to an exact solution of (23). For instance, for z this means that we perform the following steps:

  z ← z - η_t ∇_z F[Z, W]   (25)
  z_i ← sgn(z_i) max(0, |z_i| - η_t λ_1)   (26)

and subsequently we threshold the entire vector via

  z ← (z / ‖z‖_2) max(0, ‖z‖_2 - η_t λ_2)   (27)

In other words, first we perform gradient descent. Then all coefficients that are individually too small are eliminated and the remainder is shrunk. Finally, we perform shrinkage of the remainder in the direction of its unit vector. The objective is either that of the hierarchical or of the attachment model. Updates with respect to W are entirely analogous and therefore omitted.

Figure 3: The parameter matrices W and Z are of size R^{d×m}. For gradient computation we need access to all parameters for a given task on a given machine (tasks are processed independently), hence the row-wise split. Subsequently, to perform shrinkage over attributes, we need all parameters pertaining to a feature (for all tasks) on a given machine. Load balancing for both of these layouts is achieved by consistent hashing.

4.3 Gradients
We complete our overview of optimization by discussing the gradient in terms of W and Z. As before, we limit ourselves to a discussion of the hierarchical model of (17). Since the reasoning required for (18) is essentially identical, we omit the details. Straightforward calculation yields

  ∇_{z_ci} F[Z, W] = [Ω^{-1} z_{·i}]_c + 1^⊤ Θ_c^{-1} (1 z_{ci} - w_{c·i})   (28)
  ∇_{w_csi} F[Z, W] = -∑_j ∂_{w_csi} log p(y_csj|x_csj, w_cs) + [Θ_c^{-1} (w_{c·i} - 1 z_{ci})]_s   (29)

As can be seen, all gradients decompose in terms of tasks and subtasks respectively. We will exploit this for distributed optimization. The exact form of the gradient of log p(y_csj|x_csj, w_cs) is straightforward to compute for both the regression and classification problems, given the form of the conditional probability in (2) and (3) respectively.

5. DISTRIBUTED OPTIMIZATION
We now discuss how to implement a distributed optimization algorithm efficiently on a cluster of commodity workstations. As discussed previously, invoking steps (22) and (23), and updating Ω and Θ, requires the following operations:

1. Compute partial subgradients of F[Z, W] for all campaigns with respect to Z and W.
2. Aggregate subgradients obtained from all instances and apply them to the model parameters.
3. Distribute coordinates (or subsets thereof) of the subgradients (or rather updated coordinates) to clients for application of the prox operator.
4. Invoke the prox operator.
5. Redistribute the results to the machines holding the campaign-specific data.

Since this is an iterative procedure with two barriers per iteration (send subgradients, return values), this sounds as if it were a good fit for MapReduce. Unfortunately, this approach suffers from inefficiencies inherent in the Hadoop implementation of MapReduce: context in the mappers is not

preserved between iterations. Moreover, Hadoop communicates primarily via file I/O. Since the proposed algorithm can take tens of iterations and since we need to communicate parameters repeatedly, this means a significant waste of resources through repeatedly having to re-initialize the state of the mappers. Hence we resort to a method discussed in [13, 2], namely to allocate the machines using Hadoop and then to establish an overlay communication network.

Algorithm 1 Distributed Optimization
 1: for all machines i = 1 ... p in parallel do
 2:   read data blocks from disk
 3:   for all campaigns c with m(c) = i do
 4:     compute subgradient g_c
 5:   end for
 6:   write g_c to the (key,value) store according to m(i)
 7: end for
 8: reach a barrier
 9: for all machines i = 1 ... p in parallel do
10:   read g from the (key,value) store according to m(i)
11:   for all coordinates j with m(j) = i do
12:     solve the prox operator
13:   end for
14:   write w to the (key,value) store according to m(c)
15:   compute the contribution to the sufficient statistics of Ω and write it back to shared memory
16: end for
17: reach a barrier
18: read the sufficient statistics of Ω and compute its new value

5.1 Data and Task Distribution
In the following we assume that we have p machines to process data. Recall that d denotes the number of attributes, i.e. x_csi ∈ R^d, and that m denotes the number of campaigns. We use randomized load-balancing to determine which machine receives which portion of the data, in both the data-bound and the parameter-bound parts of the optimization procedure. This is achieved, e.g., by consistent hashing [11]:

  m(c) = argmin_{m∈M} h(m, c)  and  m(i) = argmin_{m∈M} h(m, i)   (30)

to assign machines from a machine pool M to campaigns c and coordinates i respectively. Finally, data exchange is carried out in the form of a distributed (key,value) store. For reasons of practicality we used memcached as our reference implementation. This follows the design pattern of [21, 13] and avoids file I/O for synchronization. Such a strategy is much more efficient than repeated invocations of Hadoop MapReduce.
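The assignment rule (30) can be sketched in a few lines; the hash function and machine names below are illustrative, not the paper's implementation:

```python
import hashlib

def h(machine: str, key: str) -> int:
    """Deterministic hash of a (machine, key) pair."""
    return int(hashlib.md5(f"{machine}:{key}".encode()).hexdigest(), 16)

def assign(key: str, machines: list) -> str:
    """Consistent-hashing assignment (30): m(key) = argmin_{m in M} h(m, key)."""
    return min(machines, key=lambda m: h(m, key))

pool = ["worker0", "worker1", "worker2"]
owner = assign("campaign_42", pool)   # machine responsible for this campaign
```

A useful property of this argmin formulation is that removing one machine from the pool only reassigns the keys that machine owned; all other assignments are unchanged, which keeps data movement minimal when the pool changes.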
5.2 Distributed Subgradient Oracle
By design, the subgradients of F[Z, W] and F[W] decompose into terms that are easily computable in a campaign-specific manner (terms related to the negative log-likelihood) and terms that are easily computable in a coordinate-specific manner (the penalties in terms of Ω, Θ and the sparsity penalties). Furthermore, only the former requires direct access to data, whereas the latter requires access to a given coordinate across all tasks. This means that we can compute gradients in two stages: a pass over the data, performed by the workers holding the data; a reshuffle of parameters; and a finalizing pass (plus prox step) in a coordinate-specific fashion.

Likelihood gradients: Since data is partitioned according to tasks, subgradients with respect to w_cs are easily computed via

  g_cs = -∑_{j=1}^{n_cs} ∇_{w_cs} log p(y_csj|x_csj, w_cs).   (31)

Next we compute gradients with respect to 1/2 ∑_s w_cs^⊤ Θ_c^{-1} w_cs, i.e. we add Θ_c^{-1} w_c to g_cs. Analogous reasoning holds for ∑_s (w_cs - z_c)^⊤ Θ_c^{-1} (w_cs - z_c). These gradients are then redistributed according to Figure 3, such that all g_csi for a given coordinate i (ranging over all tasks and subtasks) are available on the same machine.

Multitask gradients: At this point we can compute the coordinate-specific parts arising from the Ω-dependent terms on a per-coordinate basis. For this purpose we only need z_{·i} or w_{·1i}, depending on whether we chose the hierarchical or attachment model respectively. We only need to read the weights corresponding to non-zero entries in Ω^{-1}.

5.3 Distributed Prox Operator and Covariance Estimation
The final step required is to solve the prox operator related to the ‖·‖_1 and ‖·‖_{2,1} norms, as these enforce sparsity. Whenever we have a fully hierarchical setting, Proposition 1 of [10] applies and we can simply perform prox steps bottom-up in the process. Whenever this assumption is not satisfied, we may still iterate the prox operator to obtain a suboptimal solution.
This suffices as a descent step, since optimization in Z and W is just a subroutine in the overall optimization scenario involving Θ and Ω. Note that the prox operator can be carried out in linear time: we only require computing norms of vectors and rescaling them. The data exchange is completely analogous to the gradient computation, except that we now work on attributes rather than campaigns. After the prox operation we redistribute parameters back into the (key,value) store. As before, this requires a barrier to ensure that up-to-date values are available on all workers for another pass through the data. Similar to the gradient computation phase, the read and write steps can be performed in parallel.

Finally, we note that estimating Θ can be done locally in each worker; however, the sufficient statistics required to compute Ω (see (19)) are distributed on a per-attribute basis. Thus we overlay this step with the prox-operator step. Each worker computes its contribution to the sufficient statistics using its assigned attributes. For example, in the hierarchical model this reduces to computing a C × C matrix Σ_{i: m(i)=worker} z_i z_i^T. After reaching a barrier, worker 0 then reads those partial sums, computes the new value for Ω using (8), and writes Ω back to shared memory to be read by each worker for the next iteration. Alternatively, each worker can read the sufficient statistics of Ω and compute the new value deterministically. Moreover, instead of using (8), we could use the graphical lasso estimation of [9] to obtain a sparse estimate of the inverse covariance Ω^{-1} from its sufficient statistics. Such sparsity is desirable in distributed settings because it minimizes parameter movement when computing the multitask gradient, which depends only on the non-zero elements of Ω^{-1}.
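To make the two stages of the oracle concrete, the sketch below combines a per-campaign likelihood subgradient in the spirit of (31), here assuming a logistic model p(y | x, w) = 1/(1 + exp(−y⟨w, x⟩)) with y ∈ {−1, +1}, with the linear-time prox operators for the ℓ_1 and ℓ_{2,1} penalties. The names and the choice of likelihood are ours; the paper's oracle abstracts over the loss.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def likelihood_subgradient(w, X, y):
    """Per-campaign gradient of sum_j log p(y_j | x_j, w) for a logistic
    model, computed on the worker holding this campaign's data (cf. Eq. 31)."""
    margins = y * (X @ w)
    return X.T @ ((1.0 - sigmoid(margins)) * y)

def prox_l1(v, lam):
    """Soft thresholding: argmin_w 0.5 ||w - v||^2 + lam ||w||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def prox_group(v, lam):
    """Shrinkage for one group of the 2,1-norm:
    argmin_w 0.5 ||w - v||^2 + lam ||w||_2; runs in linear time."""
    norm = np.linalg.norm(v)
    return np.zeros_like(v) if norm <= lam else (1.0 - lam / norm) * v
```

In a fully hierarchical setting the group prox is applied bottom-up, child groups before their parents, which by Proposition 1 of [10] yields the exact prox of the combined penalty.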

Figure 4: School Data: R² performance for various multitask learning algorithms. Note the faster convergence and better performance of ATT-MTRL.

6. EXPERIMENTS ON PUBLIC DATA

To establish the efficacy of our approach we report results on two public datasets and one proprietary dataset. We make this choice since we are unable to share proprietary and financially relevant data outside Yahoo, yet at the same time we wish to provide the reader with some means of gaining insight into the workings of the proposed algorithm. In other words, we show that the proposed algorithm improves on the state of the art and simultaneously that it scales to substantial problem sizes. As public datasets we choose two standard ones: estimation of examination scores for students attending secondary schools in London, and multi-task classification of the 20-newsgroup dataset.

The algorithms we compare are the Hierarchical MTL (HIE-MTRL) and Attachment MTL (ATT-MTRL) algorithms, along with two baselines. The first is the Single-task Learning (STL) algorithm, which does not use multitask learning and optimizes all tasks independently. The second baseline is the Flat MTL (F-MTRL) algorithm of [23], which uses a matrix-variate normal prior on the task correlation matrix. This algorithm performs multitask learning, but it does not account for the hierarchical task and subtask structure: it flattens the task hierarchy, treats each (task, subtask) pair as an individual task, and learns a joint covariance structure.

6.1 Student score estimation

This dataset has been used widely for studying multitask regression¹. It consists of the exam scores of 15,362 students from 139 secondary schools in London during 1985, 1986 and 1987. Originally, the input consists of the year of the exam, four school-specific and three student-specific attributes. The goal is to estimate the exam scores for the students.
Several papers [23, 4] evaluate multitask learning by treating the school IDs as attributes, but one could arguably treat this dataset as specifying a hierarchical task/subtask structure, where the school IDs refer to tasks and the exam years for each school correspond to the subtasks of that school. Thus, there are a total of 139 tasks, with up to 3 subtasks for each task. We note that several tasks have only 2 subtasks in the dataset. We replace each categorical attribute with one binary variable for each possible attribute value as in [4], but remove the attributes corresponding to the exam years. As a result of this preprocessing, we have a total of 24 input attributes. We use a 66/34 split of the dataset into training instances and test instances, and report the average (over all years and schools) performance of our algorithms on the test set.

Figure 5: 20-newsgroup MTL: AUC performance for various multitask learning algorithms. The hierarchical MTL algorithm starts off better than the flat algorithm and consistently outperforms both single task and flat multitask learning.

For our performance measure, we use the normalized inner product between the input score vector and the predicted score vector. This measure is proportional to the squared multiple correlation coefficient R², a normalized version of the regression error, defined as

    R² := 1 − Σ_i (y_i − f_i)² / Var[y].    (32)

That is, it is the ratio between explained variance and total variance. See e.g. [8] for further details. For F-MTRL, HIE-MTRL and ATT-MTRL we use 5-fold cross validation to determine the optimal values of the regularization constants and the learning rate. Figure 4 plots the R² performance of the algorithms as optimization progresses. As can be seen in the graph, and as also reported by [23, 4], multitask learning provides a significant performance improvement: F-MTRL improves over the baseline by around 5%.
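Reading Var[y] in (32) as the total sum of squares Σ_i (y_i − ȳ)², the measure can be computed as follows (a standard sketch, not the paper's code):

```python
import numpy as np

def r2_score(y, f):
    """R^2 = 1 - sum_i (y_i - f_i)^2 / sum_i (y_i - mean(y))^2, cf. Eq. (32):
    one minus the ratio of residual to total variance."""
    y, f = np.asarray(y, float), np.asarray(f, float)
    ss_res = np.sum((y - f) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

scores = np.array([60.0, 70.0, 80.0, 90.0])
perfect = r2_score(scores, scores)   # 1.0 for a perfect prediction
```

Predicting the mean score for every student gives R² = 0, and a model can score below zero if it does worse than that constant baseline.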
However, by using the task-subtask hierarchy, our HIE-MTRL and ATT-MTRL obtain a further improvement over the F-MTRL score of around 0.71. The performance of ATT-MTRL was slightly better than that of HIE-MTRL on this dataset.

6.2 Multi-Task Classification

The task at hand is multi-task classification of the 20-newsgroup dataset². The goal here is to predict the newsgroup of a given post. The 20 newsgroups are arranged into a two-level hierarchy. The first level comprises 5 categories:

² jason/20newsgroups/

politics, religion, recreational, science, and computers. Each category has 2–5 subcategories in the second level. We map categories to tasks and subcategories to subtasks. The dataset comprises 18k documents and we followed the standard test/train split. We removed stop words and words appearing fewer than 10 times. Five-fold cross validation is used to determine the values of the regularization parameters for all models. We measure classification accuracy using the AUC measure.

Since in this dataset we do not have the notion of an anchor task, we only use the symmetric HIE-MTRL formulation. In Figure 5 we compare the performance of HIE-MTRL against the state-of-the-art algorithm of [23], F-MTRL, and against the single-task baseline STL. As evident from the figure, HIE-MTRL improves over the F-MTRL baseline by around 2 percentage points, and the improvement is statistically significant. Moreover, HIE-MTRL outperforms all other competing algorithms in terms of speed of convergence.

To see the effect of varying the training set size, we select different fractions of the data for each task to form the training set while keeping the test set fixed. As can be seen in Table 1, HIE-MTRL outperforms all other competing algorithms for a range of different sample sizes. Moreover, the improvement of HIE-MTRL over competing algorithms is more apparent when the training data size is small (a 5% improvement over the F-MTRL baseline using a 20% fraction of the training data).

Table 1: 20-newsgroup analysis: AUC performance for single task (STL) and multitask (F-MTRL, HIE-MTRL) algorithms on varying fractions of the data, from 20% to 100%.

7. OPTIMIZING DISPLAY ADVERTISING

7.1 Data

We collected 4 weeks of advertising data, i.e. impressions, clicks, and conversions, for a total of 1,468 advertising campaigns.³ Each campaign is treated as a separate targeting task. 66% of the data is used for training, while the remaining 34% is used for scoring.
The train/test split is performed using a reference time stamp: impressions before that time stamp are used for training, and impressions afterwards for testing. Since the user profiles span 56 days of user history, each training/scoring example is preceded by at least 4 weeks of user events. This benchmark data set enables us to perform rigorous offline experiments. We count users based on the unique number of browser cookies; the data spans 56 days and 1,468 campaigns, and amounts to over 1 TB in total.

We study the performance of our techniques compared to the baseline system developed in [3]. We mainly compare modeling performance in terms of the area under the ROC curve (AUC). Unless otherwise specified, all metrics are measured as the conversion-weighted average of AUC across all campaigns in the benchmark set; we denote this as Weighted AUC.

³ We note here that data from users that opted out of behavioral targeting were not collected.

We represent each user using features from both active and passive observations. Passive observations include viewing ads and visiting pages, in which an action is not specifically required upon seeing the page. Active observations include issuing search queries and clicking ads, in which users actually perform an action on the page. Each advertising campaign has three subtasks:

Predicting conversions: This subtask contains data that shows whether users converted on a given campaign, i.e. whether they performed an advertiser-specified action such as purchasing a product or filling a form.

Predicting clicks: This subtask contains data that shows whether users clicked on the ad of this campaign or not.

Prediction on auxiliary (unattributed conversion) data: This subtask contains data that shows whether users converted on historic data on related campaigns of the same advertiser. This data is supplied by the advertiser.
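The Weighted AUC metric described above can be sketched as follows. This is our illustrative implementation (rank-based AUC, assuming no tied scores), with each campaign weighted by its number of conversions:

```python
import numpy as np

def auc(scores, labels):
    """AUC as the Wilcoxon rank statistic: the probability that a random
    positive example outscores a random negative one (assumes no ties)."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def weighted_auc(campaigns):
    """Conversion-weighted average of per-campaign AUC.
    campaigns: list of (scores, labels) pairs, labels in {0, 1}."""
    aucs = [auc(s, l) for s, l in campaigns]
    conversions = [int(np.sum(np.asarray(l) == 1)) for _, l in campaigns]
    return float(np.average(aucs, weights=conversions))
```

Weighting by conversions makes the metric reflect performance where it matters commercially: a campaign with many conversions contributes proportionally more than one with a handful.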
We define the feature weight for a given user-(sub)campaign example to be the number of days (before showing the user the campaign ad) in which the feature appears. Our platform experiences a large variance of feature weights across our feature types, making it hard to set a single count threshold below which we consider a feature to be irrelevant. We thus rely on the learning algorithm to perform joint conversion optimization and feature selection.

7.2 Results

All experiments reported in this section were performed using 300 machines. We assess the performance both in terms of AUC accuracy and scalability of the algorithms. The attachment multitask learning algorithm (ATT-MTRL) significantly outperforms flat multitask and single task learning. Moreover, the results for hierarchical multitask learning (HIE-MTRL) were only slightly inferior to ATT-MTRL, so we omit them for space reasons. This finding is consistent with our findings in Section 6.1. Note that ATT-MTRL also performs multitask feature selection, which is essential here due to the large feature space. We compare our performance with the baseline Single-task Learning, which optimizes all tasks and subtasks separately. We omit a comparison with F-MTRL for this task, since the flat MTL requires flattening the task-subtask structure (≈2k tasks), which results in massive weight vector movements across machines and as such does not scale to this dataset (though in Table 4 we show the effect of introducing task and subtask covariance on the overall performance). The parameters for all models were tuned on a validation set.

In Table 2 we report the overall performance of the model against the baseline. As we can see, our model clearly outperforms the baseline, and all improvements of our models over the baselines are statistically significant. Note that the task of conversion prediction is very difficult since positive examples are very rare. Secondly, we quantify the effect of feature selection.
For this purpose we select the top 10k, 30k and 50k features (using a mutual-information measure) and use them in the STL. For comparison we run ATT-MTRL using conservative

(λ₁ = 0.4, λ₂ = 10) and aggressive (λ₁ = 0.4, λ₂ = 25) feature selection parameters. The results in Table 3 are reported in terms of the weighted average AUC measure.

Finally, Table 4 shows the contributions of the various components of the ATT-MTRL algorithm to the learning performance. Ω refers to the task-correlation regularization, Θ refers to the sub-task correlation regularization, and L1 and L12 refer to per-campaign and multitask feature selection respectively. As seen from the table, using multitask feature selection (L12) leads to only a marginal improvement over single task feature selection (L1). However, adding the multitask learning components leads to a significant improvement over using just L1 and L12 regularization. These results clearly show the importance of leveraging cross-campaign and cross-campaign-subtask information to improve the performance of campaigns with very few conversions, as opposed to the baseline techniques.

Table 2: Attachment multitask performance: AUC of STL and ATT-MTRL on all subtasks, conversions, auxiliary (unattributed) conversions, and clicks.

Table 3: Feature selection effectiveness: conversion AUC and number of selected features for STL with ℓ2 regularization and the top 10,000, 30,000 and 50,000 features, and for ATT-MTRL with aggressive and conservative feature selection.

Table 4: Ablation study for ATT-MTRL: AUC on conversions and on all sub-tasks for L1, L1+L12, L1+L12+Θ, and L1+L12+Θ+Ω.

8. CONCLUSION

In this paper we addressed the problem of multitask learning when tasks are organized in a hierarchy. We presented two convex formulations of this problem and showed that models exploiting the hierarchical structure outperform flat models. Furthermore, we showed how to scale our models to a tera-scale advertising task. An advantage of our hierarchical formulation is the utilization of the task substructure for efficient parameter distribution, which reduces parameter movement across machines. We validated our models on both public and private datasets with favorable performance.

9. REFERENCES

[1] A. Ahmed, M. Aly, A. Das, A. Smola, and T. Anastasakos. Web-scale multi-task feature selection for behavioral targeting. In CIKM.
[2] A. Ahmed, M. Aly, J. Gonzalez, S. Narayanamurthy, and A. Smola. Scalable inference in latent variable models. In Web Search and Data Mining (WSDM).
[3] M. Aly, A. Hatch, V. Josifovski, and V. K. Narayanan. Web-scale user modeling for targeting. In WWW.
[4] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. Machine Learning, 73(3).
[5] F. Bach, R. Jenatton, J. Mairal, and G. Obozinski. Optimization with sparsity-inducing penalties. Foundations and Trends in Machine Learning, 4(1):1–106.
[6] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1).
[7] R. Caruana. Multitask learning. Machine Learning, 28:41–75.
[8] N. R. Draper and H. Smith. Applied Regression Analysis. John Wiley and Sons, New York, NY.
[9] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3).
[10] R. Jenatton, J. Mairal, G. Obozinski, and F. Bach. Proximal methods for hierarchical sparse coding. Journal of Machine Learning Research, 12.
[11] D. Karger, E. Lehman, T. Leighton, M. Levine, D. Lewin, and R. Panigrahy. Consistent hashing and random trees: Distributed caching protocols for relieving hot spots on the world wide web. In Symposium on the Theory of Computing (STOC), New York. Association for Computing Machinery.
[12] T. R. Shultz and F. Rivest. Using knowledge to speed learning: A comparison of knowledge-based cascade-correlation and multi-task learning. In Proc. Intl. Conf. Machine Learning. Morgan Kaufmann, San Francisco, CA.
[13] A. J. Smola and S. Narayanamurthy. An architecture for parallel topic models. In Very Large Databases (VLDB).
[14] S. Sonnenburg, G. Rätsch, C. Schäfer, and B. Schölkopf. Large scale multiple kernel learning. Journal of Machine Learning Research, 7.
[15] W. H. Southwell. Fitting data to nonlinear functions with uncertainties in all measurement variables. Comput. J., 19(1):69–73.
[16] N. Srebro and A. Shraibman. Rank, trace-norm and max-norm. In P. Auer and R. Meir, editors, Proc. Annual Conf. Computational Learning Theory, number 3559 in Lecture Notes in Artificial Intelligence. Springer-Verlag.
[17] C. Teo, Q. Le, A. J. Smola, and S. V. N. Vishwanathan. A scalable modular convex solver for regularized risk minimization. In Proc. ACM Conf. Knowledge Discovery and Data Mining (KDD). ACM.
[18] S. Thrun and J. O'Sullivan. Discovering structure in multiple learning tasks: the TC algorithm. In Proc. Intl. Conf. Machine Learning. Morgan Kaufmann.
[19] M. Varma and B. R. Babu. More generality in efficient multiple kernel learning. In A. P. Danyluk, L. Bottou, and M. L. Littman, editors, ICML, volume 382 of ACM International Conference Proceeding Series, page 134. ACM.
[20] K. Weinberger, A. Dasgupta, J. Attenberg, J. Langford, and A. J. Smola. Feature hashing for large scale multitask learning. In L. Bottou and M. Littman, editors, International Conference on Machine Learning.
[21] J. Ye, J. Chow, J. Chen, and Z. Zheng. Stochastic gradient boosted distributed decision trees. In CIKM. ACM.
[22] K. Yu, V. Tresp, and A. Schwaighofer. Learning Gaussian processes from multiple tasks. In Proceedings of the 22nd International Conference on Machine Learning, volume 119. ACM.
[23] Y. Zhang and D.-Y. Yeung. A convex formulation for learning task relationships in multi-task learning. In Uncertainty in Artificial Intelligence, 2010.


A Survey of Usability Evaluation in Virtual Environments: Classi cation and Comparison of Methods Doug A. Bowman bowman@vt.edu Department of Computer Siene Virginia Teh Joseph L. Gabbard Deborah Hix [ jgabbard, hix]@vt.edu Systems Researh Center Virginia Teh A Survey of Usability Evaluation in Virtual

More information

To Coordinate Or Not To Coordinate? Wide-Area Traffic Management for Data Centers

To Coordinate Or Not To Coordinate? Wide-Area Traffic Management for Data Centers To Coordinate Or Not To Coordinate? Wide-Area Traffi Management for Data Centers Srinivas Narayana, Joe Wenjie Jiang, Jennifer Rexford, Mung Chiang Department of Computer Siene, and Department of Eletrial

More information

Optimal Sales Force Compensation

Optimal Sales Force Compensation Optimal Sales Fore Compensation Matthias Kräkel Anja Shöttner Abstrat We analyze a dynami moral-hazard model to derive optimal sales fore ompensation plans without imposing any ad ho restritions on the

More information

Improved SOM-Based High-Dimensional Data Visualization Algorithm

Improved SOM-Based High-Dimensional Data Visualization Algorithm Computer and Information Siene; Vol. 5, No. 4; 2012 ISSN 1913-8989 E-ISSN 1913-8997 Published by Canadian Center of Siene and Eduation Improved SOM-Based High-Dimensional Data Visualization Algorithm Wang

More information

Procurement auctions are sometimes plagued with a chosen supplier s failing to accomplish a project successfully.

Procurement auctions are sometimes plagued with a chosen supplier s failing to accomplish a project successfully. Deision Analysis Vol. 7, No. 1, Marh 2010, pp. 23 39 issn 1545-8490 eissn 1545-8504 10 0701 0023 informs doi 10.1287/dea.1090.0155 2010 INFORMS Managing Projet Failure Risk Through Contingent Contrats

More information

SLA-based Resource Allocation for Software as a Service Provider (SaaS) in Cloud Computing Environments

SLA-based Resource Allocation for Software as a Service Provider (SaaS) in Cloud Computing Environments 2 th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing SLA-based Resoure Alloation for Software as a Servie Provider (SaaS) in Cloud Computing Environments Linlin Wu, Saurabh Kumar

More information

A Robust Optimization Approach to Dynamic Pricing and Inventory Control with no Backorders

A Robust Optimization Approach to Dynamic Pricing and Inventory Control with no Backorders A Robust Optimization Approah to Dynami Priing and Inventory Control with no Bakorders Elodie Adida and Georgia Perakis July 24 revised July 25 Abstrat In this paper, we present a robust optimization formulation

More information

Software Ecosystems: From Software Product Management to Software Platform Management

Software Ecosystems: From Software Product Management to Software Platform Management Software Eosystems: From Software Produt Management to Software Platform Management Slinger Jansen, Stef Peeters, and Sjaak Brinkkemper Department of Information and Computing Sienes Utreht University,

More information

) ( )( ) ( ) ( )( ) ( ) ( ) (1)

) ( )( ) ( ) ( )( ) ( ) ( ) (1) OPEN CHANNEL FLOW Open hannel flow is haraterized by a surfae in ontat with a gas phase, allowing the fluid to take on shapes and undergo behavior that is impossible in a pipe or other filled onduit. Examples

More information

Hierarchical Beta Processes and the Indian Buffet Process

Hierarchical Beta Processes and the Indian Buffet Process Hierarhial Beta Proesses and the Indian Buffet Proess Romain Thibaux Dept. of EECS University of California, Berkeley Berkeley, CA 9472 Mihael I. Jordan Dept. of EECS and Dept. of Statistis University

More information

Bayes Bluff: Opponent Modelling in Poker

Bayes Bluff: Opponent Modelling in Poker Bayes Bluff: Opponent Modelling in Poker Finnegan Southey, Mihael Bowling, Brye Larson, Carmelo Piione, Neil Burh, Darse Billings, Chris Rayner Department of Computing Siene University of Alberta Edmonton,

More information

Agile ALM White Paper: Redefining ALM with Five Key Practices

Agile ALM White Paper: Redefining ALM with Five Key Practices Agile ALM White Paper: Redefining ALM with Five Key Praties by Ethan Teng, Cyndi Mithell and Chad Wathington 2011 ThoughtWorks ln. All rights reserved www.studios.thoughtworks.om Introdution The pervasiveness

More information

UNIVERSITY AND WORK-STUDY EMPLOYERS WEB SITE USER S GUIDE

UNIVERSITY AND WORK-STUDY EMPLOYERS WEB SITE USER S GUIDE UNIVERSITY AND WORK-STUDY EMPLOYERS WEB SITE USER S GUIDE September 8, 2009 Table of Contents 1 Home 2 University 3 Your 4 Add 5 Managing 6 How 7 Viewing 8 Closing 9 Reposting Page 1 and Work-Study Employers

More information

An integrated optimization model of a Closed- Loop Supply Chain under uncertainty

An integrated optimization model of a Closed- Loop Supply Chain under uncertainty ISSN 1816-6075 (Print), 1818-0523 (Online) Journal of System and Management Sienes Vol. 2 (2012) No. 3, pp. 9-17 An integrated optimization model of a Closed- Loop Supply Chain under unertainty Xiaoxia

More information

GABOR AND WEBER LOCAL DESCRIPTORS PERFORMANCE IN MULTISPECTRAL EARTH OBSERVATION IMAGE DATA ANALYSIS

GABOR AND WEBER LOCAL DESCRIPTORS PERFORMANCE IN MULTISPECTRAL EARTH OBSERVATION IMAGE DATA ANALYSIS HENRI COANDA AIR FORCE ACADEMY ROMANIA INTERNATIONAL CONFERENCE of SCIENTIFIC PAPER AFASES 015 Brasov, 8-30 May 015 GENERAL M.R. STEFANIK ARMED FORCES ACADEMY SLOVAK REPUBLIC GABOR AND WEBER LOCAL DESCRIPTORS

More information

Computer Networks Framing

Computer Networks Framing Computer Networks Framing Saad Mneimneh Computer Siene Hunter College of CUNY New York Introdution Who framed Roger rabbit? A detetive, a woman, and a rabbit in a network of trouble We will skip the physial

More information

Recommending Questions Using the MDL-based Tree Cut Model

Recommending Questions Using the MDL-based Tree Cut Model WWW 2008 / Refereed Trak: Data Mining - Learning April 2-25, 2008 Beijing, China Reommending Questions Using the MDL-based Tree Cut Model Yunbo Cao,2, Huizhong Duan, Chin-Yew Lin 2, Yong Yu, and Hsiao-Wuen

More information

A Theoretical Analysis of Credit Card Reform in Australia *

A Theoretical Analysis of Credit Card Reform in Australia * A Theoretial Analysis of Credit Card Reform in Australia * by Joshua S. Gans and Stephen P. King Melbourne Business Shool University of Melbourne First Draft: 12 th May, 2001 This Version: 5 th May, 2003

More information

Trade Information, Not Spectrum: A Novel TV White Space Information Market Model

Trade Information, Not Spectrum: A Novel TV White Space Information Market Model Trade Information, Not Spetrum: A Novel TV White Spae Information Market Model Yuan Luo, Lin Gao, and Jianwei Huang 1 Abstrat In this paper, we propose a novel information market for TV white spae networks,

More information

MATE: MPLS Adaptive Traffic Engineering

MATE: MPLS Adaptive Traffic Engineering MATE: MPLS Adaptive Traffi Engineering Anwar Elwalid Cheng Jin Steven Low Indra Widjaja Bell Labs EECS Dept EE Dept Fujitsu Network Communiations Luent Tehnologies Univ. of Mihigan Calteh Pearl River,

More information

Solving the Game of Awari using Parallel Retrograde Analysis

Solving the Game of Awari using Parallel Retrograde Analysis Solving the Game of Awari using Parallel Retrograde Analysis John W. Romein and Henri E. Bal Vrije Universiteit, Faulty of Sienes, Department of Mathematis and Computer Siene, Amsterdam, The Netherlands

More information

1.3 Complex Numbers; Quadratic Equations in the Complex Number System*

1.3 Complex Numbers; Quadratic Equations in the Complex Number System* 04 CHAPTER Equations and Inequalities Explaining Conepts: Disussion and Writing 7. Whih of the following pairs of equations are equivalent? Explain. x 2 9; x 3 (b) x 29; x 3 () x - 2x - 22 x - 2 2 ; x

More information

PROCEEDS OF CRIME (BUSINESS IN THE REGULATED SECTOR) ORDER 2015

PROCEEDS OF CRIME (BUSINESS IN THE REGULATED SECTOR) ORDER 2015 Proeeds of Crime (Business in the Regulated Setor) Order 2015 Artile 1 Statutory Doument No. 2015/0073 Proeeds of Crime At 2008 PROCEEDS OF CRIME (BUSINESS IN THE REGULATED SECTOR) ORDER 2015 Approved

More information

NOMCLUST: AN R PACKAGE FOR HIERARCHICAL CLUSTERING OF OBJECTS CHARACTERIZED BY NOMINAL VARIABLES

NOMCLUST: AN R PACKAGE FOR HIERARCHICAL CLUSTERING OF OBJECTS CHARACTERIZED BY NOMINAL VARIABLES The 9 th International Days of Statistis and Eonomis, Prague, September 10-1, 015 NOMCLUST: AN R PACKAGE FOR HIERARCHICAL CLUSTERING OF OBJECTS CHARACTERIZED BY NOMINAL VARIABLES Zdeněk Šul Hana Řezanková

More information

Unit 12: Installing, Configuring and Administering Microsoft Server

Unit 12: Installing, Configuring and Administering Microsoft Server Unit 12: Installing, Configuring and Administering Mirosoft Server Learning Outomes A andidate following a programme of learning leading to this unit will be able to: Selet a suitable NOS to install for

More information

The Application of Mamdani Fuzzy Model for Auto Zoom Function of a Digital Camera

The Application of Mamdani Fuzzy Model for Auto Zoom Function of a Digital Camera (IJCSIS) International Journal of Computer Siene and Information Seurity, Vol. 6, No. 3, 2009 The Appliation of Mamdani Fuzzy Model for Auto Funtion of a Digital Camera * I. Elamvazuthi, P. Vasant Universiti

More information

Paid Placement Strategies for Internet Search Engines

Paid Placement Strategies for Internet Search Engines Paid Plaement Strategies for Internet Searh Engines Hemant K. Bhargava Smeal College of Business Penn State University 342 Beam Building University Park, PA 16802 bhargava@omputer.org Juan Feng Smeal College

More information

Behavior Analysis-Based Learning Framework for Host Level Intrusion Detection

Behavior Analysis-Based Learning Framework for Host Level Intrusion Detection Behavior Analysis-Based Learning Framework for Host Level Intrusion Detetion Haiyan Qiao, Jianfeng Peng, Chuan Feng, Jerzy W. Rozenblit Eletrial and Computer Engineering Department University of Arizona

More information

university of illinois library AT URBANA-CHAMPAIGN BOOKSTACKS

university of illinois library AT URBANA-CHAMPAIGN BOOKSTACKS university of illinois library AT URBANA-CHAMPAIGN BOOKSTACKS CENTRAL CIRCULATION BOOKSTACKS The person harging this material is responsible for its renewal or its return to the library from whih it was

More information

3 Game Theory: Basic Concepts

3 Game Theory: Basic Concepts 3 Game Theory: Basi Conepts Eah disipline of the soial sienes rules omfortably ithin its on hosen domain: : : so long as it stays largely oblivious of the others. Edard O. Wilson (1998):191 3.1 and and

More information

Neural network-based Load Balancing and Reactive Power Control by Static VAR Compensator

Neural network-based Load Balancing and Reactive Power Control by Static VAR Compensator nternational Journal of Computer and Eletrial Engineering, Vol. 1, No. 1, April 2009 Neural network-based Load Balaning and Reative Power Control by Stati VAR Compensator smail K. Said and Marouf Pirouti

More information

WATER CLOSET SUPPORTS TECHNICAL DATA

WATER CLOSET SUPPORTS TECHNICAL DATA WATER CLOSET SUPPORTS TECHNICAL DATA Smith engineers have developed an unusually omplete line of fixture supports for mounting all types of "off the floor" fixtures. Supports have been designed for water

More information

Optimal Online Buffer Scheduling for Block Devices *

Optimal Online Buffer Scheduling for Block Devices * Optimal Online Buffer Sheduling for Blok Devies * ABSTRACT Anna Adamaszek Department of Computer Siene and Centre for Disrete Mathematis and its Appliations (DIMAP) University of Warwik, Coventry, UK A.M.Adamaszek@warwik.a.uk

More information

REVISTA INVESTIGACIÓN OPERACIONAL Vol. 28, No.1, 4-16, 2007

REVISTA INVESTIGACIÓN OPERACIONAL Vol. 28, No.1, 4-16, 2007 REVISTA INVESTIGACIÓN OPERACIONAL Vol. 28 No.1 4-16 2007 ALGORITHMS FOR MEAN-RISK STOCHASTIC INTEGER PROGRAMS IN ENERGY Rüdiger Shultz Frederike Neise Department of Mathematis University of Duisburg-Essen

More information

Programming Basics - FORTRAN 77 http://www.physics.nau.edu/~bowman/phy520/f77tutor/tutorial_77.html

Programming Basics - FORTRAN 77 http://www.physics.nau.edu/~bowman/phy520/f77tutor/tutorial_77.html CWCS Workshop May 2005 Programming Basis - FORTRAN 77 http://www.physis.nau.edu/~bowman/phy520/f77tutor/tutorial_77.html Program Organization A FORTRAN program is just a sequene of lines of plain text.

More information

Ranking Community Answers by Modeling Question-Answer Relationships via Analogical Reasoning

Ranking Community Answers by Modeling Question-Answer Relationships via Analogical Reasoning Ranking Community Answers by Modeling Question-Answer Relationships via Analogial Reasoning Xin-Jing Wang Mirosoft Researh Asia 4F Sigma, 49 Zhihun Road Beijing, P.R.China xjwang@mirosoft.om Xudong Tu,Dan

More information

The Basics of International Trade: A Classroom Experiment

The Basics of International Trade: A Classroom Experiment The Basis of International Trade: A Classroom Experiment Alberto Isgut, Ganesan Ravishanker, and Tanya Rosenblat * Wesleyan University Abstrat We introdue a simple web-based lassroom experiment in whih

More information

Chapter 1: Introduction

Chapter 1: Introduction Chapter 1: Introdution 1.1 Pratial olumn base details in steel strutures 1.1.1 Pratial olumn base details Every struture must transfer vertial and lateral loads to the supports. In some ases, beams or

More information

IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, VOL. 9, NO. 3, MAY/JUNE 2012 401

IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, VOL. 9, NO. 3, MAY/JUNE 2012 401 IEEE TRASACTIOS O DEPEDABLE AD SECURE COMPUTIG, VOL. 9, O. 3, MAY/JUE 2012 401 Mitigating Distributed Denial of Servie Attaks in Multiparty Appliations in the Presene of Clok Drifts Zhang Fu, Marina Papatriantafilou,

More information

INCOME TAX WITHHOLDING GUIDE FOR EMPLOYERS

INCOME TAX WITHHOLDING GUIDE FOR EMPLOYERS Virginia Department of Taxation INCOME TAX WITHHOLDING GUIDE FOR EMPLOYERS www.tax.virginia.gov 2614086 Rev. 07/14 * Table of Contents Introdution... 1 Important... 1 Where to Get Assistane... 1 Online

More information

User s Guide VISFIT: a computer tool for the measurement of intrinsic viscosities

User s Guide VISFIT: a computer tool for the measurement of intrinsic viscosities File:UserVisfit_2.do User s Guide VISFIT: a omputer tool for the measurement of intrinsi visosities Version 2.a, September 2003 From: Multiple Linear Least-Squares Fits with a Common Interept: Determination

More information

Chapter 6 A N ovel Solution Of Linear Congruenes Proeedings NCUR IX. (1995), Vol. II, pp. 708{712 Jerey F. Gold Department of Mathematis, Department of Physis University of Utah Salt Lake City, Utah 84112

More information

Product Warranties and Double Adverse Selection

Product Warranties and Double Adverse Selection rodut Warranties and Double Adverse eletion David A. oberman Assistant rofessor of Marketing INEAD Boulevard de Constane 77305 Fontainebleau Cede, Frane The author thanks rofessors Andy Mithell, Jak Mintz,

More information

RELEASING MICRODATA: DISCLOSURE RISK ESTIMATION, DATA MASKING AND ASSESSING UTILITY

RELEASING MICRODATA: DISCLOSURE RISK ESTIMATION, DATA MASKING AND ASSESSING UTILITY Setion on Survey Researh Methods JSM 008 RELEASING MICRODATA: DISCLOSURE RISK ESTIMATION, DATA MASKING AND ASSESSING UTILITY Natalie Shlomo 1 1 Southampton Statistial Sienes Researh Institute, University

More information

From a strategic view to an engineering view in a digital enterprise

From a strategic view to an engineering view in a digital enterprise Digital Enterprise Design & Management 2013 February 11-12, 2013 Paris From a strategi view to an engineering view in a digital enterprise The ase of a multi-ountry Telo Hervé Paault Orange Abstrat In

More information

Table of Contents. Appendix II Application Checklist. Export Finance Program Working Capital Financing...7

Table of Contents. Appendix II Application Checklist. Export Finance Program Working Capital Financing...7 Export Finane Program Guidelines Table of Contents Setion I General...........................................................1 A. Introdution............................................................1

More information

Dynamic and Competitive Effects of Direct Mailings

Dynamic and Competitive Effects of Direct Mailings Dynami and Competitive Effets of Diret Mailings Merel van Diepen, Bas Donkers and Philip Hans Franses ERIM REPORT SERIES RESEARCH IN MANAGEMENT ERIM Report Series referene number ERS-2006-050-MKT Publiation

More information

OpenScape 4000 CSTA V7 Connectivity Adapter - CSTA III, Part 2, Version 4.1. Developer s Guide A31003-G9310-I200-1-76D1

OpenScape 4000 CSTA V7 Connectivity Adapter - CSTA III, Part 2, Version 4.1. Developer s Guide A31003-G9310-I200-1-76D1 OpenSape 4000 CSTA V7 Connetivity Adapter - CSTA III, Part 2, Version 4.1 Developer s Guide A31003-G9310-I200-1-76 Our Quality and Environmental Management Systems are implemented aording to the requirements

More information

In many services, the quality or value provided by the service increases with the time the service provider

In many services, the quality or value provided by the service increases with the time the service provider MANAGEMENT SCIENCE Vol. 57, No. 1, January 2011, pp. 40 56 issn 0025-1909 eissn 1526-5501 11 5701 0040 informs doi 10.1287/mns.1100.1250 2011 INFORMS Quality Speed Conundrum: Trade-offs in Customer-Intensive

More information

Information Security 201

Information Security 201 FAS Information Seurity 201 Desktop Referene Guide Introdution Harvard University is ommitted to proteting information resoures that are ritial to its aademi and researh mission. Harvard is equally ommitted

More information

A novel active mass damper for vibration control of bridges

A novel active mass damper for vibration control of bridges IABMAS 08, International Conferene on Bridge Maintenane, Safety and Management, 3-7 July 008, Seoul, Korea A novel ative mass damper for vibration ontrol of bridges U. Starossek & J. Sheller Strutural

More information

Chapter 5 Single Phase Systems

Chapter 5 Single Phase Systems Chapter 5 Single Phase Systems Chemial engineering alulations rely heavily on the availability of physial properties of materials. There are three ommon methods used to find these properties. These inlude

More information

International Journal of Supply and Operations Management. Mathematical modeling for EOQ inventory system with advance payment and fuzzy Parameters

International Journal of Supply and Operations Management. Mathematical modeling for EOQ inventory system with advance payment and fuzzy Parameters nternational Journal of Supply and Operations Management JSOM November 0, Volume, ssue 3, pp. 60-78 SSN-Print: 383-359 SSN-Online: 383-55 www.ijsom.om Mathematial modeling for EOQ inventory system with

More information

RESEARCH SEMINAR IN INTERNATIONAL ECONOMICS. Discussion Paper No. 475. The Evolution and Utilization of the GATT/WTO Dispute Settlement Mechanism

RESEARCH SEMINAR IN INTERNATIONAL ECONOMICS. Discussion Paper No. 475. The Evolution and Utilization of the GATT/WTO Dispute Settlement Mechanism RESEARCH SEMINAR IN INTERNATIONAL ECONOMICS Shool of Publi Poliy The University of Mihigan Ann Arbor, Mihigan 48109-1220 Disussion Paper No. 475 The Evolution and Utilization of the GATT/WTO Dispute Settlement

More information

A Comparison of Service Quality between Private and Public Hospitals in Thailand

A Comparison of Service Quality between Private and Public Hospitals in Thailand International Journal of Business and Soial Siene Vol. 4 No. 11; September 2013 A Comparison of Servie Quality between Private and Hospitals in Thailand Khanhitpol Yousapronpaiboon, D.B.A. Assistant Professor

More information