Privacy Games: Optimal User-Centric Data Obfuscation. Reza Shokri*


Proceedings on Privacy Enhancing Technologies 2015; 2015 (2):1-17

Reza Shokri*

Privacy Games: Optimal User-Centric Data Obfuscation

Abstract: Consider users who share their data (e.g., location) with an untrusted service provider to obtain a personalized (e.g., location-based) service. Data obfuscation is a prevalent user-centric approach to protecting user privacy in such systems: the untrusted entity only receives a noisy version of user data. Perturbing data before sharing it, however, comes at the price of the user's utility (service quality) experience, which is an inseparable design factor of obfuscation mechanisms. The entanglement of the utility loss and the privacy guarantee, in addition to the lack of a comprehensive notion of privacy, have led to the design of obfuscation mechanisms that are either suboptimal in terms of their utility loss, or ignore the user's information leakage in the past, or are limited to very specific notions of privacy which, e.g., do not protect against adaptive inference attacks or adversaries with arbitrary background knowledge. In this paper, we design user-centric obfuscation mechanisms that impose the minimum utility loss for guaranteeing the user's privacy. We optimize utility subject to a joint guarantee of differential privacy (indistinguishability) and distortion privacy (inference error). This double shield of protection limits the information leakage through the obfuscation mechanism as well as the posterior inference. We show that the privacy achieved through joint differential-distortion mechanisms against optimal attacks is as large as the maximum privacy that can be achieved by either of these mechanisms separately. Their utility cost is also not larger than what either of the differential or distortion mechanisms impose. We model the optimization problem as a leader-follower game between the designer of the obfuscation mechanism and the potential adversary, and design adaptive mechanisms that anticipate and protect against optimal inference algorithms. Thus, the obfuscation mechanism is optimal against any inference algorithm.
Keywords: Data Privacy, Obfuscation, Utility, Differential Privacy, Distortion Privacy, Inference Attacks, Prior Knowledge, Optimization, Game Theory

DOI /popets; Received ; revised ; accepted .

*Corresponding Author: Reza Shokri: University of Texas at Austin, [email protected]

1 Introduction

Data obfuscation is a mechanism for hiding private data by using misleading, false, or ambiguous information with the intention of confusing an adversary [14]. A data obfuscation mechanism acts as a noisy information channel between a user's private data (secret) and an untrusted observer [16]. The noisier this channel is, the higher the privacy of the user will be. We focus on user-centric mechanisms, in which each user independently perturbs her secret before releasing it. Note that we are not concerned with database privacy, but with the privacy issues of releasing a single sensitive data sample (which, however, could be continuously shared over time). For example, consider a mobile user who is concerned about the information leakage through her location-based queries. In this case, obfuscation is the process of randomizing the true location so that the location-based server only receives the user's perturbed location. By using obfuscation mechanisms, the privacy of a user and her utility experience are at odds with each other, as the service that the user receives is a function of what she shares with the service provider. There are two problems to be addressed here. One is how to design an obfuscation mechanism that protects the privacy of the user and imposes a minimum utility cost. The other is how to guarantee the user's privacy, despite the lack of a single best metric for privacy. Regarding utility optimization, we define the utility loss of obfuscation as the degradation of the user's service-quality expectation due to sharing the noisy data instead of its true value. Regarding privacy protection, there are two major metrics proposed in the literature. Differential privacy limits the information leakage through observation.
But, it does not reflect the absolute privacy level of the user, i.e., what actually is learned about the user's secret. So, the user would not know how close the adversary's estimate will get to her secret if she releases the noisy data, despite being sure that the relative gain of observation for the adversary is bounded. Distortion privacy (inference error) metrics overcome this issue and measure the error of inferring the user's secret from the observation. This requires the assumption of a prior knowledge, which enables us to quantify absolute privacy, but is not robust to adversaries with arbitrary

knowledge. Thus, either of these metrics alone is incapable of capturing privacy as a whole. The problem of optimizing the tradeoff between privacy and utility has already been discussed in the literature, but notably for differential privacy in the context of statistical databases [11, 25, 26, 28, 36]. Regarding user-centric obfuscation mechanisms, [52] solves the problem of maximizing distortion privacy under a constraint on utility loss. The authors construct the optimal adaptive obfuscation mechanism as the user's best response to the adversary's optimal inference in a Bayesian zero-sum game. In the same context, [9] solves the opposite problem, i.e., optimizing utility but for differential privacy. In both papers, the authors construct the optimal solution using linear programming. Differential and distortion metrics for privacy complement each other. The former is sensitive to the likelihood of the observation given the data. The latter is sensitive to the joint probability of the observation and data. Thus, by guaranteeing both, we encompass all the defenses that are theoretically possible. In this paper, we model and solve the optimal obfuscation mechanism that: (i) minimizes utility loss, (ii) satisfies differential privacy, and (iii) guarantees distortion privacy, given a public knowledge on prior leakage about the secret. We measure the involved metrics based on separate distance functions defined on the set of secrets. We model prior leakage as a probability distribution over secrets, which can be estimated from the user's previously released data. Ignoring such information leads to overestimating the user's privacy and thus designing a weak obfuscation mechanism (against adversaries who include such exposed information in their inference attacks).¹ A protection mechanism for the distortion privacy metric can be designed such that it is optimal against a particular inference algorithm (e.g., Bayesian inference [7, 38] as a privacy attack [51, 54]).
But, by doing so, it is not guaranteed that the promised privacy level can be achieved in practice: an adversarial observer can run inference attacks that are optimally tailored against the very obfuscation mechanism used by the user (regardless of the algorithm that the user assumes a priori). In fact, the adversary has the upper hand, as he infers the user's secret (private information) after observing the output of the obfuscation mechanism. Thus, the obfuscation mechanism must anticipate the adaptive inference attacks that will follow the observation. This enables us to design an obfuscation mechanism that is independent of the adversary's inference algorithm. To address this concern, we adapt a game-theoretic notion of privacy for designing optimal obfuscation mechanisms against adaptive inference. We formulate this game as a Stackelberg game and solve it using linear programming.² We then add the differential privacy guarantee as a constraint in the linear program and solve it to construct the optimal mechanism. The result of using such obfuscation mechanisms is that not only are the perturbed data samples indistinguishable from the true secrets (due to the differential privacy bound), but also they cannot be used to accurately infer the secret using the prior leakage (due to the distortion privacy measure). To the best of our knowledge, this work is the first to construct utility-maximizing obfuscation mechanisms with such formal privacy guarantees.

¹ Note that the prior leakage is not equivalent to the adversary's knowledge. An adversary might have access to some information about the user's data through channels which the user is unaware of and has no control over. No protection mechanism can guarantee distortion privacy against adversaries with arbitrary knowledge. Imagine the worst case where the adversary knows the exact secret, but through channels other than observation of the obfuscation mechanism. Therefore, our focus is on the user, and we incorporate what the user thinks has been leaked so far.
We illustrate the application of the optimal protection mechanisms on a real data set of user locations, where users want to protect their location privacy against location-based services. We evaluate the effects of the privacy guarantees on the utility cost. We also analyze the robustness of our optimal obfuscation mechanisms against inference attacks with different algorithms and background knowledge. We show that our joint differential-distortion mechanisms are robust against adversaries with optimal attacks and background knowledge. Moreover, the utility loss is at most equal to the utility loss of differential or distortion privacy, separately. The novelty of this paper in the context of user-centric obfuscation is twofold:
- We construct optimal obfuscation mechanisms that provably limit the user's privacy risk (i.e., by guaranteeing the user's distortion privacy) against any inference attack, with minimum utility cost.
- We design obfuscation mechanisms that optimally balance the tradeoff between utility and joint distortion-differential privacy. The solution is robust against adversaries with arbitrary knowledge, yet it guarantees the required privacy given the user's estimation of the prior information leakage.

² As opposed to [52], the game is not zero-sum anymore, given that here the user maximizes utility and the observer minimizes privacy.

2 Related Work

This paper contributes to the broad area of research that concerns designing obfuscation mechanisms, e.g., in the context of quantitative information flow [34, 40, 41], quantitative privacy in data sharing systems [4, 51, 52], as well as differential privacy [21, 26, 28, 36]. The conflict between privacy and utility has been discussed in the literature [12, 31]. We build upon prevalent notions of privacy and protect it with respect to information leakage through both observation (differential privacy) and posterior inference (distortion privacy), while optimizing the tradeoff between utility and privacy. We also formalize this problem and solve it for user-centric obfuscation mechanisms, where it is each individual user who perturbs her secret data before sharing it with external observers (e.g., service providers). The problem of perturbing data for differential and distortion privacy, separately, and optimizing their effects on utility has already been discussed in the literature. The original metric for differential privacy measures the privacy of output perturbation methods in statistical databases [21]. Assuming two statistical databases to be neighbors if they differ only in one entry, [25] and [26] design utility-maximizing perturbation mechanisms for the case of counting queries. In [28, 36], the authors propose different approaches to designing perturbation mechanisms for counting queries under differential privacy. However, [11] presents some impossibility results of extending these approaches to other types of database queries. Under some assumptions about the utility metric, [24] shows that the optimal perturbation probability distribution has a symmetric staircase-shaped probability density function. [6, 15, 49] extend differential privacy metrics using generic distance functions on the set of secrets. Some extensions of differential privacy also consider the problem of incorporating the prior knowledge into its privacy definition [29, 32].
The most related paper to our framework, in this domain, is [9], where the authors construct utility-maximizing differentially private obfuscation mechanisms using linear programming. The authors prove an interesting relation between utility-maximizing differential privacy and distortion-privacy-maximizing mechanisms that bound utility, when the distance functions used in the utility and privacy metrics are the same. This, however, cannot guarantee distortion privacy for general metrics. The optimal differentially private mechanisms, in general, do not incorporate the available knowledge about the secret while achieving differential privacy. Distortion privacy, which evaluates privacy as the inference error [51], is a follow-up of information-theoretic metrics for anonymity and information leakage [16, 20, 34, 50]. This class of metrics is concerned with what can be inferred about the true secret of the user by combining the observation (of obfuscated information) and prior knowledge. The problem of maximizing privacy under a utility constraint, assuming a prior, is proven to be equivalent to the user's best strategy in a zero-sum game against adaptive adversaries [52]. With this approach, one can find the optimal strategies using linear programming. In fact, linear programming is the most efficient solution for this problem [18]. However, if we want to guarantee a certain level of privacy for the user and maximize her utility, the problem cannot be modeled as a zero-sum game anymore, and there has been no solution for it so far. We formalize this game, and construct a linear programming solution for these privacy games too. Regarding the utility metric, we consider the expected distance between the observation and the secret as the utility metric [11, 15, 25, 52]. The distance function can depend on the user and also the application. In the case of applying obfuscation over time, we need to update the user's estimation of the prior leakage according to what has been shared by the user [19, 53].
We might also need to update the differential privacy budget over time [17]. In this paper, we model one-time sharing of a secret, assuming that the prior leakage and the differential privacy budget are properly computed and adjusted based on the previous observations. Our problem is also related to the problem of adversarial machine learning [5, 30] and the design of security mechanisms, such as intelligent spam detection algorithms [13, 35, 37], against adaptive attackers. It is also similar to the problem of placing security patrols in an area to minimize the threat of attackers [48], and to faking location-based queries to protect against localization attacks [52]. The survey [39] explores more examples of the relation between security and game theory.

3 Definitions

In this section, we define the different parts of our model. We assume a user shares her data through an information sharing system in order to obtain some service (utility). We also assume that users want to protect their sensitive information while they share their data with untrusted entities. For example, in the case of sharing location-tagged data with a service provider, a user might want to hide the exact visited locations, their semantics, or her activities that can be inferred from the visited locations. We refer to the user's sensitive information as her secret. To protect her privacy, we assume that the user obfuscates her data before sharing or publishing it. Figure 1 illustrates the information flow that we assume in this paper.

The input to the protection mechanism is a secret s ∈ S, where S is the set of all possible values that s can take (for example, the locations that the user can visit, or the individuals that she is acquainted with). Let the prior leakage π be the probability distribution over the values of s that reflects the data model and the a priori exposed information about the secret.

π(s) = Pr{S = s}    (1)

The probability distribution π is estimated by the user to be the predictability of the user's secret given her exposed information in the past. Thus, any time the user shares some (obfuscated) information, she needs to update this probability distribution [19, 53]. This is how we incorporate the correlation between the user's data shared over time.

3.1 Obfuscation Mechanism

We assume that a user wants to preserve her privacy with respect to s. To protect her privacy, a user obfuscates her secret and shares an inaccurate version of it through the system. We assume that this obfuscated data o ∈ O is observable through the system. We consider a generic class of obfuscation mechanisms, in which the observable o is sampled according to the following probability distribution.

p(o|s) = Pr{O = o | S = s}    (2)

Thus, we model the privacy preserving mechanism as a noisy channel between the user and the untrusted observer. This is similar to the model used in quantitative information flow and quantitative side-channel analysis [34, 41]. The output, i.e., a member of the set of observables O, can in general be a member of the powerset of S. As an example, in the most basic case, O = S, i.e., the protection mechanism can only perturb the secret by replacing it with another possible secret value.
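To make the channel view concrete, the following sketch (with illustrative cell names and probabilities, not values from the paper) encodes p(o|s) as a row-stochastic table over three hypothetical location cells and samples an observable for a given secret:

```python
import random

# Toy secret set (location cells) and prior leakage pi(s), as in eq. (1).
secrets = ["home", "work", "cafe"]
pi = {"home": 0.5, "work": 0.3, "cafe": 0.2}

# Obfuscation mechanism p(o|s), eq. (2): each row is a conditional
# distribution over observables (here O = S, i.e., pure perturbation).
p = {
    "home": {"home": 0.6, "work": 0.2, "cafe": 0.2},
    "work": {"home": 0.2, "work": 0.6, "cafe": 0.2},
    "cafe": {"home": 0.2, "work": 0.2, "cafe": 0.6},
}

def obfuscate(secret, rng=random):
    """Sample an observable o according to p(o|secret)."""
    observables = list(p[secret])
    weights = [p[secret][o] for o in observables]
    return rng.choices(observables, weights=weights, k=1)[0]

# Every row of p must sum to 1 to be a valid noisy channel.
for s in secrets:
    assert abs(sum(p[s].values()) - 1.0) < 1e-9
```

A generalization-style mechanism would instead let each observable be a set of cells (a coarser region), which the table representation accommodates by enlarging O.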
This can happen through adding noise to s. In a more generic case, the members of O can contain a subset of secrets. For example, the protection mechanism can generalize a location coordinate by reducing its granularity.

[Figure 1 diagram: secret s → obfuscation p(o|s) → observable o → inference q(ŝ|o) → estimate ŝ, annotated with prior leakage π(s), utility cost c(o, s), and user-specific privacy d(ŝ, s).]

Fig. 1. The Information Sharing Framework. The probability distribution π encodes the user's estimation of the a priori leaked information about secret s. The secret s is obfuscated by the protection mechanism p, whose output is an observable o. The adaptive adversary (anticipated by the user) runs the inference attack q on o and draws ŝ from a probability distribution over estimates. Distance function c denotes the utility cost of the protection mechanism due to obfuscation. Distance function d denotes the privacy of the user (for the distortion privacy metric) or the required indistinguishability between secrets (for the differential privacy metric). The user defines the distance function d to reflect her privacy sensitivities.

3.2 Utility Cost

Users incur a utility loss due to obfuscation. Let the distance function c(o, s) determine the utility cost (information usefulness degradation) due to replacing a secret s with an observable o. The cost function is dependent on the application of the shared information, on the specific service that is provided to the user, and also on the user's expectations. We compute the expected utility cost of a protection mechanism p as

Σ_s π(s) Σ_o p(o|s) c(o, s).    (3)

We can also compute the worst (maximum) utility cost over all possible secrets as

max_s Σ_o p(o|s) c(o, s).    (4)

In this work, we do not plan to determine which metrics are the best representative utility loss metrics for different types of services or users. We only assume that the designer of the optimal obfuscation mechanism is provided with such a utility function, for example, by constructing it according to the application [43], or by learning it automatically [8] from the user's preferences and application profile.
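Equations (3) and (4) are straightforward to evaluate for a mechanism given in table form. A minimal sketch, reusing a hypothetical three-cell channel and a 0/1 cost function as a stand-in for an application-specific c:

```python
# Expected utility cost, eq. (3): sum_s pi(s) sum_o p(o|s) c(o, s),
# and worst-case cost over secrets, eq. (4). Toy three-cell example.
secrets = ["home", "work", "cafe"]
pi = {"home": 0.5, "work": 0.3, "cafe": 0.2}
p = {
    "home": {"home": 0.6, "work": 0.2, "cafe": 0.2},
    "work": {"home": 0.2, "work": 0.6, "cafe": 0.2},
    "cafe": {"home": 0.2, "work": 0.2, "cafe": 0.6},
}
# c(o, s): 0 if the observable equals the secret, 1 otherwise (a crude
# stand-in for, e.g., Euclidean distance between cells).
c = lambda o, s: 0.0 if o == s else 1.0

def expected_cost(pi, p, c):
    return sum(pi[s] * sum(p[s][o] * c(o, s) for o in p[s]) for s in pi)

def worst_case_cost(p, c):
    return max(sum(p[s][o] * c(o, s) for o in p[s]) for s in p)

print(expected_cost(pi, p, c))   # every row here leaks mass 0.4 off-secret
print(worst_case_cost(p, c))
```

For this symmetric channel both costs come out to 0.4 (up to float rounding), since each row puts probability 0.4 on observables other than the secret.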

3.3 Inference Attacks

We stated that the user wants to protect her privacy with respect to secret s against untrusted observers. To be consistent with this, we define the adversary as an entity who aims at finding the user's secret by observing the outcome of the protection mechanism, thereby minimizing the user's privacy with respect to her privacy sensitivities. For any observation o, we then determine the probability distribution over the possible secrets ŝ ∈ S as to being the true secret of the user.

q(ŝ|o) = Pr{Ŝ = ŝ | O = o}    (5)

The goal of the inference algorithm q is to invert a given protection mechanism p to estimate s. The error of the adversary in this estimation process determines the effectiveness of the inference algorithm, which is captured by the distortion privacy metric.

3.4 Distortion Privacy Metric

As stated above, the user's privacy and the adversary's inference error are two sides of the same coin. We define the privacy gain of the user with secret s as a distance between the two data points: d(ŝ, s), where ŝ is the a posteriori estimation of the secret [51]. The distance function d is determined by the sensitivity of the user towards each secret s when estimated as ŝ. A user would be less worried about revealing o ~ p(o|s) if the portrait of her secret in the eyes of the adversary is an estimate ŝ with a large distance d(ŝ, s). This distance function is defined by the user. It could be a semantic distance between different values of the secret, to reflect the privacy risk of ŝ on the user when her secret is s. Usually, the highest risk is associated with the case where the estimate is equal to the secret. However, sometimes even wrong estimates can impose a high risk on the user, for example when they leak information about the semantics of the secret. We compute the user's privacy obtained through a protection mechanism p, with respect to a given inference algorithm q, for a specific secret s, as

Σ_o p(o|s) Σ_ŝ q(ŝ|o) d(ŝ, s).    (6)

By averaging this value over all possible secrets, we compute the expected distortion privacy of the user as

Σ_s π(s) Σ_o p(o|s) Σ_ŝ q(ŝ|o) d(ŝ, s).    (7)

This metric shows the average estimation error, or how distorted the reconstructed user secret is. Thus, we refer to it as the distortion privacy metric. What associates a semantic meaning to this metric is the distance function d. Many distance functions can be defined to reflect distortion privacy. This depends on the type of the secret and on the sensitivity of the user. For example, if the user's secret is her age, function d could be the absolute distance between two numbers. If the secret is the user's location, function d could be a Euclidean distance between locations, or their semantic dissimilarity. If the secret is the movies that she has watched, function d could be the Jaccard distance between two sets of movies.

3.5 Differential Privacy Metric

The privacy that is achieved by an obfuscation mechanism can be computed with respect to the information leakage through the mechanism, regardless of the secret's inference. For example, the differential privacy metric, originally proposed for protecting privacy in statistical databases [21], is sensitive only to the difference between the probabilities of obfuscating multiple secrets to the same observation (which is the input to the attack). According to the original definition of differential privacy, a randomized function K (that acts as the privacy protection mechanism) provides ε-differential privacy if for all data sets D and D′ that differ on at most one element, and all Y ⊆ Range(K), the following inequality holds.

Pr{K(D) ∈ Y} ≤ exp(ε) · Pr{K(D′) ∈ Y}    (8)

Differential privacy is not limited to statistical databases. It has been used in many different contexts where various types of adjacency relations capture the context-dependent privacy. A typical example is edge privacy in graphs [46]. It has also been proposed for arbitrary distance functions between secrets [15]. This notion can simply be used for measuring information leakage [3]. It has been shown that differential privacy imposes a bound on information leakage [1, 2]. And this is exactly why we are interested in this metric.
Let d_ε(s, s′) be a distinguishability metric between s, s′ ∈ S. A protection mechanism is defined to be differentially private if, for all secrets s, s′ ∈ S where d_ε(s, s′) ≤ d_ε^m, and all observables o ∈ O, the following inequality holds.

p(o|s) ≤ exp(ε) · p(o|s′)    (9)
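Inequality (9) can be checked mechanically for a mechanism given as a table. Below is a sketch, assuming a hypothetical distinguishability metric that puts all secret pairs at distance 1 (all values illustrative, not from the paper):

```python
import math

# p(o|s) for three secrets over the same observable set O = S.
p = {
    "a": {"a": 0.5, "b": 0.3, "c": 0.2},
    "b": {"a": 0.3, "b": 0.5, "c": 0.2},
    "c": {"a": 0.2, "b": 0.3, "c": 0.5},
}

def is_differentially_private(p, eps, d_eps, d_eps_max):
    """Check inequality (9): p(o|s) <= exp(eps) * p(o|s') for every
    pair of secrets with d_eps(s, s') <= d_eps_max and every o."""
    for s in p:
        for s2 in p:
            if s == s2 or d_eps(s, s2) > d_eps_max:
                continue
            for o in p[s]:
                if p[s][o] > math.exp(eps) * p[s2][o]:
                    return False
    return True

# Hypothetical distinguishability: all secrets mutually at distance 1.
d_eps = lambda s, s2: 1.0

# The worst probability ratio in the table is 0.5/0.2 = 2.5 ~ e^0.916,
# so eps = 1 satisfies the bound while eps = 0.5 does not.
assert is_differentially_private(p, 1.0, d_eps, 1.0)
assert not is_differentially_private(p, 0.5, d_eps, 1.0)
```

The same loop with `math.exp(eps * d_eps(s, s2))` checks the generic, distance-weighted form used later in the paper.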

In this paper, we use a generic definition of differential privacy, assuming arbitrary distance functions d_ε(·) on the secrets [6, 15, 22, 49]. In this form, a protection mechanism is differentially private if for all secrets s, s′ ∈ S, with distinguishability d_ε(s, s′), and for all observables o ∈ O, the following holds.

p(o|s) ≤ exp(ε · d_ε(s, s′)) · p(o|s′)    (10)

In fact, the differential privacy metric guarantees that, given the observation, there is not enough convincing evidence to prefer one secret to other similar ones (given d_ε). In other words, it makes multiple secret values indistinguishable from each other.

4 Problem Statement

The problem that we address in this paper is to find an optimal balance between privacy and utility, and to construct the protection mechanism that achieves such an optimal point. More precisely, we want to construct utility-maximizing obfuscation mechanisms with joint differential-distortion privacy guarantees. The problem is to find a probability distribution function p* such that it minimizes the utility cost of the user, on average,

p* = argmin_p Σ_s π(s) Σ_o p(o|s) c(o, s)    (11)

or, alternatively, over all the secrets

p* = argmin_p max_s Σ_o p(o|s) c(o, s)    (12)

under the user's privacy constraints.

4.1 Distortion Privacy Constraint

Let d_m be the minimum desired distortion privacy level. The user's average distortion privacy is guaranteed if the obfuscation mechanism p* satisfies the following inequality,

Σ_s π(s) Σ_o p*(o|s) Σ_ŝ q*(ŝ|o) d(ŝ, s) ≥ d_m    (13)

where q* is the optimal inference attack against p*.

4.2 Differential Privacy Constraint

Let ε_m be the differential privacy budget associated with the minimum desired privacy of the user, and d_ε^m be the distinguishability threshold. The user's privacy is guaranteed if p* satisfies the following inequality.

p*(o|s) ≤ exp(ε_m) · p*(o|s′),  ∀o, ∀s, s′ : d_ε(s, s′) ≤ d_ε^m    (14)

Or, alternatively (following [15]'s definition of differential privacy):

p*(o|s) ≤ exp(ε_m · d_ε(s, s′)) · p*(o|s′),  ∀s, s′, o    (15)

In this paper, we mainly use the latter definition, but make use of the former one as the basis to reduce the computational cost of optimizing differential privacy (see Appendix C).

5 Solution: Privacy Games

The flow of information starts from the user, where the secret is generated. The user then selects a protection mechanism, and obfuscates her secret according to its probabilistic function. After the adversary observes the output, he can design an optimal inference attack against the obfuscation mechanism to invert it and estimate the secret. We assume the obfuscation mechanism is not oblivious and is known to the adversary. This gives the adversary the upper hand against the user in their conflict. So, designing an obfuscation mechanism against a fixed attack is always suboptimal. The best obfuscation mechanism is the one that anticipates the adversary's attack. Thus, the obfuscation mechanism should be primarily designed against an adaptive attack which is tailored to each specific obfuscation mechanism. So, by assuming that the adversary designs the best inference attack against each protection mechanism, the user's goal (as the defender) must be to design the obfuscation mechanism that maximizes her (privacy or utility) objective against an adversary that optimizes the conflicting objective of guessing the user's secret. The adversary is an entity assumed by the user as the entity whose objective is exactly the opposite of the user's. So, we do not model any particular attacker, but the one that minimizes the user's privacy according to distance functions d and d_ε. For each obfuscation mechanism there is an inference attack that optimizes the adversary's objective and leads to a certain privacy and utility payoff for the user. The optimal obfuscation mechanism for the user is the one that brings the maximum payoff for her against that mechanism's corresponding optimal inference attack. Enumerating all pairs of user-attacker mechanisms to find the optimal obfuscation function is infeasible. We model the joint user-adversary optimization problem as a leader-follower (Stackelberg) game between the user and the adversary. The user leads the game by choosing the protection mechanism p, and the adversary follows by designing the inference attack q. The solution to this game is the pair of user-adversary best-response strategies p* and q*, which are mutually optimal against each other. If the user implements p*, we have already considered the strongest attack q* against it. Thus, p* is robust against any algorithm used as an inference attack.

For any secret s ∈ S, the strategy space of the user is the set of observables O. For any observable o ∈ O, the strategy space of the adversary is the set of secrets (all possible adversary estimates ŝ ∈ S). For a given secret s ∈ S, we represent a mixed strategy for the user by a vector p(·|s) = (p(o_1|s), p(o_2|s), ..., p(o_m|s)), where {o_1, o_2, ..., o_m} = O. Similarly, a mixed strategy for the adversary, for a given observable o ∈ O, is a vector q(·|o) = (q(ŝ_1|o), q(ŝ_2|o), ..., q(ŝ_n|o)), where {ŝ_1, ŝ_2, ..., ŝ_n} = S. Note that the vectors p(·|s) and q(·|o) are, respectively, the conditional distribution functions associated with an obfuscation function for a secret s and an inference algorithm for an observable o. Let P and Q be the sets of all mixed strategies of the user and the adversary, respectively.

P = {p(·|s) = (p(o_1|s), p(o_2|s), ..., p(o_m|s)), ∀s ∈ S : p(o_i|s) ≥ 0 ∀o_i ∈ O, Σ_i p(o_i|s) = 1}    (16)

Q = {q(·|o) = (q(ŝ_1|o), q(ŝ_2|o), ..., q(ŝ_n|o)), ∀o ∈ O : q(ŝ_j|o) ≥ 0 ∀ŝ_j ∈ S, Σ_j q(ŝ_j|o) = 1}    (17)

A member vector of set P or Q with a 1 for the k-th component and zeros elsewhere is the pure strategy of choosing action k. For example, an obfuscation function p(·|s) for which p(o_i|s) = 0, ∀i ≠ k, and p(o_k|s) = 1 is the pure strategy of exclusively and deterministically outputting observable o_k for secret s. Thus, the set of pure strategies of a player is a subset of the mixed strategies of the player. In the case of the distortion privacy metric, the game needs to be formulated as a Bayesian Stackelberg game.
In this game, we assume the probability distribution π on the secrets, and we find p* ∈ P and q* ∈ Q that create the equilibrium point. If the user deviates from this strategy and chooses p ≠ p*, there would be an inference attack q against it such that (p, q) leads to a lower privacy for the user, i.e., p* is optimal. In the case of a differential privacy metric, as the metric is not dependent on the adversary's inference attack, the dependency loop between finding the optimal p and q is broken. Nevertheless, it is still the user who plays first by choosing the optimal protection mechanism. In the following sections, we solve these games and provide solutions on how to design the optimal user-adversary strategies.

6 Stackelberg Privacy Game

Assume that nature draws a secret s according to the probability distribution π(s). Given s, the user draws o according to her obfuscation mechanism p(o|s), and makes it observable to the adversary. Given observation o, the adversary draws ŝ according to his inference attack q(ŝ|o). We assume that π(s) is known to both players. We want to find the mutually optimal p*, q*: the solution of the Bayesian Stackelberg privacy game. To this end, we first design the optimal inference attack against any given protection mechanism p. This will be the best response of the adversary to the user's strategy. Then, we design the optimal protection mechanism for the user according to her objective and constraints, as stated in Section 4. This will be the user's best utility-maximizing strategy that anticipates the adversary's best response.

6.1 Optimal Inference Attack

The adversary's objective is to minimize (the user's privacy and thus) the inference error in estimating the user's secret. Given a secret s, the distance function d(ŝ, s) determines the error of an adversary in estimating the secret as ŝ. In fact, this distance is exactly what a user wants to maximize (or put a lower bound on) according to the distortion privacy metric.
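For finite secret sets, this best response can be sketched directly: for each observable o, the adversary picks the estimate ŝ minimizing the expected distance Σ_s π(s) p(o|s) d(ŝ, s), which is a deterministic inference. The construction below uses illustrative toy values, not the paper's data:

```python
# Adversary's best response to a known mechanism p, given prior pi and
# privacy distance d: for each observable o, choose the estimate s_hat
# minimizing sum_s pi(s) p(o|s) d(s_hat, s); then evaluate the user's
# expected distortion privacy (the adversary's expected error).
secrets = ["home", "work", "cafe"]
pi = {"home": 0.5, "work": 0.3, "cafe": 0.2}
p = {
    "home": {"home": 0.6, "work": 0.2, "cafe": 0.2},
    "work": {"home": 0.2, "work": 0.6, "cafe": 0.2},
    "cafe": {"home": 0.2, "work": 0.2, "cafe": 0.6},
}
d = lambda s_hat, s: 0.0 if s_hat == s else 1.0  # illustrative distance

def optimal_inference(pi, p, d, observables):
    """Map each observable to the adversary's best deterministic estimate."""
    attack = {}
    for o in observables:
        attack[o] = min(secrets,
                        key=lambda s_hat: sum(pi[s] * p[s][o] * d(s_hat, s)
                                              for s in secrets))
    return attack

def distortion_privacy(pi, p, d, attack):
    """User's expected privacy (eq.-(7)-style) under a deterministic attack."""
    return sum(pi[s] * p[s][o] * d(attack[o], s)
               for s in secrets for o in p[s])

attack = optimal_inference(pi, p, d, secrets)
print(attack)                          # best guess for each observable
print(distortion_privacy(pi, p, d, attack))
```

For this toy channel the best response is simply to take each observable at face value, leaving the user an expected 0/1-error (distortion privacy) of 0.4.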
We compute the expected error of the adversary as

Σ_{s,ŝ} π(s) Pr{ŝ|s} d(ŝ, s) = Σ_{s,o,ŝ} π(s) p(o|s) q(ŝ|o) d(ŝ, s)    (18)

Therefore, we design the following linear program, through which we can compute the adversary's inference strategy that, given the probability distribution π and the obfuscation p, minimizes his expected error with respect to the distance function d,

q* = argmin_q Σ_{s,o,ŝ} π(s) p(o|s) q(ŝ|o) d(ŝ, s)    (19a)

under the constraint that the solution is a proper conditional probability distribution function. In the next subsection, we will show that the optimal deterministic inference (which associates one single estimate with probability one to each observation) results in the same privacy for the user (33). Alternative ways to formulate this problem are given in Appendix A.

6.2 Optimal Protection Mechanism

In this case, we assume the user would like to minimize her utility cost (11) under a (lower bound) constraint on her privacy (13). Therefore, we can formulate the problem as

p* = argmin_p Σ_{s,o} π(s) p(o|s) c(o, s)    (20a)
s.t. Σ_{s,o,ŝ} π(s) p(o|s) q*(ŝ|o) d(ŝ, s) ≥ d_m    (20b)

However, solving this optimization problem requires us to know the optimal q* against p*, for which we need to know p* as formulated in (19). So, we have two linear programs (one for the user and one for the adversary) to solve. But the solution of each one is required in solving the other. This optimization dependency loop reflects the game-theoretic concept of mutual best response of the two players. This game is a nonzero-sum Stackelberg game, as the user (leader player) and adversary (follower player) have different optimization objectives (one maximizes utility, and the other minimizes privacy). We break the dependency loop between the optimization problems using the game-theoretic modeling, and we prove that the user's best strategy can be constructed using linear programming.

Theorem 1. Given a probability distribution π, the distance functions d and c, and the threshold d_m, the solution to the following linear program is the optimal protection strategy p* for the user, which is the solution to (20) with respect to the adversary's best response (19).

p* = argmin_p Σ_{s,o} π(s) p(o|s) c(o, s)    (21a)
s.t. Σ_s π(s) p(o|s) d(ŝ, s) ≥ x(o),  ∀o, ŝ    (21b)
Σ_o x(o) ≥ d_m    (21c)

Proof. See Appendix B.

7 Optimal Differential Privacy

In this section, we design optimal differentially private protection mechanisms. We solve the optimization problem for maximizing utility under privacy constraints.
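The program (21a)-(21c) of Theorem 1 above is an ordinary linear program and can be handed to an off-the-shelf solver. Below is a sketch for a toy two-secret instance using scipy.optimize.linprog, which is an assumed tool choice, not the paper's implementation; the variables are the entries p(o_j|s_i) followed by the auxiliary x(o_j):

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: two secrets, observables O = S, 0/1 distances for both
# cost c and privacy d, prior pi, and distortion threshold d_m = 0.3.
pi = np.array([0.6, 0.4])
n = len(pi)                  # secrets s_0, s_1; observables o_0, o_1
cost = 1.0 - np.eye(n)       # c(o_j, s_i): 0 on the diagonal, 1 otherwise
dist = 1.0 - np.eye(n)       # d(s_hat_k, s_i)
d_m = 0.3

nv = n * n + n               # variables: all p(o_j|s_i), then all x(o_j)
P = lambda i, j: i * n + j   # index of p(o_j|s_i)
X = lambda j: n * n + j      # index of x(o_j)

obj = np.zeros(nv)           # objective (21a): sum pi_i p(o_j|s_i) c(o_j, s_i)
for i in range(n):
    for j in range(n):
        obj[P(i, j)] = pi[i] * cost[j, i]

A_ub, b_ub = [], []
for j in range(n):           # (21b): x(o_j) <= sum_i pi_i p(o_j|s_i) d(k, i)
    for k in range(n):
        row = np.zeros(nv)
        for i in range(n):
            row[P(i, j)] = -pi[i] * dist[k, i]
        row[X(j)] = 1.0
        A_ub.append(row)
        b_ub.append(0.0)
row = np.zeros(nv)           # (21c): sum_j x(o_j) >= d_m
for j in range(n):
    row[X(j)] = -1.0
A_ub.append(row)
b_ub.append(-d_m)

A_eq = np.zeros((n, nv))     # each p(.|s_i) must be a distribution
for i in range(n):
    for j in range(n):
        A_eq[i, P(i, j)] = 1.0
b_eq = np.ones(n)

res = linprog(obj, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * nv,
              method="highs")
print(res.fun)  # minimum expected utility cost meeting the privacy bound
```

With 0/1 distances and prior (0.6, 0.4), the constraint d_m = 0.3 is met at minimum expected cost 0.3. Appending the inequalities (15) as extra rows, p(o|s) - exp(ε_m d_ε(s, s′)) p(o|s′) ≤ 0, turns the same program into the joint differential-distortion mechanism of Section 8.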
We design the following linear program to find the user's strategy p* that guarantees the user differential privacy (15), for a maximum privacy budget ɛ_m, and minimizes the utility cost (11) of the obfuscation mechanism.

min_p Σ_{s,o} π(s) p(o|s) c(o, s)   (22a)
s.t. p(o|s) ≤ p(o|s') exp(ɛ_m d_ɛ(s, s')), ∀s, s', o   (22b)

Or, alternatively, for a distinguishability bound d_ɛ^m, we can solve the following.

min_p Σ_{s,o} π(s) p(o|s) c(o, s)   (23a)
s.t. p(o|s) ≤ p(o|s') exp(ɛ_m), ∀o, ∀s, s' : d_ɛ(s, s') ≤ d_ɛ^m   (23b)

8 Optimal Joint Differential and Distortion Privacy Mechanisms

Obfuscation mechanisms designed based on distortion and differential privacy protect the user's privacy from two different angles. In general, for arbitrary d and d_ɛ, there is no guarantee that a mechanism with a bound on one metric holds a bound on the other. Distortion privacy metrics reflect the absolute privacy of the user, based on the posterior estimation on the obfuscated information. Differential privacy metrics reflect the relative information leakage of each observation about the secret. However, differential privacy is not a measure of the extent to which the observer, who already has some knowledge about the secret from the previously shared data, can guess the secret correctly. So, the inference might be very accurate (because of the background knowledge) despite the fact that the obfuscation in place is a differentially private mechanism. As distortion and differential metrics guarantee different dimensions of the user's privacy requirements, we respect both in a protection mechanism. This assures not only that the information leakage is limited, but also that the absolute privacy level is at the minimum required level. Thanks to our unified formulation of privacy optimization problems as linear programs, the problem of

jointly optimizing and guaranteeing privacy with both metrics can also be formulated as a linear program. The solution to the following linear program is a protection mechanism p* that maximizes the user's utility and guarantees a minimum distortion privacy d_m and a minimum differential privacy ɛ_m, given the probability distribution π, the distance functions c and d, and the distinguishability metric d_ɛ. The value of the optimal solution is the utility cost of the optimal mechanism.

min_p Σ_{s,o} π(s) p(o|s) c(o, s)   (24a)
s.t. Σ_s π(s) p(o|s) d(ŝ, s) ≥ x(o), ∀o, ŝ   (24b)
Σ_o x(o) ≥ d_m   (24c)
p(o|s) ≤ p(o|s') exp(ɛ_m d_ɛ(s, s')), ∀s, s', o   (24d)

9 Analysis

We have implemented all our linear-program solutions in a software tool that can be used to process data for different applications, in different settings. In this section, we use our tool to design privacy protection mechanisms, and also to compare different optimal mechanisms, i.e., distortion, differential, and joint distortion-differential privacy-preserving mechanisms. We study the properties of these mechanisms, and we show how robust they are with respect to inference attack algorithms as well as to the adversary's knowledge of the secrets. We also investigate their utility cost for protecting privacy. Furthermore, we show that the optimal joint distortion-differential mechanisms are more robust than the two mechanisms separately. In Appendix C, we discuss and evaluate approximations of the optimal solutions for large numbers of constraints. We run experiments on location data, as today they are included in most data-sharing applications. We use a real dataset of location traces collected through the Nokia Lausanne Data Collection Campaign [33]. The location information belongs to a 15 × 8 km area. We split the area into cells. We consider the location of a mobile user in a cell as her secret. Hence, the set of secrets is equivalent to the set of location cells.
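The joint program (24) differs from (21) only by the extra linear inequalities (24d), so it drops into the same LP machinery. A toy sketch (scipy and all the numeric values are illustrative assumptions, not the paper's experimental setup):

```python
# Sketch of the joint program (24): distortion constraints (24b)-(24c)
# plus differential-privacy constraints (24d), on a toy 2x2 instance.
import numpy as np
from scipy.optimize import linprog

n, m = 2, 2
pi = np.array([0.5, 0.5])          # prior over secrets
d = 1.0 - np.eye(n)                # distortion distance d(shat, s)
cost = 1.0 - np.eye(m)             # utility cost c(o, s)
d_eps = 1.0 - np.eye(n)            # distinguishability metric d_eps(s, s')
d_m, eps_m = 0.3, np.log(3.0)      # privacy thresholds (assumed values)

nv = n * m + m                     # p(o|s) variables, then x(o) variables
P = lambda s, o: s * m + o

obj = np.zeros(nv)                 # (24a)
for s in range(n):
    for o in range(m):
        obj[P(s, o)] = pi[s] * cost[o, s]

A_ub, b_ub = [], []
for o in range(m):                 # (24b): x(o) <= sum_s pi(s) p(o|s) d(shat,s)
    for shat in range(n):
        row = np.zeros(nv)
        row[n * m + o] = 1.0
        for s in range(n):
            row[P(s, o)] = -pi[s] * d[shat, s]
        A_ub.append(row)
        b_ub.append(0.0)
row = np.zeros(nv)                 # (24c): sum_o x(o) >= d_m
row[n * m:] = -1.0
A_ub.append(row)
b_ub.append(-d_m)
for s in range(n):                 # (24d): p(o|s) <= e^{eps_m d_eps} p(o|s')
    for s2 in range(n):
        if s == s2:
            continue
        for o in range(m):
            row = np.zeros(nv)
            row[P(s, o)] = 1.0
            row[P(s2, o)] = -np.exp(eps_m * d_eps[s, s2])
            A_ub.append(row)
            b_ub.append(0.0)

A_eq = np.zeros((n, nv))           # each p(.|s) is a distribution
for s in range(n):
    A_eq[s, s * m:(s + 1) * m] = 1.0

res = linprog(obj, A_ub=np.array(A_ub), b_ub=b_ub,
              A_eq=A_eq, b_eq=np.ones(n),
              bounds=[(0, 1)] * (n * m) + [(0, None)] * m)
```

In this instance the differential constraint alone would force a cost of at least 0.25 and the distortion constraint alone at least 0.3; the joint optimum is 0.3, i.e., the maximum of the two, in line with the observation that the joint mechanism costs no more than the stricter of its two components.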
We assume the set of observables to be the set of cells, so the user obfuscates her location by perturbation (i.e., replacing her true location with any location on the map). We run our experiments on 10 randomly selected users, to see the differences in the results due to differences in the users' location distributions π, which stem from the users' different location access profiles. We build π for each user separately given their individual location traces, using maximum likelihood estimation (normalizing the user's number of visits to each cell in the traces). We assume a Euclidean distance function for d and d_ɛ. This reflects the sensitivity of the user towards her location. By using this distance function for distortion privacy, we guarantee that the adversary cannot guess the user's true location with an error lower than the required privacy threshold (d_m). Choosing the Euclidean distance function as the metric for distinguishability ensures that the indistinguishability between locations is larger for locations that are closer to each other. We assume a Hamming distortion function for c (i.e., the utility cost is 0 only if the user's location and the observed location are the same, otherwise the cost is 1). The utility metric can vary depending on the location-based sharing application and also the purpose for which the user shares her location [8]. Choosing the Hamming function reflects the utility requirements of users who want to inform others about their current location in location check-in applications. We evaluate utility-maximizing optimal protection mechanisms with three different privacy constraints:
– Distortion Privacy Protection, (21).
– Differential Privacy Protection, (22).
– Joint Distortion-Differential Privacy Protection, (24).
We compare the effectiveness of these protection mechanisms against inference attacks by using the distortion privacy metric (7). We consider two inference attacks:
– Optimal Attack, (19).
– Bayesian Inference Attack, using the Bayes rule:

q(s|o) = π(s) p(o|s) / Pr{o} = π(s) p(o|s) / Σ_{s'} π(s') p(o|s')   (25)

9.1 Comparing Obfuscation Mechanisms

Scenario 1.
Our first goal is to have a fair comparison between optimal distortion privacy mechanisms and optimal differential privacy mechanisms. To this end, we set the privacy parameter ɛ_m to {0.15, 0.3, …, 0.9}. For each user and each value of ɛ_m:
1. We compute the optimal differential privacy mechanism using (22). Let p_{ɛ_m} be the optimal mechanism.
2. We run the optimal attack (19) on p_{ɛ_m}, and compute the user's absolute distortion privacy as AP(p_{ɛ_m}).

Fig. 2. Privacy and utility of optimal protection mechanisms, computed based on evaluation scenario 1 in Section 9.1. (a) Achieved distortion privacy for an optimal differential privacy mechanism with ɛ_m; each line corresponds to one user. (b) Utility cost metric versus distortion privacy metric, for three different optimal obfuscation mechanisms; each dot in the plot corresponds to the privacy of one user for one value of ɛ_m.

3. We compute the optimal distortion privacy mechanism p_{d_m} using (21). For this, we set the privacy lower bound d_m to AP(p_{ɛ_m}). This forces the distortion privacy mechanism to guarantee what the differential privacy mechanism provides.
4. We compute the optimal joint distortion-differential privacy mechanism p_{ɛ_m,d_m} using (24). We set the privacy lower bounds to ɛ_m and d_m for the differential and distortion constraints, respectively.
5. We run the optimal attack (19) on both p_{d_m} and p_{ɛ_m,d_m}, and compute the user's absolute distortion privacy as AP(p_{d_m}) and AP(p_{ɛ_m,d_m}), respectively.
6. As a baseline for comparison, we run the Bayesian inference attack (25) on the three optimal mechanisms p_{ɛ_m}, p_{d_m}, and p_{ɛ_m,d_m}.

Figure 2 shows the results of our analysis, explained above. Distortion privacy is measured in km and is equivalent to the expected error of the adversary in correctly estimating the locations of users. Figure 2(a) shows how the expected privacy of users AP(p_{ɛ_m}) decreases as we increase the value of the lower bound on differential privacy ɛ_m. Users have different secret probability distributions, with different randomness. However, as ɛ_m increases, the expected error of the adversary (the location privacy of users) converges down to below 1 km. Figure 2(b) plots the utility cost versus the distortion privacy of each optimal protection mechanism.
As we have set the privacy bound of the optimal distortion mechanism (and of course the optimal joint mechanism) to the privacy achieved by the optimal differential mechanism, we can make a fair comparison between their utility costs. We observe that the utility cost for achieving some level of distortion privacy is much higher for the optimal differential and joint mechanisms compared with the optimal distortion mechanism. Note that the utility costs of the differential and joint mechanisms are the same. So, the distortion privacy bound does not impose more cost than what is already imposed by the differential privacy mechanism. As we set d_m to AP(p_{ɛ_m}), the user's distortion privacy when using the optimal distortion and optimal differential mechanisms is the same, when we confront them with the optimal attack (19). In Figure 3, however, we compare the effectiveness of these two mechanisms against the Bayesian inference attack (25). It is interesting to observe that the optimal differential mechanism is more robust to such an attack compared to the optimal distortion mechanism. This explains the extra utility cost of the optimal differential mechanism. In Figure 4, we compare the effectiveness of the Bayesian inference attack (25) and the optimal attack (19). We show the results for all three optimal protection mechanisms. It is clear that the optimal attack outperforms the Bayesian attack, as users have a relatively higher privacy level under the Bayesian inference. However, the difference is more obvious in the case of the differential and joint protection mechanisms. The Bayesian attack overestimates the user's privacy, as it ignores the distance function d, whereas the optimal attack minimizes the expected value of d over all secrets and estimates.
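This gap is easy to reproduce numerically. The sketch below, on a hypothetical 3-cell example (the prior, obfuscation matrix, and Hamming distance are all illustrative assumptions), computes the expected error of the Bayesian attack (25) and of the deterministic optimal attack that picks argmin_ŝ Σ_s π(s) p(o|s) d(ŝ, s) for each observation:

```python
# Compare the Bayesian attack (25) with the optimal attack (19)
# on a toy 3-cell instance (all numbers are hypothetical).
import numpy as np

pi = np.array([0.6, 0.3, 0.1])            # prior over 3 location cells
p = np.array([[0.8, 0.1, 0.1],            # p[s, o] = p(o | s)
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
d = 1.0 - np.eye(3)                       # Hamming distance d(shat, s)

joint = pi[:, None] * p                   # joint[s, o] = pi(s) p(o|s)
post = joint / joint.sum(axis=0)          # Bayes posterior q(s|o), eq. (25)

# Expected error: sum_{s,o,shat} pi(s) p(o|s) q(shat|o) d(shat, s)
loss = d @ joint                          # loss[shat, o]
bayes_err = (post * loss).sum()           # posterior-weighted guessing
opt_err = loss.min(axis=0).sum()          # deterministic argmin guessing
```

The optimal attack's error is never larger than the Bayesian attack's: for each observation it replaces a posterior-weighted average of the losses by their minimum.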

Fig. 3. Distortion privacy of users against the Bayesian inference attack (25) when using optimal differential privacy obfuscation versus using optimal distortion privacy obfuscation. Each dot represents the privacy of one user for one value of ɛ_m.

Fig. 4. Distortion privacy of users using any of the three optimal mechanisms against the basic Bayesian inference attack (25) versus their privacy against the optimal attack (19). Each dot represents the privacy of one user for one value of ɛ_m.

Scenario 2.

In this paper, we introduce the optimal joint distortion-differential protection mechanism to provide us with the benefits of both mechanisms. Figure 2(b) shows that the optimal joint mechanism is not more costly than the two optimal distortion and differential mechanisms. It also shows that it guarantees the highest privacy for a certain utility cost. To further study the effectiveness of the optimal joint mechanism, we run the following evaluation scenario. We design optimal differential mechanisms for some values of ɛ_m, and we design optimal distortion mechanisms for some values of d_m that are higher than the distortion privacy resulting from those differential privacy mechanisms. We also construct their joint mechanisms given the ɛ_m and d_m parameters. Figure 5 shows how the optimal joint mechanism adapts itself to guarantee the maximum of the privacy levels guaranteed by the optimal distortion and optimal differential mechanisms individually. This is clear from the fact that the users' privacy for the optimal joint mechanism is equal to their privacy for the distortion mechanisms (which, as we set in our scenario, is higher than that of the differential mechanisms).
Thus, by adding the distortion privacy constraint to the design of the optimal mechanism, we can further increase the privacy of users (with the same utility cost) beyond what can otherwise be achieved by only using differential mechanisms.

Scenario 3.

In order to further investigate the relation between the privacy (and utility) outcome of the optimal joint mechanism and that of the individual differential or distortion privacy mechanisms, we run the following set of experiments on all the available user profiles.
1. For any value of ɛ_m in {0.2, 0.4, …, 1}, we compute the utility of the optimal differential privacy mechanism as well as its privacy against the optimal attack.
2. For any value of d_m in {…, 1, …, d_m^max}, we compute the utility of the optimal distortion privacy mechanism as well as its privacy against the optimal attack. Here d_m^max depends on π and is the maximum value that the threshold can take (beyond which there is no solution to the optimization problem).
3. For any value of ɛ_m in {0.2, 0.4, …, 1}, and for any value of d_m in {…, 1, …, d_m^max}, we compute the utility and privacy of the optimal joint mechanism.
Figure 7 shows the results. By an experiment we refer to the comparison of the privacy (or utility) of a joint mechanism (with bounds ɛ_m, d_m) with the corresponding differential privacy mechanism (with bound ɛ_m) and the corresponding distortion privacy mechanism (with bound d_m). Note that here the thresholds ɛ_m and d_m are chosen independently, as opposed to scenario 1 (and also 2). We put the results of all the experiments next to each other on the x-axis. Therefore, any vertical cut in the Figure 7 plots contains three points for the privacy/utility of p_{ɛ_m,d_m}, p_{ɛ_m}, and p_{d_m}. To better visualize

Fig. 5. Users' privacy using the optimal differential or distortion protection mechanisms versus using the optimal joint protection mechanism. Distortion privacy is computed using the optimal attack (19).

the results, we have sorted all the experiments based on the privacy/utility of the joint mechanism. As the results show, the privacy achieved by the optimal joint mechanism is equal to the maximum privacy that each of the individual differential/distortion mechanisms provides separately. This means that the user would indeed benefit from including a distortion privacy constraint, based on her prior leakage, in the design criteria of the optimal obfuscation mechanism. This comes at no extra utility cost for the user, as the utility graph shows. In fact, the utility cost of an optimal joint mechanism is not additive and instead is the maximum of the costs of the two components, which is the differential privacy mechanism in all tested experiments. The reason behind this is that the differential privacy component makes the joint obfuscation mechanism robust to the case where the background knowledge of the adversary includes not only the prior leakage but also other auxiliary information available to him.

9.2 Evaluating the Effect of the Prior

When using distortion metrics in protecting privacy, we achieve optimal privacy given the user's estimated prior leakage, modeled by the probability distribution π over the secrets. In the optimal attack against the various protection mechanisms, a real adversary makes use of a prior distribution over the secrets. In this subsection, we evaluate to what extent a more informed adversary can harm the privacy of users beyond what is promised by the optimal protection mechanism. Note that no matter

Fig. 6. Users' privacy against the optimal attack using optimal differential protection versus using optimal distortion protection.
Each circle represents the privacy of a user for a different ɛ_m and for a different prior assumed in the attack. The red dots correspond to the case where the probability π assumed in designing the protection mechanism is the same as the attacker's knowledge.

what protection mechanism is used by the user, a more knowledgeable adversary will learn more about the secrets. In this section, our goal is not to show this obvious fact, but to evaluate how robust our mechanisms are with respect to adversaries with different knowledge accuracy levels. To perform this analysis, we consider a scenario in which the adversary's assumption on π, for each user, has a lower level of uncertainty compared to π. This can happen in the real world when an adversary obtains new evidence about a user's secrets that is not used by the user for computing π. Let π̂ be the other version of π assumed by the adversary, for a given user. For the sake of our analysis, we generate π̂ by providing the adversary with more evidence about the most frequently visited locations, e.g., home and work. This is equivalent to the scenario in which the adversary knows the user's significant locations, e.g., where the user lives and works. The entropy of π̂ is less than that of π, hence it contains more information about the user's mobility. We construct the protection mechanisms assuming π, and we attack them with the optimal inference attack, but assuming the lower-entropy prior π̂. Figure 6 illustrates the privacy of users for different assumptions of π̂, using optimal differential protection versus optimal distortion protection (assuming π). We observe that a more informed adversary has a lower expected error. However, it further shows that an optimal differential protection mechanism, compared to an optimal distortion mechanism, is more robust to knowledgeable adversaries.

Fig. 7. The comparison between the privacy and utility of the optimal joint mechanism p_{ɛ_m,d_m} and the individual protection mechanisms p_{ɛ_m} and p_{d_m}, i.e., the mechanisms whose bounds are jointly respected in the optimal joint mechanism. The three points on each vertical line represent the results of one such comparison experiment, for different values of the pair of privacy thresholds (ɛ_m, d_m) in {0.2, 0.4, …, 1} × {…, 1, …, d_m^max}.

Note that we set d_m to AP(p_{ɛ_m}), according to scenario 1 in Section 9.1. So, when π̂ = π, both optimal protection mechanisms guarantee the same level of privacy. However, as there is more information in π̂ than in π, more information can be inferred from the optimal distortion mechanism than from the differential mechanism.

10 Conclusion

We have solved the problem of designing optimal user-centric obfuscation mechanisms for data-sharing systems. We have proposed a novel methodology for designing such mechanisms against any adaptive inference attack, while maximizing the user's utility. We have proposed a generic framework for quantifying privacy and utility, using which we formalize the problem of maximizing users' utility under a lower-bound constraint on their privacy. The major novelty of the paper is to solve these optimization problems for both state-of-the-art distortion and differential privacy metrics, for the generic case of any distance function between the secrets. Being generic with respect to the distance function enables us to formalize any sensitivity function on any type of secret. We have also proposed a new privacy notion, joint distortion-differential privacy, and constructed its optimal mechanism, which has the strengths of both metrics. We have provided linear-program solutions for our optimization problems that provably achieve minimum utility loss under those privacy bounds.
Acknowledgements

We would like to thank the PC reviewers for their constructive feedback, and Kostas Chatzikokolakis for very useful discussions on this work.

References

[1] M. S. Alvim, M. E. Andrés, K. Chatzikokolakis, P. Degano, and C. Palamidessi. Differential privacy: on the trade-off between utility and information leakage. In Formal Aspects of Security and Trust. Springer.
[2] M. S. Alvim, M. E. Andrés, K. Chatzikokolakis, and C. Palamidessi. On the relation between differential privacy and quantitative information flow. In Automata, Languages and Programming. Springer.
[3] M. S. Alvim, M. E. Andrés, K. Chatzikokolakis, and C. Palamidessi. Quantitative information flow and applications to differential privacy. In Foundations of Security Analysis and Design VI.
[4] M. E. Andrés, N. E. Bordenabe, K. Chatzikokolakis, and C. Palamidessi. Geo-indistinguishability: Differential privacy for location-based systems. In Proceedings of the 2013 ACM SIGSAC Conference on Computer & Communications Security. ACM.
[5] M. Barreno, B. Nelson, R. Sears, A. D. Joseph, and J. Tygar. Can machine learning be secure? In Proceedings of the ACM Symposium on Information, Computer and Communications Security.
[6] G. Barthe, B. Köpf, F. Olmedo, and S. Zanella Béguelin. Probabilistic relational reasoning for differential privacy. ACM SIGPLAN Notices, 2012.

[7] J. O. Berger. Statistical decision theory and Bayesian analysis. Springer.
[8] I. Bilogrevic, K. Huguenin, S. Mihaila, R. Shokri, and J.-P. Hubaux. Predicting users' motivations behind location check-ins and utility implications of privacy protection mechanisms. In Network and Distributed System Security (NDSS) Symposium.
[9] N. E. Bordenabe, K. Chatzikokolakis, and C. Palamidessi. Optimal geo-indistinguishable mechanisms for location privacy. In Proceedings of the 16th ACM Conference on Computer and Communications Security.
[10] S. P. Boyd and L. Vandenberghe. Convex optimization. Cambridge University Press.
[11] H. Brenner and K. Nissim. Impossibility of differentially private universally optimal mechanisms. In IEEE Annual Symposium on Foundations of Computer Science (FOCS). IEEE.
[12] J. Brickell and V. Shmatikov. The cost of privacy: Destruction of data-mining utility in anonymized data publishing. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '08, pages 70–78, New York, NY, USA. ACM.
[13] M. Brückner and T. Scheffer. Stackelberg games for adversarial prediction problems. In 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2011).
[14] F. Brunton and H. Nissenbaum. Vernacular resistance to data collection and analysis: A political theory of obfuscation. First Monday, 16(5).
[15] K. Chatzikokolakis, M. E. Andrés, N. E. Bordenabe, and C. Palamidessi. Broadening the scope of differential privacy using metrics. In Privacy Enhancing Technologies. Springer.
[16] K. Chatzikokolakis, C. Palamidessi, and P. Panangaden. Anonymity protocols as noisy channels. Information and Computation, 206(2-4).
[17] K. Chatzikokolakis, C. Palamidessi, and M. Stronati. A predictive differentially-private mechanism for mobility traces. In Privacy Enhancing Technologies. Springer International Publishing.
[18] V. Conitzer and T. Sandholm. Computing the optimal strategy to commit to.
In Proceedings of the 7th ACM Conference on Electronic Commerce.
[19] G. Danezis and C. Troncoso. You cannot hide for long: de-anonymization of real-world dynamic behaviour. In Proceedings of the 12th ACM Workshop on Privacy in the Electronic Society. ACM.
[20] C. Diaz, S. Seys, J. Claessens, and B. Preneel. Towards measuring anonymity. In Privacy Enhancing Technologies. Springer Berlin Heidelberg.
[21] C. Dwork. Differential privacy. In Automata, Languages and Programming. Springer.
[22] C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography. Springer.
[23] V. F. Farias and B. Van Roy. Tetris: A study of randomized constraint sampling. In Probabilistic and Randomized Methods for Design Under Uncertainty.
[24] Q. Geng and P. Viswanath. The optimal mechanism in differential privacy. arXiv preprint.
[25] A. Ghosh, T. Roughgarden, and M. Sundararajan. Universally utility-maximizing privacy mechanisms. In Proceedings of the 41st Annual ACM Symposium on Theory of Computing. ACM.
[26] A. Ghosh, T. Roughgarden, and M. Sundararajan. Universally utility-maximizing privacy mechanisms. SIAM Journal on Computing, 41(6).
[27] M. Grötschel, L. Lovász, and A. Schrijver. The ellipsoid method and its consequences in combinatorial optimization. Combinatorica.
[28] M. Gupte and M. Sundararajan. Universally optimal privacy mechanisms for minimax agents. In Proceedings of the Twenty-Ninth ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems.
[29] X. He, A. Machanavajjhala, and B. Ding. Blowfish privacy: Tuning privacy-utility trade-offs using policies. In Proceedings of the 2014 ACM SIGMOD International Conference on Management of Data. ACM.
[30] L. Huang, A. D. Joseph, B. Nelson, B. I. Rubinstein, and J. Tygar. Adversarial machine learning. In Proceedings of the 4th ACM Workshop on Security and Artificial Intelligence.
[31] S. Ioannidis, A. Montanari, U. Weinsberg, S. Bhagat, N. Fawaz, and N. Taft.
Privacy tradeoffs in predictive analytics. arXiv preprint.
[32] D. Kifer and A. Machanavajjhala. No free lunch in data privacy. In Proceedings of the 2011 ACM SIGMOD International Conference on Management of Data. ACM.
[33] N. Kiukkonen, J. Blom, O. Dousse, D. Gatica-Perez, and J. Laurila. Towards rich mobile phone datasets: Lausanne data collection campaign. Proc. ICPS, Berlin.
[34] B. Köpf and D. Basin. An information-theoretic model for adaptive side-channel attacks. In Proceedings of the 14th ACM Conference on Computer and Communications Security.
[35] D. Korzhyk, Z. Yin, C. Kiekintveld, V. Conitzer, and M. Tambe. Stackelberg vs. Nash in security games: An extended investigation of interchangeability, equivalence, and uniqueness. Journal of Artificial Intelligence Research, 41.
[36] C. Li, M. Hay, V. Rastogi, G. Miklau, and A. McGregor. Optimizing linear counting queries under differential privacy. In Proceedings of the Twenty-Ninth ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems. ACM.
[37] W. Liu and S. Chawla. A game theoretical model for adversarial learning. In IEEE International Conference on Data Mining Workshops (ICDM 2009).
[38] D. J. MacKay. Information theory, inference and learning algorithms. Cambridge University Press.
[39] M. Manshaei, Q. Zhu, T. Alpcan, T. Başar, and J.-P. Hubaux. Game theory meets network security and privacy. ACM Computing Surveys, 45(3).
[40] P. Mardziel, M. S. Alvim, M. Hicks, and M. R. Clarkson. Quantifying information flow for dynamic secrets. In IEEE Symposium on Security and Privacy.
[41] M. S. Alvim, K. Chatzikokolakis, C. Palamidessi, and G. Smith. Measuring information leakage using generalized gain functions. In IEEE 25th Computer Security

Foundations Symposium.
[42] R. T. Marler and J. S. Arora. Survey of multi-objective optimization methods for engineering. Structural and Multidisciplinary Optimization, 26(6).
[43] K. Micinski, P. Phelps, and J. S. Foster. An empirical study of location truncation on Android. Weather, 2:21.
[44] K. Miettinen. Nonlinear multiobjective optimization, volume 12. Springer.
[45] Y. E. Nesterov and A. Nemirovskii. Interior point polynomial methods in convex programming: Theory and algorithms. SIAM Publications. SIAM, Philadelphia, USA.
[46] K. Nissim, S. Raskhodnikova, and A. Smith. Smooth sensitivity and sampling in private data analysis. In Proceedings of the Thirty-Ninth Annual ACM Symposium on Theory of Computing. ACM.
[47] V. Pareto. Manuale di economia politica, volume 13. Societa Editrice.
[48] P. Paruchuri, J. P. Pearce, J. Marecki, M. Tambe, F. Ordóñez, and S. Kraus. Efficient algorithms to solve Bayesian Stackelberg games for security applications. In Conference on Artificial Intelligence.
[49] J. Reed and B. C. Pierce. Distance makes the types grow stronger: a calculus for differential privacy. ACM SIGPLAN Notices.
[50] A. Serjantov and G. Danezis. Towards an information theoretic metric for anonymity. In Privacy Enhancing Technologies. Springer Berlin Heidelberg.
[51] R. Shokri, G. Theodorakopoulos, J.-Y. Le Boudec, and J.-P. Hubaux. Quantifying location privacy. In Proceedings of the IEEE Symposium on Security and Privacy.
[52] R. Shokri, G. Theodorakopoulos, C. Troncoso, J.-P. Hubaux, and J.-Y. Le Boudec. Protecting location privacy: optimal strategy against localization attacks. In Proceedings of the ACM Conference on Computer and Communications Security.
[53] G. Theodorakopoulos, R. Shokri, C. Troncoso, J.-P. Hubaux, and J.-Y. Le Boudec. Prolonging the hide-and-seek game: Optimal trajectory privacy for location-based services. In ACM Workshop on Privacy in the Electronic Society (WPES 2014).
[54] C. Troncoso and G. Danezis.
The Bayesian traffic analysis of mix networks. In Proceedings of the 16th ACM Conference on Computer and Communications Security.
[55] L. Zadeh. Optimality and non-scalar-valued performance criteria. IEEE Transactions on Automatic Control, 8.

A Optimal Inference Attacks

Given the user's protection mechanism p*, the inference attack (19) is a valid strategy for the adversary, as there is no dependency between the defender's and attacker's strategies in the case of the differential privacy metric. However, as the differential privacy metric (used in the protection mechanism) does not include any probability distribution on the secrets, we can design an inference attack whose objective is to minimize the conditional expected error E_s:

E_s = Σ_{o,ŝ} p*(o|s) q(ŝ|o) d(ŝ, s)   (26)

for all secrets s. This is a multi-objective optimization problem [42] that does not prefer any of the E_s (for any secret s) to another. Under no such preference, the objective is to minimize Σ_s E_s, using the weighted sum method with equal weights for each secret. Thus, the following linear program constitutes the optimal inference attack, under the mentioned assumptions.

min_q Σ_{s,o,ŝ} p*(o|s) q(ŝ|o) d(ŝ, s)   (27)

As all the weights of the E_s are positive (= 1), the minimum of (27) is Pareto optimal [55]. Thus, minimizing (27) is sufficient for Pareto optimality. The optimal point in a multi-objective optimization (as in our case) is Pareto optimal if there is no other point that improves at least one objective function without detriment to another [42, 47]. An alternative approach is to use the min-max formulation, and minimize the maximum conditional expected error E_s over all secrets. For this, we introduce a new unknown parameter y (which will be the maximum E_s). The following linear program solves the optimal inference attack using the min-max formulation. This also provides a necessary condition for Pareto optimality [44].

min_q y   (28a)
s.t. Σ_{o,ŝ} p*(o|s) q(ŝ|o) d(ŝ, s) ≤ y, ∀s   (28b)

We can also consider the expected error conditioned on both secrets and estimates as the adversary's objective to minimize.
So, we can use E_{s,ŝ} = Pr{ŝ | s} d(ŝ, s) instead of Σ_ŝ Pr{ŝ | s} d(ŝ, s) in (26), and use the same approach as in (28). The following linear program finds the optimal inference attack that minimizes the conditional expected estimation error over all s and ŝ, using the min-max formulation.

min_q y   (29a)
s.t. Σ_o p*(o|s) q(ŝ|o) d(ŝ, s) ≤ y, ∀s, ŝ   (29b)

Overall, we prefer the linear program (27), as it has the smallest number of constraints among the above three. We can also use (19) for the comparison of optimal protection mechanisms based on distortion and differential metrics.
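The min-max program (28) is as easy to set up as (27); the variables are the q(ŝ|o) entries plus the scalar y. A toy sketch (scipy and the example mechanism are assumptions for illustration):

```python
# Sketch of the min-max attack (28): minimize y subject to E_s <= y.
import numpy as np
from scipy.optimize import linprog

n, m = 2, 2
p = np.array([[0.7, 0.3],          # p[s, o] = p*(o | s), a toy mechanism
              [0.3, 0.7]])
d = 1.0 - np.eye(n)                # d(shat, s)

nv = m * n + 1                     # q(shat|o) variables, then y
Q = lambda o, shat: o * n + shat
obj = np.zeros(nv)
obj[-1] = 1.0                      # objective (28a): minimize y

A_ub, b_ub = [], []
for s in range(n):                 # (28b): E_s - y <= 0 for every secret s
    row = np.zeros(nv)
    row[-1] = -1.0
    for o in range(m):
        for shat in range(n):
            row[Q(o, shat)] = p[s, o] * d[shat, s]
    A_ub.append(row)
    b_ub.append(0.0)

A_eq = np.zeros((m, nv))           # each q(.|o) is a distribution
for o in range(m):
    A_eq[o, o * n:(o + 1) * n] = 1.0

res = linprog(obj, A_ub=np.array(A_ub), b_ub=b_ub,
              A_eq=A_eq, b_eq=np.ones(m),
              bounds=[(0, 1)] * (m * n) + [(None, None)])
```

For this symmetric mechanism the optimal worst-case conditional error is 0.3, attained by the deterministic attack that maps each observable to its most likely secret.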

B Proof of Theorem 1

Proof. We construct (21) from (20). In (20), we condition the optimal obfuscation p* on its corresponding optimal inference (best response) attack q*. So, for any observable o, the inference strategy q*(·|o) is the one that, by definition of the best response, minimizes the expected error

Σ_ŝ q(ŝ|o) Σ_s π(s) p(o|s) d(ŝ, s)   (30)

Thus, the privacy value (20b) to be guaranteed is

Σ_{s,o,ŝ} π(s) p(o|s) q*(ŝ|o) d(ŝ, s) = Σ_o min_{q(·|o)} Σ_ŝ q(ŝ|o) Σ_s π(s) p(o|s) d(ŝ, s)   (31)

Note that (30) is an average of Σ_s π(s) p(o|s) d(ŝ, s) over ŝ, and thus it must be larger than or equal to the smallest value of this quantity for a particular ŝ:

min_{q(·|o)} Σ_ŝ q(ŝ|o) Σ_s π(s) p(o|s) d(ŝ, s) ≥ min_ŝ Σ_s π(s) p(o|s) d(ŝ, s)   (32)

Let q̄(·|o) be a conditional probability distribution function such that for any given observable o,

q̄(ŝ|o) = 1 if ŝ = argmin_{ŝ'} Σ_s π(s) p(o|s) d(ŝ', s), and 0 otherwise   (33)

Note that q̄ ∈ Q is a pure strategy that represents one particular inference attack. Moreover, (31) constructs q* such that it optimizes (30) over the set of all mixed strategies Q, which includes all the pure strategies. The minimum value of the optimization over the set of all mixed strategies is clearly less than or equal to the minimum value of the optimization over its subset (the pure strategies). Thus, the following inequality holds.

Σ_ŝ q*(ŝ|o) Σ_s π(s) p(o|s) d(ŝ, s) = min_{q(·|o)} Σ_ŝ q(ŝ|o) Σ_s π(s) p(o|s) d(ŝ, s) ≤ Σ_ŝ q̄(ŝ|o) Σ_s π(s) p(o|s) d(ŝ, s) = min_ŝ Σ_s π(s) p(o|s) d(ŝ, s)   (34)

Therefore, from inequalities (32) and (34) we have

Σ_{s,o,ŝ} π(s) p(o|s) q*(ŝ|o) d(ŝ, s) = Σ_o min_ŝ Σ_s π(s) p(o|s) d(ŝ, s) = Σ_o x(o)   (35)

where x(o) = min_ŝ Σ_s π(s) p(o|s) d(ŝ, s), or equivalently x(o) ≤ Σ_s π(s) p(o|s) d(ŝ, s), ∀ŝ. Thus, the constraint (20b) in the linear program (20) is equivalent to (and can be replaced by) the constraints (21b) and (21c) in the linear program (21).

C Approximating the Optimal Mechanisms

Here, we briefly discuss the computational aspects of the design of optimal protection mechanisms.
Although the solutions to the linear programs provide us with the optimal protection mechanisms, their computation cost is quadratic (for distortion mechanisms) and cubic (for differential mechanisms) in the cardinality of the sets of secrets and observables. Providing privacy for a large set of secrets therefore needs a high computation budget. To establish a balance between the computation budget and the privacy requirements, we can make use of approximation techniques to design optimal protection mechanisms. We explore some possible approaches. Linear programming [10] is one of the fundamental areas of mathematics and computer science, and there is a variety of algorithms to solve a linear program. Surveying those algorithms and evaluating their efficiencies is out of the scope of this paper. These algorithms search the set of feasible solutions of a problem for the optimal solution that meets the constraints. Many of these algorithms are iterative, and they converge to the optimal solution as the number of iterations increases [27, 45]. Thus, a simple approximation method is to stop the iterative algorithm when our computation budget is over. Other approximation methods exist. For example, [23] suggests a sampling algorithm that selects a subset of constraints in an optimization problem to speed up the computation. Moreover, we can rely on the particular structure of the secrets to reduce the set of constraints [9]. We can implement those approximation techniques to solve approximately optimal protection mechanisms in an affordable time. Furthermore, we can rely on the definition of privacy to find the constraints that have a minor contribution to the design of the protection mechanism. In this section, we study one approximation

[Two boxplot panels: "Approximation Error in Distortion Privacy" and "Computation Time", each plotted against the approximation threshold.]

Fig. 8. Error (in km) and computation time (in sec) of the approximate optimal joint distortion-differential privacy protection mechanism, obtained by considering a subset of the optimization constraints. We consider only the constraints for which the distance (in km) between an observation and a secret, and the distance between two secrets, is less than the x-axis value. In the left-hand-side figure, the y-axis shows the distribution of the difference between the privacy of users with and without approximation. In the right-hand-side figure, the y-axis represents the total computation time of solving the linear program of the approximate optimal joint mechanism in Matlab on a machine with a 4-core Intel(R) Xeon(R) 2.40GHz CPU. The central mark in each box shows the median value, computed over all users. The boxplots also show the 25th and 75th percentiles as well as the outliers.

method, following the intuition behind the differential privacy bound: we remove the constraints for which the distance d(s, s') is larger than a threshold. We can justify this by observing that, in the definition of the differential privacy metric (10), privacy is more strongly protected when the distance d(s, s') between secrets s and s' is small. To put this in perspective, note that if we used the original definition of differential privacy, there would be no constraint at all when d(s, s') > 1. We also apply this approximation to the distances between observables and secrets. In Figure 8, we show the privacy loss of users as well as the speed-up of their computation due to the approximation. We performed the computation on a machine with a 4-core Intel(R) Xeon(R) 2.40GHz CPU. As we increase the approximation threshold (the distance beyond which we ignore the constraints), the approximation error goes to zero.
This suggests that, for a large set of secrets, if we choose a relatively small threshold, the approximate protection mechanism provides almost the same privacy level as the optimal solution. The computation time, however, increases as the approximation error decreases (due to the increasing approximation threshold). Figure 8 captures this tradeoff of our approximation method.
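The threshold-based pruning works because the number of pairwise constraints grows quadratically with the number of secrets, while only nearby pairs contribute materially to the differential privacy guarantee. A minimal sketch of how the constraint count shrinks with the threshold (the 50 random locations on a 1 km x 1 km area and the helper `pairwise_dist` are our own illustrative assumptions, not the paper's dataset):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: secrets are 50 locations on a 1 km x 1 km area.
secrets = rng.random((50, 2))

def pairwise_dist(a, b):
    """Euclidean distances between all rows of a and all rows of b (km)."""
    return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)

d = pairwise_dist(secrets, secrets)
total = d.size  # full set of (s, s') constraint pairs: quadratic in |secrets|

def kept_constraints(threshold):
    """Constraints surviving the approximation: keep a pair (s, s') only
    when d(s, s') is at most the threshold, as in the pruning above."""
    return int((d <= threshold).sum())

# Larger thresholds keep more constraints (slower solve, smaller error);
# smaller thresholds prune aggressively (faster solve, larger error).
fractions = {t: kept_constraints(t) / total for t in (0.1, 0.5, 1.5)}
```

In an actual implementation, the retained pairs would become rows of the linear program's constraint matrix; the fraction kept is a direct proxy for the solver's workload.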
