DATA MINING - 1DL360 Fall 2013
An introductory class in data mining
http://www.it.uu.se/edu/course/homepage/infoutv/per1ht13
Kjell Orsborn
Uppsala Database Laboratory
Department of Information Technology, Uppsala University, Uppsala, Sweden
10/12/13
Introduction to Data Mining: Privacy in Data Mining
(slides and selected papers)
Kjell Orsborn
Department of Information Technology, Uppsala University, Uppsala, Sweden
Privacy and security in data mining
Protecting private data is an important concern for society.
Several laws now require explicit consent prior to analysis of an individual's data.
However, its importance is not limited to individuals: corporations may also need to protect the privacy of their information, even though sharing it for analysis could benefit the company.
Clearly, the trade-off between sharing information for analysis and keeping it secret, to preserve corporate trade secrets and customer privacy, is a growing challenge.
Techniques for privacy and security
Most data mining applications operate under the assumption that all the data is available in a single central repository, called a data warehouse.
This poses a serious privacy problem, because breaching the security of a single repository exposes all the data.
A naive solution is de-identification: remove all identifying information from the data and release it. However, pinpointing exactly what constitutes identifying information is difficult. Worse, even if de-identification is possible and (legally) acceptable, it is extremely hard to do effectively without losing the data's utility: studies have used externally available public information to re-identify anonymized data, and showed that effective anonymization required removing substantial detail.
Another solution is to avoid centralized warehouses. This requires specialized distributed data mining algorithms, e.g. secure multi-party computation; accurate methods have been shown for classification and association analysis.
A third approach is data transformation and perturbation, i.e. modifying the data so that it no longer represents real individuals.
Privacy-preserving techniques in data mining
Most methods use some form of data transformation to achieve privacy preservation.
Typically, these methods reduce the granularity of representation in order to reduce the privacy risk.
Randomization techniques: introduce noise.
Group-based anonymization, e.g. k-anonymity: prohibits too detailed queries.
Distributed privacy preservation: prohibits distribution of individual data while supporting aggregate results.
Downgrading application effectiveness: results such as association rules and classifiers might violate privacy, and can be restricted by association rule hiding, classifier downgrading, and query auditing.
Privacy-preserving techniques in data mining
Randomization techniques:
Additive perturbation techniques introduce noise, e.g. drawn from statistical distributions.
Can be attacked by analyzing the correlation structure of the randomized data.
Can also be attacked by matching the distribution of the randomized data against the distribution of known public information.
Multiplicative perturbation techniques: e.g. applying multidimensional projections to reduce the dimensionality of the data.
Data swapping: values of different records are swapped while correct aggregate values can still be computed.
The randomization approach is well suited for privacy preservation in data stream mining, since the added noise is independent of the rest of the data.
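A minimal sketch of additive perturbation, assuming zero-mean Gaussian noise (the column values and noise level below are illustrative, not taken from the slides):

```python
import random

def perturb(values, sigma=5.0, seed=None):
    """Additive perturbation: add independent zero-mean Gaussian noise
    to each value, hiding individual entries."""
    rng = random.Random(seed)
    return [v + rng.gauss(0, sigma) for v in values]

# Individual values are distorted, but aggregates such as the mean are
# roughly preserved, since the noise averages out to zero.
ages = [23, 35, 41, 52, 29, 60, 47, 38]
noisy_ages = perturb(ages, sigma=5.0, seed=42)
```

Note that the noise is drawn independently per value, which is exactly the property that makes this approach attackable via the correlation structure of the original data.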
Privacy-preserving techniques in data mining
Group-based anonymization techniques:
k-anonymity: generalization and/or suppression of attributes to avoid identification of individual data. Each release of the data must be such that every combination of values of quasi-identifiers (indirect identifiers) can be indistinguishably matched to at least k respondents.
l-diversity: in addition to k-anonymity, focuses on maintaining the diversity of sensitive attributes.
t-closeness: a further enhancement to deal with, e.g., skewed data sets.
Potential problems with sequential releases: several releases of the data might reveal more details, so linking successive releases must be prevented.
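The k-anonymity condition above can be sketched as follows. The generalization rules (age bands, ZIP-code prefixes) and the sample records are illustrative assumptions, not from the slides:

```python
from collections import Counter

def generalize(age, zipcode):
    """Coarsen quasi-identifiers: age to a 10-year band, ZIP code to a
    3-digit prefix. (Hypothetical generalization rules for illustration.)"""
    band = (age // 10) * 10
    return (f"{band}-{band + 9}", zipcode[:3] + "**")

def is_k_anonymous(records, k):
    """True if every generalized quasi-identifier combination can be
    matched to at least k respondents."""
    counts = Counter(generalize(age, zc) for age, zc in records)
    return all(c >= k for c in counts.values())

people = [(34, "75221"), (37, "75226"), (31, "75229"),
          (52, "75105"), (55, "75107"), (58, "75109")]
# Generalization yields two groups of three indistinguishable records each,
# so this release is 3-anonymous but not 4-anonymous.
```

In practice the hard part is choosing generalizations that reach the desired k while losing as little utility as possible.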
Privacy-preserving techniques in data mining
Distributed privacy preservation:
Horizontal partitioning (see example on the following pages).
Vertical partitioning (see example on the following pages).
Distributed algorithms for aggregate operations (see example on the following pages).
Distributed algorithms for k-anonymity.
Adversary models: semi-honest adversaries and malicious adversaries.
Distributed data mining
The way the data is distributed also plays an important role in defining the problem, because data can be partitioned into many parts either vertically or horizontally.
Vertical partitioning of data implies that although different sites gather information about the same set of entities, they collect different feature sets. Banks, for example, collect financial transaction information, whereas the IRS collects tax information.
Figure 2 illustrates vertical partitioning and the kind of useful knowledge we can extract from it. The figure describes two databases, one containing individual medical records and another containing cell-phone information for the same set of people. Mining the joint global database might reveal such information as "cell phones with Li-ion batteries can lead to brain tumors in diabetics".
Distributed data mining
In horizontal partitioning, different sites collect the same set of information but about different entities. Different supermarkets, for example, collect the same type of grocery shopping data.
Figure 3 illustrates horizontal partitioning and shows the credit-card databases of two different (local) credit unions. Taken together, we might see that fraudulent customers often have similar transaction histories. However, no credit union has sufficient data by itself to discover the patterns of fraudulent behavior.
Secure distributed computation
The secure sum protocol is a simple example of (information-theoretically) secure multi-party computation.
The first site generates a random number R uniformly chosen from [0..n), adds it to its local value x_1, and sends the sum (R + x_1) mod n to the next site. Each site k in turn adds its local value x_k and forwards the running total (mod n) to site k+1 (mod l, for l sites). When the total returns to the first site, it subtracts R to recover the true sum; no site learns any other site's individual value.
A drawback of SMC is the inefficiency and complexity of the model.
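The secure sum protocol above can be sketched in a few lines. This is a single-process simulation of the message passing, assuming non-colluding sites and a modulus n larger than the true sum:

```python
import random

def secure_sum(local_values, n):
    """Secure-sum sketch: site 1 masks its value with a random R; each
    subsequent site adds its own value mod n and forwards the running
    total; site 1 finally subtracts R to recover the true sum.
    Assumes the sites do not collude and the true sum is < n."""
    R = random.randrange(n)                  # known only to site 1
    running = (R + local_values[0]) % n      # site 1 sends this to site 2
    for x in local_values[1:]:               # each site adds and forwards
        running = (running + x) % n
    return (running - R) % n                 # site 1 removes its mask

# Four sites jointly compute 12 + 7 + 30 + 5 = 54 without any site
# revealing its local value to the others.
total = secure_sum([12, 7, 30, 5], n=1000)
```

Since every intermediate message is uniformly distributed mod n, an individual site observing the running total learns nothing about the other sites' values.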
Privacy-preserving techniques in data mining
Privacy preservation of application results (related to disclosure control in statistical databases):
Association rule hiding: distortion or blocking.
Downgrading classifier effectiveness: modifying the data so that classification accuracy is reduced while the utility of the data for other applications is retained.
Query auditing and inference control:
Query auditing: denies one or more queries from a sequence of queries.
Query inference control: the underlying data (or the query result) is perturbed so that privacy is preserved.
See the slides on statistical data security.
Statistical database security
Databases often include sensitive information about individuals that must be protected from unauthorized use. At the same time, statistical information should remain extractable from the database.
Statistical database security must prohibit access to individual data elements.
Three main security mechanisms: conceptual, restriction-based, and perturbation-based.
Examples:
prohibit queries at the attribute level; allow only queries for statistical aggregation (statistical queries)
prohibit statistical queries when the selection from the population is too small
prohibit repeated statistical queries on the same tuples
introduce distortion into the data
Security in statistical databases
Statistical database security (also called inference control) should prevent the possibility of inferring protected information from the set of allowed, fully legitimate statistical queries (statistical aggregation).
A security problem occurs when statistical information must be provided without releasing sensitive information concerning individuals.
The main problem in SDB security is to achieve a good compromise between the privacy of individuals and organizations' need for knowledge, information management, and analysis.
Inference protection techniques
One can divide inference protection techniques into three main categories: conceptual, restriction-based, and perturbation-based techniques.
Conceptual techniques treat the security problem on a conceptual level:
lattice model
conceptual partitioning
Inference protection techniques
Restriction-based techniques prevent certain types of statistical queries:
query-set size control
expanded query-set size control
query-set overlap control
audit-based control
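Query-set size control, the first item above, can be sketched as follows. The threshold, table, and predicates are illustrative assumptions:

```python
def answer_count(db, predicate, min_size=2):
    """Query-set size control: deny a statistical query if its query set
    (or the complement of the query set) is smaller than min_size, since
    such answers can single out individuals. Returns None when denied."""
    qset = [r for r in db if predicate(r)]
    if len(qset) < min_size or len(db) - len(qset) < min_size:
        return None  # query denied
    return len(qset)

salaries = [("alice", 48000), ("bob", 51000), ("carol", 55000),
            ("dave", 62000), ("erin", 70000)]
answer_count(salaries, lambda r: r[0] == "alice")  # denied: isolates one person
answer_count(salaries, lambda r: r[1] > 55000)     # allowed: 2 matching records
```

Checking the complement matters because a query matching all but one record reveals that remaining individual just as surely as a query matching only them. Note that size control alone is known to be insufficient: overlapping queries (trackers) can still leak individual values, which is why overlap control and audit-based control are listed as well.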
Inference protection techniques
Perturbation-based techniques modify the information that is stored or presented:
data swapping
random-sample queries
fixed perturbation
query-based perturbation
rounding (systematic, random, controlled)
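Data swapping, the first item above, can be sketched as a random permutation of one attribute across records. The table and the choice of swapped column are illustrative assumptions:

```python
import random

def swap_column(records, index, seed=0):
    """Data swapping sketch: randomly permute one attribute across records.
    Individual rows no longer describe real individuals, but any statistic
    computed on that column alone (sum, mean, histogram) is preserved
    exactly, since the multiset of values is unchanged."""
    rng = random.Random(seed)
    column = [r[index] for r in records]
    rng.shuffle(column)
    return [r[:index] + (v,) + r[index + 1:]
            for r, v in zip(records, column)]

rows = [("a", 100), ("b", 200), ("c", 300)]
swapped = swap_column(rows, 1)  # column sum is still 600 after swapping
```

The trade-off is that cross-attribute statistics (e.g. correlations between the swapped column and the rest of the record) are destroyed, which is precisely what hides the individuals.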
Privacy-preserving techniques in data mining
Limitations of privacy preservation:
The curse of dimensionality: many privacy-preserving algorithms have problems in high-dimensional spaces due to sparseness.
Applications of privacy-preserving data mining:
Medical databases: sensitive information about patients, family members, addresses, etc.
Bioterrorism: e.g. the need to compare a possible anthrax attack with data from outbreaks of common respiratory diseases.
Homeland security: the credential validation problem, identity theft, web camera and video surveillance, the watch list problem.
Genomic privacy: keeping DNA data private while making it available for analysis.