(Big) Data Anonymization Claude Castelluccia Inria, Privatics
2 BIG DATA: The Risks
Singling-out / Re-identification: the adversary (ADV) is able to identify the target's record in the published dataset from some known information.
Attribute inference: ADV can infer private/sensitive attributes from the released dataset, because of cross-attribute and cross-user correlations!
Example: a dataset reveals that all users who went to points A, B, C also went to D (for example, a hospital). If I know that the target was at A, B and C, I can then infer that the target was also at D!
Note: the target does not even have to be part of the published dataset (in that case this is a guess).
3 BIG DATA: The Risks of Re-identification: The AOL Case
Possible privacy breach examples: AOL, Netflix, ...
In 2006, AOL released 20 million search queries from its users, «anonymized» by removing the AOL id and the IP address. The data was easily de-anonymized in a couple of days by looking at the queries.
4 BIG DATA and PRIVACY
5 Why is it sensitive? Sample queries from one AOL user: «how to kill your wife», «pictures of dead people», «photo of dead people», «car crash photo»
6 BIG DATA: The Risks of Inference: The Target Case
Target identified about 25 products that, when analyzed together, allowed it to assign each shopper a pregnancy-prediction score. More importantly, it could also estimate her due date to within a small window, so Target could (and does) send coupons timed to very specific stages of her pregnancy.
Source: How Companies Learn Your Secrets, NYTimes, Feb. 2012
7 Data Anonymization / De-identification
Anonymization is NOT pseudonymization!
What is pseudonymization? Personal information contains identifiers, such as a name, date of birth, sex and address. When personal information is pseudonymized, the identifiers are replaced by a pseudonym. Pseudonymization is achieved, for instance, by encrypting the identifiers in the personal data.
What is anonymization? Data are anonymized if all identifying elements (all quasi-identifiers) have been eliminated from a set of personal data. No element may be left in the information which could, by exercising reasonable effort, serve to re-identify the person(s) concerned. Where data have been successfully anonymized, they are no longer personal data.
Source: Handbook on European data protection law
8 Pseudonymization vs. Anonymization (2)
Why does pseudonymization not work? It does not compose, i.e. several pseudonymized datasets can be combined to de-anonymize. External information can also be exploited: see the recent example with the NY taxi dataset, where 173 million individual trips were de-anonymized*.
Why is data anonymization difficult? It is difficult (impossible) to identify all quasi-identifiers!
*Source: On Taxis and Rainbows
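The taxi case failed because the pseudonyms were unsalted hashes of identifiers drawn from a tiny space. A minimal sketch of that failure mode (the toy medallion format and the use of MD5 are illustrative assumptions, not the exact taxi scheme):

```python
import hashlib
import itertools
import string

def pseudonymize(medallion: str) -> str:
    """«Pseudonymize» an identifier by replacing it with its MD5 digest."""
    return hashlib.md5(medallion.encode()).hexdigest()

def build_inversion_table(id_space):
    """Hash every possible identifier once; de-anonymization is then a lookup."""
    return {pseudonymize(m): m for m in id_space}

# Toy medallion format <digit><letter><2 digits>, e.g. "5X55": only
# 10 * 26 * 100 = 26,000 possibilities -- trivially enumerable.
id_space = [f"{d}{l}{n:02d}"
            for d, l, n in itertools.product("0123456789",
                                             string.ascii_uppercase,
                                             range(100))]
table = build_inversion_table(id_space)
print(table[pseudonymize("5X55")])  # the pseudonym is inverted: 5X55
```

Because the whole identifier space fits in memory, hashing gives no protection at all: the attacker precomputes every digest and inverts pseudonyms by dictionary lookup.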
9 Quasi-Identifiers
Quasi-identifiers are difficult to identify exhaustively: many combinations of attributes can be used to «single out» a user. We are all unique in different ways; we are full of quasi-identifiers. See «Unicity Me!»*: mobility patterns, web history, ...
Data (content) and meta-data (i.e. timing) can betray you! Google search timing patterns can tell when you were away!
*Unicity Me!, American Scientist (CompSciHayes.pdf)
10 Unique in the Crowd [Nature 2013]
Only 4 spatio-temporal points are necessary to uniquely identify a user with probability > 95%!
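The unicity experiment can be mimicked on synthetic data: draw a few points from a user's trace and count how many traces in the dataset contain them all. Everything below (trace sizes, grid, number of users) is made up for illustration; only the measurement procedure matches the idea of the study.

```python
import random

def unicity(traces, k, trials=200, seed=0):
    """Estimate the fraction of users whose trace is the ONLY one in the
    dataset that contains k spatio-temporal points drawn from it at random."""
    rng = random.Random(seed)
    unique = 0
    for _ in range(trials):
        trace = rng.choice(traces)
        points = set(rng.sample(sorted(trace), k))
        matches = sum(1 for t in traces if points <= t)
        unique += (matches == 1)
    return unique / trials

# Synthetic CDR-like traces: 500 users, 20 random (cell, hour) points each,
# over 1000 cells and 168 hours -- sparse, like real mobility data.
rng = random.Random(1)
traces = [{(rng.randrange(1000), rng.randrange(168)) for _ in range(20)}
          for _ in range(500)]
print(unicity(traces, k=4))  # sparse traces are almost always unique
```

Sparsity is the culprit: with 168,000 possible (cell, hour) points and only 20 per user, even 4 points almost never match anyone else's trace.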
11 Some data anonymization methods
Random perturbation: input perturbation, output perturbation.
Generalization: exploits the natural hierarchical structure of the data domain.
Suppression.
Permutation: destroys the link between identifying and sensitive attributes that could lead to a privacy leakage.
12 K-anonymity
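A table is k-anonymous when every combination of quasi-identifier values is shared by at least k rows, so no individual can be singled out through those attributes. A minimal checker (column names and the generalized values below are illustrative, echoing the 3-anonymous patient example used later in these slides):

```python
from collections import Counter

def is_k_anonymous(rows, quasi_identifiers, k):
    """True if every combination of quasi-identifier values appears
    in at least k rows of the table."""
    groups = Counter(tuple(row[a] for a in quasi_identifiers) for row in rows)
    return all(count >= k for count in groups.values())

table = [
    {"zipcode": "476**", "age": "2*", "disease": "Heart Disease"},
    {"zipcode": "476**", "age": "2*", "disease": "Heart Disease"},
    {"zipcode": "476**", "age": "2*", "disease": "Heart Disease"},
    {"zipcode": "4790*", "age": ">=40", "disease": "Flu"},
    {"zipcode": "4790*", "age": ">=40", "disease": "Heart Disease"},
    {"zipcode": "4790*", "age": ">=40", "disease": "Cancer"},
]
print(is_k_anonymous(table, ["zipcode", "age"], k=3))  # True
print(is_k_anonymous(table, ["zipcode", "age"], k=4))  # False
```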
13 But k-anonymity does not compose!
14 But k-anonymity does not compose!
15 Other Attacks on k-anonymity
k-anonymity does not provide privacy if the sensitive values in an equivalence class lack diversity, or if the attacker has background knowledge.
A 3-anonymous patient table:
Zipcode  Age   Disease
476**    2*    Heart Disease
476**    2*    Heart Disease
476**    2*    Heart Disease
4790*    ≥40   Flu
4790*    ≥40   Heart Disease
4790*    ≥40   Cancer
476**    3*    Heart Disease
476**    3*    Cancer
476**    3*    Cancer
Homogeneity attack: knowing Bob's zipcode and age places him in the 476**/2* class, where everyone has heart disease.
Background knowledge attack: knowing Carl's zipcode and age places him in a class; combined with outside knowledge about him, the remaining sensitive values can be narrowed down.
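The homogeneity attack above can be detected mechanically: an equivalence class whose sensitive values are all identical leaks them to anyone who can place a person in that class. A sketch of the simplest (distinct-value) diversity measure, using the slide's table values:

```python
from collections import defaultdict

def min_distinct_sensitive(rows, quasi_identifiers, sensitive):
    """Smallest number of distinct sensitive values in any equivalence
    class; 1 means some class is homogeneous (the attack succeeds)."""
    classes = defaultdict(set)
    for row in rows:
        classes[tuple(row[a] for a in quasi_identifiers)].add(row[sensitive])
    return min(len(values) for values in classes.values())

table = [
    {"zipcode": "476**", "age": "2*", "disease": "Heart Disease"},
    {"zipcode": "476**", "age": "2*", "disease": "Heart Disease"},
    {"zipcode": "476**", "age": "2*", "disease": "Heart Disease"},
    {"zipcode": "476**", "age": "3*", "disease": "Heart Disease"},
    {"zipcode": "476**", "age": "3*", "disease": "Cancer"},
    {"zipcode": "476**", "age": "3*", "disease": "Cancer"},
]
# The (476**, 2*) class is homogeneous: anyone known to be in it has
# heart disease, even though the table is 3-anonymous.
print(min_distinct_sensitive(table, ["zipcode", "age"], "disease"))  # 1
```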
16 The X-DATA Approach
Develop «safer» anonymization algorithms.
Develop a risk-analysis methodology for anonymized data.
Meet regularly and collaborate with the CNIL.
17 Toward a Secure Privacy Model: Differential Privacy
e^{-ε} ≤ Pr[M(D) = O] / Pr[M(D′) = O] ≤ e^{ε}, for any two datasets D, D′ differing in one individual's data and any output O.
Secure even with arbitrary external knowledge! Composes.
Intuition of DP: the output is (almost) independent of my data.
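For a counting query, the inequality above is satisfied by the Laplace mechanism: one person changes a count by at most 1 (sensitivity 1), so adding Laplace(1/ε) noise bounds the probability ratio by e^ε. A minimal sketch:

```python
import random

def laplace_count(true_count, epsilon, rng=random.Random()):
    """Release a count with Laplace(0, 1/epsilon) noise; for a
    sensitivity-1 count query this is epsilon-differentially private."""
    b = 1.0 / epsilon
    # Laplace(0, b) is the difference of two iid scaled exponential draws.
    return true_count + b * (rng.expovariate(1.0) - rng.expovariate(1.0))

# Smaller epsilon means stronger privacy and more noise.
rng = random.Random(0)
print(laplace_count(1240, epsilon=1.0, rng=rng))
print(laplace_count(1240, epsilon=0.1, rng=rng))
```

Note the trade-off made explicit by the scale b = 1/ε: the privacy parameter directly sets the expected error of the released count.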
18 Differential Privacy
19 Example: spatio-temporal density from CDR
Data sources: postal data, Call Data Records (CDR), electricity consumption data, demographic data, water management data
20 Paris CDR (provided by Orange)
1,992,846 users, 1303 towers, 10/09/–/09/2007
Mean trace length: (std. dev.: 18); max. trace length: 732
21 Goal: release the spatio-temporal density (and not the CDR), i.e. the number of individuals at a given hour in any IRIS cell in Paris.
[Figure: original weekly visit counts (Mo–Sun) for two IRIS cells, with mean hourly visits of 87 and 81.]
Challenge: large-dimensional data
22 Overview of our approach
1. Sample x (≤ 30) visits per user uniformly at random (to decrease sensitivity)
2. Create time series: map tower-cell counts to IRIS-cell counts
3. Perturb these time series to guarantee differential privacy
24 Perturbation of time series
Naïve solution: add properly calibrated Laplace noise to each count of the IRIS cell (one count per hour over 1 week). Problem: the counts are much smaller than the noise!
Our approach:
1. Cluster nearby, less populated cells until their aggregated counts become large enough to resist the noise.
2. Perturb the aggregated time series by adding noise to their largest Fourier coefficients.
3. Scale back with the (noisy) total number of visits of the individual cells to get the individual time series.
[Figure: weekly visit counts, original vs. private. Naïve approach (ε = 0.3): MRE 0.73, PC 0.59. Our approach (ε = 0.3): MRE 0.16, PC 0.99.]
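The Fourier step can be sketched as follows. This is only an illustration of the idea, not the published mechanism: the naive DFT, the simplified noise calibration, and reusing the visit bound of 30 from the sampling step as the sensitivity are all assumptions.

```python
import cmath
import math
import random

def laplace(scale, rng):
    # Laplace(0, scale) as the difference of two iid exponential draws.
    return scale * (rng.expovariate(1.0) - rng.expovariate(1.0))

def perturb_series(counts, epsilon, k=10, sensitivity=30.0, seed=0):
    """Keep the k largest DFT coefficients of the weekly series, add
    Laplace noise to them, drop the rest, and reconstruct. The noise
    calibration below is a deliberate simplification."""
    rng = random.Random(seed)
    n = len(counts)
    # Naive O(n^2) DFT -- fine for n = 168 hourly counts.
    F = [sum(counts[t] * cmath.exp(-2j * cmath.pi * f * t / n)
             for t in range(n)) for f in range(n)]
    top = sorted(range(n), key=lambda f: -abs(F[f]))[:k]
    scale = sensitivity * k / epsilon  # simplified calibration (assumption)
    noisy = {f: F[f] + complex(laplace(scale, rng), laplace(scale, rng))
             for f in top}
    # Inverse DFT from the k noisy coefficients only; clamp negatives.
    return [max(0.0, sum(noisy[f] * cmath.exp(2j * cmath.pi * f * t / n)
                         for f in top).real / n) for t in range(n)]

# A synthetic weekly pattern: 168 hourly counts with a daily rhythm.
week = [50 + 40 * abs(math.sin(math.pi * t / 24)) for t in range(168)]
private = perturb_series(week, epsilon=0.3)
```

Truncating to a few coefficients is what makes the noise affordable: instead of perturbing 168 correlated counts independently, only k spectral values are perturbed and the smooth weekly shape is preserved.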
25 Performance evaluation 1: Mean Relative Error
MRE(X, X̂) = (1/168) Σ_{i=1}^{168} |X̂_i − X_i| / max(δ, X_i), where δ is a small constant that prevents division by zero on near-empty hours.
Average MRE: naïve approach (ε = 0.3) vs. our scheme (ε = 0.3)
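The metric codes up directly; `floor` stands in for the small constant δ in the denominator:

```python
def mean_relative_error(original, private, floor=1.0):
    """Mean relative error over paired hourly counts, with a floor in
    the denominator for near-empty hours."""
    assert len(original) == len(private)
    return sum(abs(p - o) / max(floor, o)
               for o, p in zip(original, private)) / len(original)

x = [10.0] * 168
x_hat = [11.0] * 168
print(mean_relative_error(x, x_hat))  # each hour is 10% off, so MRE ≈ 0.1
```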
26 Conclusion 1: There are no universal solutions!
There is no universal anonymization solution that fits all applications. To get the best accuracy, solutions have to be customized to the application and to the public characteristics of the dataset: specific context, specific utility/privacy trade-off, specific adversary models, specific impacts, ...
Anonymization is all about the utility/privacy trade-off!
27 Conclusion 2: Anonymization does not solve everything!
Anonymization schemes protect against re-identification, not inference! You can learn and infer a lot from data and meta-data: religion from mobility data, interests from Google search requests. It is up to society to decide what is acceptable or not, by balancing the benefits with the risks*.
*Benefit-Risk Analysis for Big Data Projects, uploads/fpf_databenefitanalysis_final.pdf
28 Conclusion 3: Data Anonymization is Hard!
Most people don't understand it, or don't want to understand it («this is trivial, I need the data for my business»). It is technically difficult and requires real expertise. Big data anonymization is even harder: the finality/utility is not known in advance.
We need better tools to anonymize datasets, to perform Privacy Risk Analysis (PRA) of anonymized data, and to evaluate anonymization solutions.
Security by obscurity does not work: anonymization algorithms should be auditable.
29 THANK YOU!
