BIG DATA INTEGRATION AT SCADS DRESDEN/LEIPZIG
1 BIG DATA INTEGRATION AT SCADS DRESDEN/LEIPZIG ERHARD RAHM, UNIV. LEIPZIG
2 GERMAN CENTERS FOR BIG DATA Two Centers of Excellence for Big Data in Germany: ScaDS Dresden/Leipzig and the Berlin Big Data Center (BBDC). ScaDS Dresden/Leipzig (Competence Center for Scalable Data Services and Solutions Dresden/Leipzig) scientific coordinators: Nagel (TUD), Rahm (UL) start: Oct duration: 4 years (option for 3 more years) initial funding: approx. 5.6 million Euro
3 GOALS Bundling and advancement of existing expertise on Big Data Development of Big Data Services and Solutions Big Data Innovations Leipzig Dresden
4 FUNDED INSTITUTES Univ. Leipzig TU Dresden Leibniz Institute of Ecological Urban and Regional Development Max-Planck Institute for Molecular Cell Biology and Genetics
5 ASSOCIATED PARTNERS Avantgarde-Labs GmbH Data Virtuality GmbH E-Commerce Genossenschaft e. G. European Centre for Emerging Materials and Processes Dresden Fraunhofer-Institut für Verkehrs- und Infrastruktursysteme Fraunhofer-Institut für Werkstoff- und Strahltechnik GISA GmbH Helmholtz-Zentrum Dresden - Rossendorf Hochschule für Telekommunikation Leipzig Institut für Angewandte Informatik e. V. Landesamt für Umwelt, Landwirtschaft und Geologie Netzwerk Logistik Leipzig-Halle e. V. Sächsische Landesbibliothek - Staats- und Universitätsbibliothek Dresden Scionics Computer Innovation GmbH Technische Universität Chemnitz Universitätsklinikum Carl Gustav Carus
6 OVERALL STRUCTURE OF THE CENTER Application areas: Life sciences, Material and Engineering sciences, Environmental / Geo sciences, Digital Humanities, Business Data. Data service center: Big Data Life Cycle Management and Workflows, Data Quality / Data Integration, Knowledge Extraction, Visual Analytics, Efficient Big Data Architectures
7 RESEARCH PARTNERS Data-intensive computing W.E. Nagel Data quality / Data integration E. Rahm Databases W. Lehner, E. Rahm Knowledge extraction/data mining C. Rother, P. Stadler, G. Heyer Visualization S. Gumhold, G. Scheuermann Service Engineering, Infrastructure K.-P. Fähnrich, W.E. Nagel, M. Bogdan
8 APPLICATION COORDINATORS Life sciences G. Myers Material / Engineering sciences M. Gude Environmental / Geo sciences J. Schanze Digital Humanities G. Heyer Business Data B. Franczyk
9 AGENDA ScaDS Dresden/Leipzig Big Data Integration Introduction Matching product offers from web shops DeDoop: Deduplication with Hadoop Privacy-preserving record linkage with PP-Join Cryptographic bloom filters Privacy-Preserving PP-Join (P4Join) GPU-based implementation Summary and outlook References
10 BIG DATA ANALYSIS PIPELINE Challenges: Heterogeneity, Volume, Velocity, Privacy, Human collaboration. Pipeline: Data acquisition → Data extraction / cleaning → Data integration / annotation → Data analysis and visualization → Interpretation
11 OBJECT MATCHING (DEDUPLICATION) Identification of semantically equivalent objects within one data source or between different sources. Original focus on structured (relational) data, e.g. customer data from two sources with different schemas:
Source 1 (Cno, LastName, FirstName, Gender, Address, Phone/Fax): 24 | Smith | Christoph | M | 23 Harley St, Chicago IL | ; | Smith | Kris L. | F | 2 Hurley Place, South Fork MN | /
Source 2 (CID, Name, Street, City, Sex): 11 | Kristen Smith | 2 Hurley Pl | South Fork, MN | ; | Christian Smith | Hurley St 2 | S Fork MN |
12 BIG DATA INTEGRATION USE CASE: INTEGRATION OF PRODUCT OFFERS IN A COMPARISON PORTAL Thousands of data sources (shops/merchants) Millions of products and product offers Continuous changes Many similar, but different products Low data quality
13 USE OF PRODUCT CODES Frequent existence of specific product codes for certain products. Product code = manufacturer-specific identifier: any sequence consisting of alphabetic, special, and numeric characters split by an arbitrary number of white spaces. Utilized to differentiate similar but different products. Examples: Canon VIXIA HF S100 Camcorder; Hahnel HL-XF51 7.2V 680mAh for Sony NP-FF51
14 PRODUCT CODE EXTRACTION Example: Hahnel HL-XF51 7.2V 680mAh for Sony NP-FF51. Features: Hahnel HL-XF51 7.2V 680mAh for Sony NP-FF51 → Tokens: Hahnel HL-XF51 Sony NP-FF51 → Filtered Tokens: HL-XF51 NP-FF51 → Candidates: [A-Z]{2}\-[A-Z]{2}[0-9]{2} → Web Verification
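The extraction pipeline above (tokenize, filter, match candidate patterns, then verify) can be sketched in a few lines. The regular expression is the one shown on the slide; the stopword list and all function names are illustrative assumptions, not the deck's actual implementation:

```python
import re

# Candidate pattern from the slide: two letters, hyphen, two letters, two digits.
CODE_PATTERN = re.compile(r"[A-Z]{2}-[A-Z]{2}[0-9]{2}")

# Assumed filter list; a real extractor would filter common words and units.
STOPWORDS = {"FOR", "WITH", "AND"}

def extract_code_candidates(title: str) -> list[str]:
    tokens = title.upper().split()                        # tokenize the offer title
    filtered = [t for t in tokens if t not in STOPWORDS]  # drop filler tokens
    return [t for t in filtered if CODE_PATTERN.fullmatch(t)]  # keep pattern matches

print(extract_code_candidates("Hahnel HL-XF51 7.2V 680mAh for Sony NP-FF51"))
# -> ['HL-XF51', 'NP-FF51']
```

The surviving candidates would then go through the slide's web-verification step before being trusted as match keys.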
15 LEARNING-BASED MATCH APPROACH Pre-processing: Product Code Extraction and Manufacturer Cleaning applied to the product offers. Training: Training Data Selection, Matcher Application, Automatic Classification, Classifier Learning. Application: Blocking (Manufacturer + Category), Matcher Application, Classification → Product Match Result
16 HOW TO SPEED UP OBJECT MATCHING? Blocking to reduce search space group similar objects within blocks based on blocking key restrict object matching to objects from the same block Parallelization split match computation in sub-tasks to be executed in parallel exploitation of Big Data infrastructures such as Hadoop (Map/Reduce or variations)
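A minimal single-machine sketch of blocking (all names are illustrative assumptions; records are dicts with an id and a blocking-key attribute):

```python
from collections import defaultdict
from itertools import combinations

def block_and_match(records, key_fn, match_fn):
    """Group records by blocking key, then compare only within blocks."""
    blocks = defaultdict(list)
    for rec in records:
        blocks[key_fn(rec)].append(rec)
    matches = []
    for block in blocks.values():
        for a, b in combinations(block, 2):  # search space restricted per block
            if match_fn(a, b):
                matches.append((a["id"], b["id"]))
    return matches

offers = [
    {"id": 1, "manufacturer": "Canon", "title": "VIXIA HF S100"},
    {"id": 2, "manufacturer": "canon", "title": "Vixia HF S100 Camcorder"},
    {"id": 3, "manufacturer": "Sony",  "title": "NP-FF51 battery"},
]
same_code = lambda a, b: "HF S100" in a["title"].upper() and "HF S100" in b["title"].upper()
print(block_and_match(offers, lambda r: r["manufacturer"].lower(), same_code))
# -> [(1, 2)]
```

In the MapReduce variant, the map phase emits the blocking key and the reduce phase runs the inner pairwise loop.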
17 GENERAL OBJECT MATCHING WORKFLOW Inputs R, S → Blocking → Similarity Computation → Match Classification → match result M. MapReduce realization: Map phase performs blocking, followed by re-partitioning and grouping; Reduce phase performs matching.
18 LOAD BALANCING Data skew leads to unbalanced workload Large blocks prevent utilization of more than a few nodes Deteriorates scalability and efficiency Unnecessary costs (you also pay for underutilized machines!) Key ideas for load balancing Additional MR job to determine blocking key distribution, i.e., number and size of blocks (per input partition) Global load balancing that assigns (nearly) the same number of pairs to reduce tasks Simplest approach: BlockSplit (ICDE 2012) split large blocks into sub-blocks with multiple match tasks distribute the match tasks among multiple reduce tasks
19 BLOCK SPLIT: 1-SLIDE ILLUSTRATION Example: 3 MP3 players + 6 cell phones → 3 + 15 = 18 pairs (1 time unit per pair). Parallel matching on 2 (reduce) nodes. Naive approach: node 1 gets 3 pairs (16%), node 2 gets 15 pairs (84%) → speedup 18/15 = 1.2. BlockSplit: the large block is split into sub-blocks so that each node processes 9 pairs (50%) → speedup 2.
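The pair counts in the illustration can be reproduced in a few lines; the greedy task assignment below is a simplification of BlockSplit's actual MapReduce key scheme, used here only to show the balancing effect:

```python
def pairs(n: int) -> int:
    return n * (n - 1) // 2  # comparisons within a block of n records

blocks = [3, 6]                        # 3 MP3 players + 6 cell phones
total = sum(pairs(n) for n in blocks)  # 3 + 15 = 18 pairs

# BlockSplit idea: split the block of 6 into two sub-blocks of 3. This yields
# match tasks of 3 pairs (MP3 block), 3 + 3 pairs (within the sub-blocks),
# and 3 * 3 = 9 cross pairs between the sub-blocks.
tasks = [pairs(3), pairs(3), pairs(3), 3 * 3]

# Greedy assignment of match tasks (largest first) to 2 reduce nodes.
loads = [0, 0]
for t in sorted(tasks, reverse=True):
    loads[loads.index(min(loads))] += t

print(total, loads)  # -> 18 [9, 9], i.e. speedup 18/9 = 2
```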
20 BLOCK SPLIT EVALUATION: SCALABILITY Evaluation on Amazon EC2 infrastructure using Hadoop. Matching of product records.
21 DEDOOP: EFFICIENT DEDUPLICATION WITH HADOOP Parallel execution of data integration / match workflows with Hadoop Powerful library of match and blocking techniques Learning-based configuration GUI-based workflow specification Automatic generation and execution of Map/Reduce jobs on different clusters "This tool by far shows the most mature use of MapReduce for data deduplication" Automatic load balancing for optimal scalability Iterative computation of transitive closure (extension of MR-CC)
22 DEDOOP OVERVIEW General ER workflow: R, S → Blocking → Similarity Computation → Match Classification → M, with optional training data T and machine learning. Dedoop's general MapReduce workflow: Data Analysis Job, Classifier Training Job, Blocking-based Matching Job. Blocking: Standard Blocking, Sorted Neighborhood, PPJoin+, with Blocking Key Generators (e.g., prefix, token-based). Similarity measures: Edit Distance, n-gram, TFIDF. Classification: threshold, match rules, ML model (Decision Tree, Logistic Regression, SVM).
23 AGENDA ScaDS Dresden/Leipzig Big Data Integration Introduction Matching product offers from web shops DeDoop: Deduplication with Hadoop Privacy-preserving record linkage with PP-Join Cryptographic bloom filters Privacy-Preserving PP-Join (P4Join) GPU-based implementation Summary and outlook References
24 PRIVACY-PRESERVING RECORD LINKAGE Object matching with encrypted data to preserve privacy. Data exchange / integration for person-related data; many use cases: medicine (e.g., cancer registries), census. Numerous PPRL approaches (Vatsalan et al., 2013), some requiring a trustee or a secure multi-party protocol. Scalability problem for large datasets (e.g., for census purposes).
25 PPRL WITH BLOOM FILTERS Effective and simple approach using cryptographic bloom filters (Schnell et al., 2009): tokenize all match-relevant attribute values, e.g. using bigrams or trigrams. Typical attributes: first name, last name (at birth), sex, date of birth, country of birth, place of birth. Map each token with a family of hash functions to a fixed-size bit vector (fingerprint); the original data cannot be reconstructed. Matching of bit vectors (Jaccard similarity) is a good approximation of the true match result.
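A sketch of the fingerprint construction described above. The parameter choices and the salted-SHA-1 construction of the hash family are assumptions for illustration, not the exact scheme of Schnell et al.:

```python
import hashlib

BITS = 1000   # bit vector length
K = 3         # hash functions per token

def bigrams(value: str) -> list[str]:
    padded = f"_{value}_"                        # pad so word boundaries are encoded
    return [padded[i:i + 2] for i in range(len(padded) - 1)]

def fingerprint(*attribute_values: str) -> set[int]:
    """Map all bigrams of all match-relevant attributes to set bit positions."""
    positions = set()
    for value in attribute_values:
        for token in bigrams(value.lower()):
            for k in range(K):                   # K salted hash functions
                digest = hashlib.sha1(f"{k}:{token}".encode()).hexdigest()
                positions.add(int(digest, 16) % BITS)
    return positions

fp = fingerprint("thomas", "smith")  # only bit positions are exchanged
```

Only the bit vectors are exchanged between parties; the original attribute values cannot be recovered from the set positions.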
26 SIMILARITY COMPUTATION - EXAMPLE Strings thomas and thoman; trigrams: tho, hom, oma, mas vs. tho, hom, oma, man (shared: tho, hom, oma). Each trigram is mapped into the bit vector by hash functions h1, h2, h3, e.g. h1(mas) = 3, h2(mas) = 7; h1(man) = 2, h2(man) = 0. sim_Jaccard(r1, r2) = |r1 AND r2| / |r1 OR r2| = 7/11
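The Jaccard similarity on bit vectors is a popcount of the AND over a popcount of the OR. A minimal sketch, with illustrative vectors (not the slide's actual fingerprints) that reproduce the 7/11 result:

```python
def jaccard_bits(r1: int, r2: int) -> float:
    """Jaccard similarity of two fingerprints stored as Python ints."""
    return bin(r1 & r2).count("1") / bin(r1 | r2).count("1")

# Two vectors sharing 7 set bits, with 11 set bits in their union:
r1 = 0b11111110011
r2 = 0b11111111100
print(jaccard_bits(r1, r2))  # -> 0.636363... = 7/11
```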
27 PP-JOIN: POSITION PREFIX JOIN (XIAO ET AL., 2008) One of the most efficient similarity join algorithms: determine all pairs of records with sim_Jaccard(x, y) ≥ t. Use of filter techniques to reduce the search space: length, prefix, and position filter. Relatively easy to run in parallel; a good candidate to improve scalability for PPRL.
28 LENGTH FILTER Matching record pairs must have similar lengths: sim_Jaccard(x, y) ≥ t ⇒ |x| ≥ t · |y| (for |x| ≤ |y|). Length / cardinality: number of tokens for strings, number of set bits for bit vectors. Example for minimal similarity t = 0.8: r1 = tom [_t, to, om, m_], length 4; r2 = thomas [_t, th, ho, om, ma, as, s_], length 7; r4 = tommy [_t, to, om, mm, my, y_], length 6. r1, r2: 4 ≥ 5.6? no. r4, r2: 6 ≥ 5.6? yes.
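The length-filter check itself is a single comparison. A minimal sketch (cardinality = token count for strings, popcount for bit vectors):

```python
def passes_length_filter(card_x: int, card_y: int, t: float) -> bool:
    """Necessary condition for Jaccard >= t: |smaller| >= t * |larger|."""
    small, large = sorted((card_x, card_y))
    return small >= t * large

# Slide example, t = 0.8:
print(passes_length_filter(4, 7, 0.8))  # r1 vs r2: 4 >= 5.6? -> False
print(passes_length_filter(6, 7, 0.8))  # r4 vs r2: 6 >= 5.6? -> True
```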
29 PREFIX FILTER Similar records must have a minimal overlap α in their sets of tokens (or set bit positions): sim_Jaccard(x, y) ≥ t ⇔ Overlap(x, y) ≥ α = ⌈ t / (1 + t) · (|x| + |y|) ⌉. The prefix filter approximates this test: order all tokens (bit positions) of all records according to their overall frequency, from infrequent to frequent; exclude pairs of records without any overlap in their prefixes, with prefix_length(x) = ⌈ (1 - t) · |x| ⌉ + 1. Example (t = 0.8): r2 = thomas, sorted tokens [ho, th, ma, as, s_, _t, om], prefix length 3, prefix [ho, th, ma]; r3 = tomas, sorted tokens [ma, as, s_, to, _t, om], prefix length 3, prefix [ma, as, s_]; r4 = tommy, sorted tokens [mm, my, y_, to, _t, om], prefix length 3, prefix [mm, my, y_]. prefix(r2) ∩ prefix(r3) = {ma} ≠ {}; prefix(r2) ∩ prefix(r4) = {}; prefix(r3) ∩ prefix(r4) = {}.
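The prefix test above can be sketched directly; this assumes the token lists are already sorted by ascending global frequency, as the slide requires, and uses the slide's prefix-length formula:

```python
import math

def prefix_length(card: int, t: float) -> int:
    # slide formula: ceil((1 - t) * |x|) + 1
    return math.ceil((1 - t) * card) + 1

def share_prefix(x_tokens, y_tokens, t: float) -> bool:
    """Exclude a pair only if its frequency-sorted prefixes have no common token."""
    px = set(x_tokens[: prefix_length(len(x_tokens), t)])
    py = set(y_tokens[: prefix_length(len(y_tokens), t)])
    return bool(px & py)

# Slide example (t = 0.8): r2 and r3 share "ma" in their prefixes; r4 shares nothing.
r2 = ["ho", "th", "ma", "as", "s_", "_t", "om"]
r3 = ["ma", "as", "s_", "to", "_t", "om"]
r4 = ["mm", "my", "y_", "to", "_t", "om"]
print(share_prefix(r2, r3, 0.8), share_prefix(r2, r4, 0.8), share_prefix(r3, r4, 0.8))
# -> True False False
```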
30 PRIVACY-PRESERVING PP-JOIN (P4JOIN) Evaluate the overlap of set bit positions in bit vectors. Preprocessing phase: determine the frequency per bit position and reorder all bit vectors according to the overall frequency of bit positions; determine length and prefix per bit vector; sort all bit vectors in ascending order of their length (number of set positions). Match phase (sequential scan): for each record, apply the length filter to determine the window of relevant records to match with; apply the prefix filter (AND operation on prefixes) to exclude record pairs without prefix overlap; apply the position filter for further savings.
31 P4JOIN: PREPROCESSING (1) Records (id, bit vector / fingerprint) with cardinality and tokens (set positions) for records A, B, C. Determine the frequency ordering O_f: count the #occurrences per index position, then sort positions in ascending frequency order (ignoring unused positions).
32 P4JOIN: PREPROCESSING (2) Reorder the fingerprints according to O_f: for each record (id, fingerprint, cardinality), the tokens (set positions) are sorted; processing continues with the reordered fingerprints of A, B, C.
33 P4JOIN: PREPROCESSING (3) Sort records by length (cardinality) and determine prefixes: prefix_length(x) = ⌈ (1 - t) · |x| ⌉ + 1. Each record (B, C, A) now carries its reordered fingerprint, cardinality, prefix length, and prefix fingerprint.
34 P4JOIN: APPLY LENGTH FILTER Compare records ordered by length (B, C, A). Length filter 7 * 0.8 = 5.6: when reading record C, it is observed that C does not meet the length filter w.r.t. B, so record B (|B| = 4) can be excluded from all further comparisons. Length filter 8 * 0.8 = 6.4: record A still needs to be considered w.r.t. C due to similar length.
35 P4JOIN: PREFIX FILTER Only records with overlapping prefixes need to be matched: AND operation on the prefix fingerprints of B, C, A. The AND operation on the prefixes shows a non-zero result for A and C, so these records still need to be considered for matching.
36 P4JOIN: POSITION FILTER Improvement of the prefix filter to avoid matches even for overlapping prefixes: estimate the maximally possible overlap and check whether it is below the minimal overlap α required to meet threshold t. The original position filter considers the position of the last common prefix token. Revised position filter: for record x with 9 tokens and record y with 8 tokens, the highest prefix position (here the fourth position in x) limits the possible overlap with the other record: the third position in y's prefix cannot have an overlap with x. Maximal possible overlap = #shared prefix tokens (2) + min(9 - 3, 8 - 3) = 7 < minimal overlap α = 8.
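The revised position-filter bound from the slide can be checked as follows. Parameter names are illustrative assumptions; pos_x and pos_y stand for the positions up to which the prefixes have been consumed, and α uses the overlap formula from the prefix-filter slide:

```python
import math

def min_overlap(card_x: int, card_y: int, t: float) -> int:
    """alpha = ceil(t / (1 + t) * (|x| + |y|))."""
    return math.ceil(t / (1 + t) * (card_x + card_y))

def passes_position_filter(shared_prefix_tokens: int,
                           card_x: int, pos_x: int,
                           card_y: int, pos_y: int, t: float) -> bool:
    # Upper bound on the achievable overlap: tokens already shared in the
    # prefixes plus the smaller number of remaining (unseen) tokens.
    max_overlap = shared_prefix_tokens + min(card_x - pos_x, card_y - pos_y)
    return max_overlap >= min_overlap(card_x, card_y, t)

# Slide example: 2 shared prefix tokens, records with 9 and 8 tokens,
# positions 3 and 3 -> maximal overlap 2 + min(6, 5) = 7 < alpha = 8.
print(passes_position_filter(2, 9, 3, 8, 3, 0.8))  # -> False
```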
37 EVALUATION Comparison between NestedLoop, P4Join, and MultiBitTree. MultiBitTree: best filter approach in previous work by Schnell; applies the length filter and organizes fingerprints within a binary tree so that fingerprints with the same set bits are grouped within sub-trees; can be used to filter out many fingerprints from comparison. Two input datasets R, S determined with the FEBRL data generator for five dataset sizes N; |R| = 1/5 · N, |S| = 4/5 · N. Bit vector length: 1000. Similarity threshold
38 EVALUATION RESULTS Runtime in minutes on a standard PC (columns: increasing dataset size N):
NestedLoop: 6.10 | 27.68 | 66.07 | 122.02 | 194.77
MultiBitTree: 4.68 | 18.95 | 40.63 | 78.23 | 119.73
P4Join, length filter only: 3.38 | 20.53 | 46.48 | 88.33 | 140.73
P4Join, length + prefix: 3.77 | 22.98 | 52.95 | 99.72 | 159.22
P4Join, length + prefix + position: 2.25 | 15.50 | 40.05 | 77.80 | 125.52
Similar results for P4Join and MultiBitTree; relatively small improvements compared to NestedLoop.
39 GPU-BASED PPRL Operations on bit vectors are easy to compute on GPUs: length and prefix filters, Jaccard similarity. Frameworks: CUDA and OpenCL support data-parallel execution of general computations on GPUs; programs ("kernels") are written in a C dialect, limited to base data types (float, long, int, short, arrays), with no dynamic memory allocation (the programmer controls memory management). Important to minimize data transfer between main memory and GPU memory.
40 EXECUTION SCHEME Partition inputs R and S (fingerprints sorted by length) into equally-sized partitions that fit into GPU memory. Generate match tasks per pair of partitions; transfer a pair to the GPU only if the length intervals of the two partitions satisfy the length filter. A GPU kernel computes the match result M per partition pair (e.g., R_0 with S_3, then R_0 with S_4), with finished partitions in GPU memory being replaced by the next ones (e.g., S_3 replaced by S_4). Optional use of CPU thread(s) to additionally match on the CPU.
41 GPU-BASED EVALUATION RESULTS Runtimes in minutes for two low-profile graphics cards (GeForce GT 610, 1 GB; GeForce GT 540M, 96 CUDA cores, 1 GB) for increasing dataset sizes:
GeForce GT 610: 0.33 | 1.32 | 2.95 | 5.23 | 8.15
GeForce GT 540M: 0.28 | 1.08 | 2.41 | 4.28 | 6.67
Improvements by up to a factor of 20, despite low-profile graphics cards; still a non-linear increase in execution time with growing data volume.
42 AGENDA ScaDS Dresden/Leipzig Big Data Integration Introduction Matching product offers from web shops DeDoop: Deduplication with Hadoop Privacy-preserving record linkage with PP-Join Cryptographic bloom filters Privacy-Preserving PP-Join (P4Join) GPU-based implementation Summary and outlook References
43 SUMMARY ScaDS Dresden/Leipzig Research focus on data integration, knowledge extraction, visual analytics broad application areas (scientific + business-related) solution classes for applications with similar requirements Big Data Integration Big data poses new requirements for data integration (variety, volume, velocity, veracity) comprehensive data preprocessing and cleaning Hadoop-based approaches for improved scalability, e.g. Dedoop Usability: machine-learning approaches, GUI, monitoring
44 SUMMARY (2) Privacy-Preserving Record Linkage increasingly important to protect personal information Scalability issues for Big Data Bloom filters allow a simple, effective and relatively efficient match approach still scalability issues for Big Data -> reduce search space and apply parallel processing Privacy-preserving PP-Join (P4Join) relatively easy adoption for bit vectors with improved position filter comparable performance to MultiBitTree but easier to parallelize GPU version achieves significant speedup further improvements needed to reduce quadratic complexity
45 OUTLOOK Parallel execution of more diverse data integration workflows for text data, image data, sensor data, etc. Learning-based configuration to minimize manual effort (active learning, crowd-sourcing) Holistic integration of many data sources (data + metadata) Clustering across many sources N-way merging of related ontologies (e.g. product taxonomies) Real-time data enrichment and integration for sensor data Improved privacy-preserving record linkage
46 REFERENCES
H. Köpcke, A. Thor, S. Thomas, E. Rahm: Tailoring entity resolution for matching product offers. Proc. EDBT 2012
L. Kolb, E. Rahm: Parallel Entity Resolution with Dedoop. Datenbank-Spektrum 13(1), 2013
L. Kolb, A. Thor, E. Rahm: Dedoop: Efficient Deduplication with Hadoop. PVLDB 5(12), 2012
L. Kolb, A. Thor, E. Rahm: Load Balancing for MapReduce-based Entity Resolution. Proc. ICDE 2012
L. Kolb, Z. Sehili, E. Rahm: Iterative Computation of Connected Graph Components with MapReduce. Datenbank-Spektrum 14(2), 2014
E. Rahm, W.E. Nagel: ScaDS Dresden/Leipzig: Ein serviceorientiertes Kompetenzzentrum für Big Data. Proc. GI-Jahrestagung 2014: 717
R. Schnell, T. Bachteler, J. Reiher: Privacy-preserving record linkage using Bloom filters. BMC Med. Inf. & Decision Making 9: 41 (2009)
Z. Sehili, L. Kolb, C. Borgs, R. Schnell, E. Rahm: Privacy Preserving Record Linkage with PPJoin. Proc. BTW Conf. (to appear)
D. Vatsalan, P. Christen, V.S. Verykios: A taxonomy of privacy-preserving record linkage techniques. Information Systems 38(6), 2013
C. Xiao, W. Wang, X. Lin, J.X. Yu: Efficient Similarity Joins for Near Duplicate Detection. Proc. WWW 2008
BIG DATA IN THE CLOUD : CHALLENGES AND OPPORTUNITIES MARY- JANE SULE & PROF. MAOZHEN LI BRUNEL UNIVERSITY, LONDON Overview * Introduction * Multiple faces of Big Data * Challenges of Big Data * Cloud Computing
From GWS to MapReduce: Google s Cloud Technology in the Early Days
Large-Scale Distributed Systems From GWS to MapReduce: Google s Cloud Technology in the Early Days Part II: MapReduce in a Datacenter COMP6511A Spring 2014 HKUST Lin Gu [email protected] MapReduce/Hadoop
Using the Grid for the interactive workflow management in biomedicine. Andrea Schenone BIOLAB DIST University of Genova
Using the Grid for the interactive workflow management in biomedicine Andrea Schenone BIOLAB DIST University of Genova overview background requirements solution case study results background A multilevel
Packet-based Network Traffic Monitoring and Analysis with GPUs
Packet-based Network Traffic Monitoring and Analysis with GPUs Wenji Wu, Phil DeMar [email protected], [email protected] GPU Technology Conference 2014 March 24-27, 2014 SAN JOSE, CALIFORNIA Background Main
Recent and Future Activities in HPC and Scientific Data Management Siegfried Benkner
Recent and Future Activities in HPC and Scientific Data Management Siegfried Benkner Research Group Scientific Computing Faculty of Computer Science University of Vienna AUSTRIA http://www.par.univie.ac.at
Big Data Challenges in Bioinformatics
Big Data Challenges in Bioinformatics BARCELONA SUPERCOMPUTING CENTER COMPUTER SCIENCE DEPARTMENT Autonomic Systems and ebusiness Pla?orms Jordi Torres [email protected] Talk outline! We talk about Petabyte?
FP-Hadoop: Efficient Execution of Parallel Jobs Over Skewed Data
FP-Hadoop: Efficient Execution of Parallel Jobs Over Skewed Data Miguel Liroz-Gistau, Reza Akbarinia, Patrick Valduriez To cite this version: Miguel Liroz-Gistau, Reza Akbarinia, Patrick Valduriez. FP-Hadoop:
Accelerating BIRCH for Clustering Large Scale Streaming Data Using CUDA Dynamic Parallelism
Accelerating BIRCH for Clustering Large Scale Streaming Data Using CUDA Dynamic Parallelism Jianqiang Dong, Fei Wang and Bo Yuan Intelligent Computing Lab, Division of Informatics Graduate School at Shenzhen,
A Novel Cloud Based Elastic Framework for Big Data Preprocessing
School of Systems Engineering A Novel Cloud Based Elastic Framework for Big Data Preprocessing Omer Dawelbeit and Rachel McCrindle October 21, 2014 University of Reading 2008 www.reading.ac.uk Overview
Performance Evaluations of Graph Database using CUDA and OpenMP Compatible Libraries
Performance Evaluations of Graph Database using CUDA and OpenMP Compatible Libraries Shin Morishima 1 and Hiroki Matsutani 1,2,3 1Keio University, 3 14 1 Hiyoshi, Kohoku ku, Yokohama, Japan 2National Institute
NSEC3 Hash Breaking. GPU-based. Matthäus Wander, Lorenz Schwittmann, Christopher Boelmann, Torben Weis IEEE NCA 2014. <matthaeus.wander@uni-due.
GPU-based NSEC3 Hash Breaking Matthäus Wander, Lorenz Schwittmann, Christopher Boelmann, Torben Weis IEEE NCA 2014 Cambridge, August 22, 2014 NSEC3 FUNDAMENTALS Matthäus Wander
MapGraph. A High Level API for Fast Development of High Performance Graphic Analytics on GPUs. http://mapgraph.io
MapGraph A High Level API for Fast Development of High Performance Graphic Analytics on GPUs http://mapgraph.io Zhisong Fu, Michael Personick and Bryan Thompson SYSTAP, LLC Outline Motivations MapGraph
Secure Computation Martin Beck
Institute of Systems Architecture, Chair of Privacy and Data Security Secure Computation Martin Beck Dresden, 05.02.2015 Index Homomorphic Encryption The Cloud problem (overview & example) System properties
CSE-E5430 Scalable Cloud Computing Lecture 2
CSE-E5430 Scalable Cloud Computing Lecture 2 Keijo Heljanko Department of Computer Science School of Science Aalto University [email protected] 14.9-2015 1/36 Google MapReduce A scalable batch processing
The Role of Size Normalization on the Recognition Rate of Handwritten Numerals
The Role of Size Normalization on the Recognition Rate of Handwritten Numerals Chun Lei He, Ping Zhang, Jianxiong Dong, Ching Y. Suen, Tien D. Bui Centre for Pattern Recognition and Machine Intelligence,
Document Similarity Measurement Using Ferret Algorithm and Map Reduce Programming Model
Document Similarity Measurement Using Ferret Algorithm and Map Reduce Programming Model Condro Wibawa, Irwan Bastian, Metty Mustikasari Department of Information Systems, Faculty of Computer Science and
Data Centric Systems (DCS)
Data Centric Systems (DCS) Architecture and Solutions for High Performance Computing, Big Data and High Performance Analytics High Performance Computing with Data Centric Systems 1 Data Centric Systems
Lecture 10 - Functional programming: Hadoop and MapReduce
Lecture 10 - Functional programming: Hadoop and MapReduce Sohan Dharmaraja Sohan Dharmaraja Lecture 10 - Functional programming: Hadoop and MapReduce 1 / 41 For today Big Data and Text analytics Functional
Luncheon Webinar Series May 13, 2013
Luncheon Webinar Series May 13, 2013 InfoSphere DataStage is Big Data Integration Sponsored By: Presented by : Tony Curcio, InfoSphere Product Management 0 InfoSphere DataStage is Big Data Integration
Optimizing a 3D-FWT code in a cluster of CPUs+GPUs
Optimizing a 3D-FWT code in a cluster of CPUs+GPUs Gregorio Bernabé Javier Cuenca Domingo Giménez Universidad de Murcia Scientific Computing and Parallel Programming Group XXIX Simposium Nacional de la
Research on Clustering Analysis of Big Data Yuan Yuanming 1, 2, a, Wu Chanle 1, 2
Advanced Engineering Forum Vols. 6-7 (2012) pp 82-87 Online: 2012-09-26 (2012) Trans Tech Publications, Switzerland doi:10.4028/www.scientific.net/aef.6-7.82 Research on Clustering Analysis of Big Data
Accelerating Hadoop MapReduce Using an In-Memory Data Grid
Accelerating Hadoop MapReduce Using an In-Memory Data Grid By David L. Brinker and William L. Bain, ScaleOut Software, Inc. 2013 ScaleOut Software, Inc. 12/27/2012 H adoop has been widely embraced for
SQL Server 2005 Features Comparison
Page 1 of 10 Quick Links Home Worldwide Search Microsoft.com for: Go : Home Product Information How to Buy Editions Learning Downloads Support Partners Technologies Solutions Community Previous Versions
HiBench Introduction. Carson Wang ([email protected]) Software & Services Group
HiBench Introduction Carson Wang ([email protected]) Agenda Background Workloads Configurations Benchmark Report Tuning Guide Background WHY Why we need big data benchmarking systems? WHAT What is
Delivering Smart Answers!
Companion for SharePoint Topic Analyst Companion for SharePoint All Your Information Enterprise-ready Enrich SharePoint, your central place for document and workflow management, not only with an improved
OpenPOWER Outlook AXEL KOEHLER SR. SOLUTION ARCHITECT HPC
OpenPOWER Outlook AXEL KOEHLER SR. SOLUTION ARCHITECT HPC Driving industry innovation The goal of the OpenPOWER Foundation is to create an open ecosystem, using the POWER Architecture to share expertise,
Data Warehousing und Data Mining
Data Warehousing und Data Mining Multidimensionale Indexstrukturen Ulf Leser Wissensmanagement in der Bioinformatik Content of this Lecture Multidimensional Indexing Grid-Files Kd-trees Ulf Leser: Data
Email Spam Detection Using Customized SimHash Function
International Journal of Research Studies in Computer Science and Engineering (IJRSCSE) Volume 1, Issue 8, December 2014, PP 35-40 ISSN 2349-4840 (Print) & ISSN 2349-4859 (Online) www.arcjournals.org Email
Accelerating variant calling
Accelerating variant calling Mauricio Carneiro GSA Broad Institute Intel Genomic Sequencing Pipeline Workshop Mount Sinai 12/10/2013 This is the work of many Genome sequencing and analysis team Mark DePristo
Protein Protein Interaction Networks
Functional Pattern Mining from Genome Scale Protein Protein Interaction Networks Young-Rae Cho, Ph.D. Assistant Professor Department of Computer Science Baylor University it My Definition of Bioinformatics
High Productivity Data Processing Analytics Methods with Applications
High Productivity Data Processing Analytics Methods with Applications Dr. Ing. Morris Riedel et al. Adjunct Associate Professor School of Engineering and Natural Sciences, University of Iceland Research
Exploiting Cloud Heterogeneity for Optimized Cost/Performance MapReduce Processing
Exploiting Cloud Heterogeneity for Optimized Cost/Performance MapReduce Processing Zhuoyao Zhang University of Pennsylvania, USA [email protected] Ludmila Cherkasova Hewlett-Packard Labs, USA [email protected]
Analytics on Big Data
Analytics on Big Data Riccardo Torlone Università Roma Tre Credits: Mohamed Eltabakh (WPI) Analytics The discovery and communication of meaningful patterns in data (Wikipedia) It relies on data analysis
