Selected Parallel and Scalable Methods for Scientific Big Data Analytics Federated Systems and Data Division Research Group High Productivity Data Processing Dr.-Ing. Morris Riedel et al. Research Group Leader, Juelich Supercomputing Centre Adjunct Associate Professor, University of Iceland ZIH Kolloquium, 21st May 2015, Technical University of Dresden
Research Centre Juelich JUELICH in Numbers Institutes at JUELICH Area: 2.2 km² Staff: 5,236 Scientists: 1,658 Technical staff: 1,662 Trainees: 303 Budget: €557 million incl. €172 million third-party funding Located in Germany, in the Cologne-Aachen area Institute of Complex Systems Institute for Advanced Simulation Juelich Supercomputing Centre Juelich Centre for Neutron Science Peter Grünberg Institute Institute for Neuroscience and Medicine Institute for Nuclear Physics Institute for Bio and Geosciences Institute for Energy and Climate Research Central Institute for Engineering, Electronics, and Analytics Research for generic key technologies of the next generation Scientific & Engineering Application-driven Problem Solving 2 / 70
University of Iceland Schools of the University Faculties of the School School of Education School of Humanities School of Engineering and Natural Sciences School of Social Sciences School of Health Sciences Interdisciplinary Studies Full programmes taught in English Staff: ~1,259 Students: ~14,000 Located in Reykjavik, the capital region of Iceland Civil and Environmental Engineering Earth Sciences Electrical and Computer Engineering Industrial Engineering Mechanical Engineering Computer Science Life and Environmental Sciences Physical Sciences Teaching of key technologies in engineering & sciences University Courses: Statistical Data Mining & HPC-A/B 3 / 70
Outline 4/ 70
Outline Data Analytics @ Juelich Driven by Scientific & Engineering Demands Understanding of Terms & Key Focus Scalable & Parallel Tools Clustering DBSCAN Classification SVM Scientific Applications in Context Recent Research Directions Brain Analytics Deep Learning Conclusions References & Backup Slides 5/ 70
Data Analytics @ Juelich 6/ 70
Data Analytics Context JSC Tackle Inverse Problems Research data-intensive science and engineering applications Explore computing that is more intertwined with data analysis Big Data Federated Data Management, Preservation, Security & Access Sharing, re-use, towards reproducibility 7 / 70
Data Analytics Term Clarification Data Analytics is an interesting mix of different approaches Analytics: the whole methodology; Analysis: the data investigation process itself 'Big' requires scalable processing methods and underlying infrastructure Concrete big data: large medical data Concrete big data: large earth science data Clustering++ Regression++ Classification++ 8 / 70
Data Analytics Research Key Focus Classification++ Clustering++ Scientific Computing Scientific Applications using Big Data Traditional Scientific Computing Methods HPC and HTC Paradigms & Parallelization Emerging Data Analytics Approaches Optimized Data Access & Management Statistical Data Mining & Machine Learning Inverse Problems & their Research Questions Regression++ Big Data Methods Scientific Applications (Big) Data from various data science applications Systematic & Automated Analytics guided by CRISP DM 9/ 70
Data Analytics Selected Research Group Activities John von Neumann Institute for Computing (NIC) Peer review of scientific big data analytics (SBDA) proposals Joint work with SBDA users (first projects starting, prototyping process) Research Data Alliance (RDA) Chairing activities of the Big Data Analytics Interest Group Collaboration with a variety of EU and US partners Geoffrey Fox, UoIndiana (map-reduce), Kwo-Sen Kuo (NASA, SciDB) Smart Data Innovation Lab (SDIL) Driving activities in the personalised medicine community (with Bayer) Collaboration with partners from industry (e.g. IBM, SAP, Siemens, etc.) 10 / 70
Data Analytics Selected Research Expertise Key expertise in making algorithms parallel & scalable for big data Driven by scientific and engineering cases, e.g. understanding the human brain, remote sensing applications, marine measurements analysis, ... Automate and/or support the data analysis process Example codes: Density-based Spatial Clustering of Applications with Noise (DBSCAN), Support Vector Machines (SVMs) Parallel & Scalable DBSCAN clustering tool Problem: automatic outlier detection for data quality Tailored solution for the community Scalability towards big data Design and improve automatic data analytics approaches Parallel & Scalable SVM classification tool Problem: classification of buildings from multi-spectral images Enables a smooth transition from manual Matlab SVM scripts Research on parallel SVM methods (map-reduce, HPC) [1] R. Huber, M. Riedel et al., Research data enters scholarly communication and big data analysis, EGU2014 [2] G. Cavallaro, M. Riedel et al., Smart Data Analytics for Remote Sensing Applications, IGARSS2014 11 / 70
Scalable & Parallel Tools: Clustering 12 / 70
Learning From Data Clustering Technique Classification: groups of data exist; new data is classified into the existing groups Clustering: no groups of data exist; create groups from data that are close to each other Regression: identify a line with a certain slope describing the data 13 / 70
Selected Clustering Methods K-Means Clustering Centroid-based clustering Partitions a data set into K distinct clusters (centroids can be artificial) K-Medoids Clustering Centroid-based clustering (variation) Partitions a data set into K distinct clusters (centroids are actual points) Sequential Agglomerative Hierarchic Non-overlapping (SAHN) Hierarchical clustering (creates a tree-like data structure, the dendrogram) Clustering Using Representatives (CURE) Selects representative points per cluster, as far from one another as possible Density-based spatial clustering of applications with noise (DBSCAN) Reasoning: a density similarity measure is helpful in our driving applications Assumes clusters of similar density or areas of higher density in the dataset 14 / 70
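As a minimal illustration of centroid-based clustering from the list above, a naive k-means (Lloyd's algorithm) fits in a few lines of NumPy. This is a generic textbook sketch, not one of the tools presented in this talk; the data, the first-k initialization, and the fixed iteration count are illustrative choices only.

```python
import numpy as np

def kmeans(points, k, iters=20):
    """Lloyd's algorithm: alternate nearest-centroid assignment and centroid update."""
    centroids = points[:k].copy()        # naive init: first k points
    for _ in range(iters):
        # assign each point to its nearest centroid
        d = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        # recompute each centroid as the mean of its assigned points
        for j in range(k):
            if (labels == j).any():
                centroids[j] = points[labels == j].mean(axis=0)
    return labels, centroids

# two well-separated 2D blobs (made-up example data)
pts = np.array([[0.0, 0.0], [0.1, 0.2], [-0.1, 0.1],
                [5.0, 5.0], [5.2, 4.9], [4.9, 5.1]])
labels, cents = kmeans(pts, k=2)
```

Note that K must be chosen up front, and the centroids are artificial points (means), which is exactly the distinction to K-medoids made on the slide.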
Technology Review of Open & Available Tools M. Goetz, M. Riedel et al., 6 th Workshop on Data Mining in Earth System Science, International Conference of Computational Science (ICCS), Reykjavik, to be published 15 / 70
Parallel & Scalable DBSCAN MPI/OpenMP Tool (1) DBSCAN Algorithm Introduced 1996 by Martin Ester et al. [4] Groups a number of similar points into clusters of data Similarity is defined by a distance measure (e.g. Euclidean distance) Distinct Algorithm Features Finds a variable number of clusters Forms arbitrarily shaped clusters Identifies outliers/noise Unclustered Data Understanding Parameters for the MPI/OpenMP tool Looks for similar points within a given search radius Parameter epsilon A cluster consists of a given minimum number of points Parameter minpoints Clustered Data [3] M. Goetz & C. Bodenstein, HPDBSCAN Tool 16 / 70
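The two parameters epsilon and minpoints can be illustrated with a naive O(n²) DBSCAN sketch. This is not the HPDBSCAN implementation (which uses spatial indexing, MPI/OpenMP, and cluster merging); points and parameter values below are made up for the example.

```python
import numpy as np

def dbscan(points, eps, min_pts):
    """Naive O(n^2) DBSCAN: -1 marks noise, 0.. are cluster ids."""
    n = len(points)
    labels = np.full(n, -1)                      # -1 = noise / unassigned
    d = np.linalg.norm(points[:, None] - points[None], axis=2)
    neighbors = [np.where(d[i] <= eps)[0] for i in range(n)]
    visited = np.zeros(n, bool)
    cluster = 0
    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        if len(neighbors[i]) < min_pts:
            continue                             # not a core point; noise unless reached later
        labels[i] = cluster                      # start a new cluster, expand over reachable points
        queue = list(neighbors[i])
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster              # border or core point joins the cluster
            if not visited[j]:
                visited[j] = True
                if len(neighbors[j]) >= min_pts:
                    queue.extend(neighbors[j])   # core point: keep expanding
        cluster += 1
    return labels

pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                [5.0, 5.0], [5.1, 5.0], [5.0, 5.1],
                [10.0, 0.0]])                    # lone outlier
labels = dbscan(pts, eps=0.5, min_pts=3)
```

Unlike k-means, the number of clusters is not specified in advance, and the lone point falls out as noise; these are exactly the "distinct algorithm features" named on the slide.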
Parallel & Scalable DBSCAN MPI/OpenMP Tool (2) Parallelization Strategy Smart Big Data Preprocessing into Spatial Cells OpenMP standalone MPI (+ optional OpenMP hybrid) Preprocessing Step Spatial indexing and redistribution according to the point localities Data density based chunking of computations Computational Optimizations [3] M.Goetz & C. Bodenstein, HPDBSCAN Tool Caching of point neighborhood searches Cluster merging based on comparisons instead of zone reclustering 17 / 70
Parallel & Scalable DBSCAN MPI/OpenMP Tool (3) Performance Comparisons With another open-source parallel DBSCAN implementation (aka NWU) ~3.7 million data points (2 dimensions) [5] Patwary et al. Use of Hierarchical Data Format (HDF) v5 for scalable input/output of big data 18 / 70
Parallel & Scalable DBSCAN MPI/OpenMP Tool (4) Selected Big Data Applications London twitter data (goal: find density centers of tweets) Bremen thermo point cloud data (goal: noise reduction) PANGAEA earth science datasets (goal: automated outlier detection) [6] Open PANGAEA Earth Science Data Collection 19 / 70
Parallel & Scalable DBSCAN MPI/OpenMP Tool (5) Free tool available Open source on a public Bitbucket repository Tool website with more information Maintained on a best-effort basis [3] M. Goetz & C. Bodenstein, HPDBSCAN Tool Usage via simple job scripts 3D Point Cloud of Bremen/Germany Usage: module load hdf5/1.8.13 mpiexec -np 1 ./dbscan -e 300 -m 100 -t 12 bremensmall.h5 Parameter epsilon Parameter minpoints 20 / 70
Parallel & Scalable DBSCAN MPI/OpenMP Tool (6) Usage via jobscript Using MOAB job scheduler Important: module load hdf5/1.8.13 Important: library gcc-4.9.2/lib64 np = number of processors t = number of threads JUDGE @ Juelich DBSCAN Parameters 21 / 70
Parallel & Scalable DBSCAN MPI/OpenMP Tool (7) Output with various information Run-times of different stages Clustering task information (e.g. number of identified clusters) Noise identification Data volume (small Bremen): ~72 MB Data volume (large Bremen): ~1.9 GB JUDGE @ Juelich Output results are written into the same input file: cluster number & noise label (depends on parameters) 22 / 70
Parallel & Scalable DBSCAN MPI/OpenMP Tool (8) Visualization Example Using the Point Cloud Library (PCL) toolset Transformation of data to the PCD format (python script on the right) H5toPCD.py python script Takes advantage of the NumPy library Usage: python H5toPCD.py bremensmall.h5 pcl_viewer bremenclustered.pcd 23 / 70
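The H5toPCD.py script itself is not reproduced on the slide; a minimal sketch of such a conversion might look as follows. The HDF5 dataset name in the comment and the exact field layout are assumptions for illustration, not taken from the actual tool.

```python
import numpy as np

def write_pcd(xyz, path):
    """Write an Nx3 float array as an ASCII PCD v0.7 file readable by pcl_viewer."""
    header = (
        "# .PCD v0.7 - Point Cloud Data file format\n"
        "VERSION 0.7\n"
        "FIELDS x y z\n"
        "SIZE 4 4 4\n"
        "TYPE F F F\n"
        "COUNT 1 1 1\n"
        f"WIDTH {len(xyz)}\n"
        "HEIGHT 1\n"
        "VIEWPOINT 0 0 0 1 0 0 0\n"
        f"POINTS {len(xyz)}\n"
        "DATA ascii\n"
    )
    with open(path, "w") as f:
        f.write(header)
        for x, y, z in xyz:
            f.write(f"{x} {y} {z}\n")

# In the real script the coordinates would come from the HDF5 input, e.g.:
#   import h5py
#   xyz = h5py.File("bremensmall.h5")["Points"][:, :3]   # dataset name assumed
xyz = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 3.0]])       # stand-in data
write_pcd(xyz, "example.pcd")
```

The resulting file can then be opened with `pcl_viewer example.pcd` as on the slide.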
Parallel & Scalable DBSCAN MPI/OpenMP Tool (9) Earth Science Application Automated outlier detection in time series Collaboration with MARUM, Bremen (work in progress) Example: water quality data of the Koljoefjords Connected underwater device Measurements: oxygen, temperature, salinity, ... Use of the HPDBSCAN algorithm Detect outliers and anomalies/events (e.g. water mixing) Compare with data manually annotated by domain scientists Needs automation 24 / 70
Parallel & Scalable DBSCAN MPI/OpenMP Tool (10) Neuroscience Application Cell nuclei detection and tissue clustering Scientific case: detect various layers (colored) Layers seem to have different density distributions of cells Extract cell nuclei into a 2D/3D point cloud Cluster different brain areas by cell density Use of the HPDBSCAN algorithm First 2D results detect various clusters Work in progress, results not yet very good Approach: several iterations (with 3D) with potentially different parameter values Investigate other methods (e.g. OPTICS) Research activities jointly with T. Dickscheid et al. (Juelich Institute of Neuroscience & Medicine) 25 / 70
Parallel & Scalable DBSCAN MPI/OpenMP Tool (11) Neuroscience Application Work in progress (image sizes e.g. 3120x3288) Cell nuclei detection and tissue clustering with varying parameters Research activities jointly with T. Dickscheid et al. (Juelich Institute of Neuroscience & Medicine) 26 / 70
Scalable & Parallel Tools: Classification 27 / 70
Learning From Data Classification Technique Classification: groups of data exist; new data is classified into the existing groups Clustering: no groups of data exist; create groups from data that are close to each other Regression: identify a line with a certain slope describing the data 28 / 70
Selected Classification Methods Perceptron Learning Algorithm - simple linear classification Enables binary classification with a line between classes of separable data Support Vector Machines (SVMs) - non-linear (kernel) classification Enables non-linear classification with maximum margin (best out-of-the-box) Reasoning: often achieves better results than other methods in remote sensing applications Decision Trees & Ensemble Methods - tree-based classification Grows trees for class decisions; ensemble methods average n trees Artificial Neural Networks (ANNs) - brain-inspired classification Combine multiple linear perceptrons into a strong network for non-linear tasks Naive Bayes Classifier - probabilistic classification Uses Bayes' theorem with strong/naive independence between features 29 / 70
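The perceptron learning algorithm from the first bullet fits in a few lines: repeatedly pick a misclassified point and nudge the separating line toward it, which is guaranteed to converge for linearly separable data. This is a generic textbook sketch with made-up data, not one of the tools discussed in this talk.

```python
import numpy as np

def perceptron(X, y, max_iter=1000):
    """Perceptron learning algorithm for linearly separable data.
    Labels y must be in {-1, +1}; returns weights w with the bias folded in."""
    Xb = np.hstack([np.ones((len(X), 1)), X])   # prepend constant bias feature
    w = np.zeros(Xb.shape[1])
    for _ in range(max_iter):
        errors = 0
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:              # misclassified (or on the boundary)
                w += yi * xi                    # nudge the boundary toward xi
                errors += 1
        if errors == 0:
            break                               # converged: every point correct
    return w

X = np.array([[0.0, 0.0], [0.0, 1.0], [2.0, 2.0], [3.0, 2.0]])
y = np.array([-1, -1, 1, 1])
w = perceptron(X, y)
pred = np.sign(np.hstack([np.ones((len(X), 1)), X]) @ w)
```

SVMs (next on the list) refine this idea by picking, among all separating lines, the one with maximum margin.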
Technology Review of Open & Available Tools M. Goetz, M. Riedel et al., 6 th Workshop on Data Mining in Earth System Science, International Conference of Computational Science (ICCS), Reykjavik, to be published 30 / 70
Parallel & Scalable SVM MPI Tool (1) [7] C. Cortes and V. Vapnik SVM Algorithm Approach Introduced 1995 by C. Cortes & V. Vapnik [7] Creates a maximal margin classifier to get future points (more often) right Uses quadratic programming & the Lagrangian dual method with an N x N matrix of quadratic coefficients (use of soft-margin approach for better generalization) (figures on slide: linear example; maximal margin classifier example; the maximum-margin hyperplane turned into an optimization problem, its minimization and dual form; solving the dual problem with a quadratic programming method; the kernel trick, whose quadratic coefficients drive the computational complexity & big data impact) 31 / 70
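Written out, the soft-margin dual problem referred to on this slide takes the standard form (for N training points (x_i, y_i), kernel K, and error penalty C):

```latex
\max_{\alpha} \; \sum_{i=1}^{N} \alpha_i
  - \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N}
    \alpha_i \alpha_j \, y_i y_j \, K(x_i, x_j)
\quad \text{s.t.} \quad 0 \le \alpha_i \le C, \qquad
\sum_{i=1}^{N} \alpha_i y_i = 0
```

The N x N matrix of quadratic coefficients Q_ij = y_i y_j K(x_i, x_j) is what makes big data hard here: memory and compute grow quadratically with the number of training samples, which motivates the parallel MPI tool.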
Parallel & Scalable SVM MPI Tool (2) True Support Vector Machines are Support Vector Classifiers combined with a non-linear kernel Several non-linear kernels exist - best known are the polynomial & Radial Basis Function (RBF) kernels [8] An Introduction to Statistical Learning Understanding the MPI tool parameters: selecting the non-linear kernel function type as RBF (parameter -t 2); setting the RBF kernel configuration (e.g. parameter -g 16); setting the SVM error penalty (e.g. parameter -c 10000) Major benefit of kernels: computing is done in the original space Linear kernel (linear in features) Polynomial kernel (polynomial of degree d) RBF kernel (large distance, small impact) 32 / 70
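The RBF kernel's "large distance, small impact" behaviour is easy to see numerically: K(x, y) = exp(-gamma * ||x - y||^2) is 1 for identical points and decays toward 0 as points move apart. A NumPy sketch (the points are made up; gamma mirrors the '-g 16' example above):

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Pairwise RBF kernel K(x, y) = exp(-gamma * ||x - y||^2).
    Large distances give values near zero: 'large distance, small impact'."""
    d2 = ((X[:, None] - Y[None]) ** 2).sum(axis=2)  # squared Euclidean distances
    return np.exp(-gamma * d2)

X = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 10.0]])
K = rbf_kernel(X, X, gamma=16.0)   # gamma as in the tool's '-g 16'
```

No explicit mapping into a high-dimensional feature space is ever computed; all arithmetic stays in the original input space, which is the "major benefit of kernels" on the slide.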
Parallel & Scalable SVM MPI Tool (3) Original parallel pisvm tool 1.2 Open-source and based on libsvm library, C, 2011 Message Passing Interface (MPI) [9] pisvm Website, 2011/2014 code New version appeared 2014-10 v. 1.3 (no major improvements) Lack of big data support (memory, layout, etc.) Tuned scalable parallel pisvm tool 1.2.1 Open-source (repository to be created) Based on pisvm tool 1.2 Optimizations: load balancing; MPI collectives Contact: m.richerzhagen@fz-juelich.de 33 / 70
Parallel & Scalable SVM MPI Tool (4) Satellite Data (Quickbird) Parallel Support Vector Machines (SVM) HPC / MPI Classification Study of Land Cover Types Reference Data Analytics for reusability & learning CRISP-DM Report Openly Shared Datasets Running Analytics Code [10] Rome Image dataset 34 / 70
Parallel & Scalable SVM MPI Tool (5) Example dataset: Geographical location: Image of Rome, Italy Remote sensor data obtained by Quickbird satellite High-resolution (0.6m) panchromatic image Pansharpened (UDWT) low-resolution (2.4m) multispectral images [10] Rome Image dataset 35 / 70
Parallel & Scalable SVM MPI Tool (6) Labelled data available as train/test data Ground truth data for 9 different land-cover classes available Data preparation We generated a set of training samples by randomly selecting 10% of the reference samples (with labelled data) Generated a set of test samples from the remaining labels (labelled data, 90% of reference samples) [10] Rome Image dataset Training Image (10% pixels/class) 36 / 70
Parallel & Scalable SVM MPI Tool (7) Based on the LibSVM data format (using feature extraction methods) Add Self-Dual Attribute Profile (SDAP) on Area on all images training file [11] G. Cavallaro, M. Mura, J.A. Benediktsson, L. Bruzzone et al. Area Std Dev Moment of Inertia Class Number Feature Gray Level Each line is one pixel's training vector of gray-level features: 3 1:0.105882 2:0.109804 3:0.101961... 54:0.121569 55:0.130952 2 1:0.364706 2:0.360784 3:0.356863... 54:0.356863 55:0.349206 6 1:0.152941 2:0.34902 3:0.454902... 54:0.466667 55:0.460317 ... 9 1:0.247059 2:0.247059 3:0.227451... 54:0.227451 55:0.218254 7 1:0.411765 2:0.411765 3:0.415686... 54:0.415686 55:0.40873 #77542 samples, 55 features [10] Rome Image dataset 37 / 70
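The LibSVM line format shown above ('&lt;class&gt; &lt;index&gt;:&lt;value&gt; ...', with 1-based feature indices) is simple to emit and parse. A small illustrative sketch; the helper names are hypothetical, not part of the LibSVM or pisvm codebases:

```python
def to_libsvm_line(label, features):
    """One pixel per line: '<class> 1:<v1> 2:<v2> ...' with 1-based indices."""
    parts = [str(label)] + [f"{i}:{v}" for i, v in enumerate(features, start=1)]
    return " ".join(parts)

def parse_libsvm_line(line):
    """Inverse operation: returns (label, {index: value})."""
    label, *pairs = line.split()
    feats = {int(i): float(v) for i, v in (p.split(":") for p in pairs)}
    return int(label), feats

# first two feature values of the first sample line on the slide
line = to_libsvm_line(3, [0.105882, 0.109804])
label, feats = parse_libsvm_line(line)
```

Because indices are explicit, zero-valued features can be omitted, which keeps files for sparse high-dimensional data compact.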
Parallel & Scalable SVM MPI Tool (8) Usage via jobscript Using MOAB job scheduler np = number of processors; o/q partitioning JUDGE @ Juelich Usage via simple jobscripts SVM Parameters [12] Rome Analytics Results & job scripts 38 / 70
Parallel & Scalable SVM MPI Tool (9) Training speed-up is possible when the number of features is high Serial Matlab: ~1277 sec (~21 minutes) Parallel (16) analytics: 220 sec (3:40 minutes) Accuracy remains the same Training vectors: 77542 samples Manual work: obtain the SDAP for all image bands using attribute area (10 thresholds) SDAP = bands + filtered images (10 filtered images per band, plus x/y geolocation), in total 55 features [12] Rome Analytics Results & job scripts 39 / 70
Parallel & Scalable SVM MPI Tool (10) Another more challenging dataset: high number of classes Parallelization challenges: unbalanced class representations remote sensing cube & ground reference G. Cavallaro, M. Riedel et al., Remote Sensing Journal Big Data Special Issue, to be published challenges in automation [20] Indian pines dataset, processed and raw 40 / 70
Parallel & Scalable SVM MPI Tool (11) Another example dataset: high number of classes Parallelization benefits: major speed-ups; ~interactive use (<1 min) possible vs. manual & serial activities (in minutes) Big data is not always better data [21] Analytics Results (raw) [22] Analytics Results (processed) Manual work: trade-off between raw data processing and using feature extraction methods Can we automate the feature extraction mechanism to some degree? G. Cavallaro, M. Riedel et al., Remote Sensing Journal Big Data Special Issue, to be published 41 / 70
Parallel & Scalable SVM MPI Tool (12) 2x benefits of parallelization (shown in n-fold cross-validation) Evaluation between Matlab (aka serial) and parallel pisvm 10x cross-validation (grid search over the RBF kernel parameter and C) raw dataset (serial) processed dataset (serial) raw dataset (parallel, 80 cores) processed dataset (parallel, 80 cores) G. Cavallaro, M. Riedel et al., Remote Sensing Journal Big Data Special Issue, to be published [23] Analytics 10-fold cross-validation Results (raw) [24] Analytics 10-fold cross-validation Results (processed) 42 / 70
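The grid search over (C, gamma) with 10-fold cross-validation can be sketched generically: every (parameter combination, fold) pair is an independent train/evaluate run, which is exactly where the parallel speed-up on this slide comes from. In the sketch below the scorer is a 1-nearest-neighbour stand-in for an actual SVM training run, and the data and grid values are illustrative only.

```python
import itertools
import numpy as np

def kfold_indices(n, k, seed=0):
    """Shuffle 0..n-1 and split it into k disjoint folds."""
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)

def grid_search_cv(X, y, param_grid, score_fn, k=10):
    """Exhaustive grid search with k-fold cross-validation.
    Each (params, fold) evaluation is independent -> trivially parallelizable."""
    folds = kfold_indices(len(X), k)
    results = {}
    for params in itertools.product(*param_grid.values()):
        scores = []
        for i, test_idx in enumerate(folds):
            train_idx = np.hstack([f for j, f in enumerate(folds) if j != i])
            scores.append(score_fn(X[train_idx], y[train_idx],
                                   X[test_idx], y[test_idx], params))
        results[params] = float(np.mean(scores))
    best = max(results, key=results.get)
    return best, results

def nn_score(Xtr, ytr, Xte, yte, params):
    """Stand-in scorer (1-nearest-neighbour accuracy); a real run would
    train an SVM with the given (C, gamma) here instead."""
    d = np.linalg.norm(Xte[:, None] - Xtr[None], axis=2)
    return float((ytr[d.argmin(axis=1)] == yte).mean())

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 0.1, (10, 2)), rng.normal(5, 0.1, (10, 2))])
y = np.array([0] * 10 + [1] * 10)
grid = {"C": [1, 10000], "gamma": [1, 16]}   # values echoing the slides
best, results = grid_search_cv(X, y, grid, nn_score, k=10)
```

With 4 grid points and 10 folds this is 40 independent jobs; distributing them over 80 cores is what turns the serial Matlab workflow into near-interactive analytics.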
Recent Research Directions 43 / 70
Recent Research Directions Brain Data Classification Build a reconstructed brain (one 3D volume) that matches the sections & block images Understanding the sectioning of the brain and support for automation of the reconstruction 1. Some pattern exists: image content classification (e.g. SVMs, Random Forests, etc.) 2. No exact mathematical formula exists: no precise formula for the contour of the brain 3. Dataset (next: 5 brains, >100,000 pixels, 2 PB raw) Block face images (of frozen brain tissue) Every 20 micron (cut size), resolution: 3272 x 2469 ~14 MB / RGB image ~8 MB / corresponding mask image (ground truth, per-pixel class label) ~700 images, ~40 GB dataset Research activities jointly with T. Dickscheid et al. (Juelich Institute of Neuroscience & Medicine) 44 / 70
Recent Research Directions Deep Learning Investigate a pipeline for cell nuclei detection and tissue clustering 1. Some pattern exists Image content classification & clustering 2. No exact mathematical formula exists No precise formula for brain layers 3. Dataset raw images exist Needs to be properly prepared Generate labeled data to learn from (manual tool supporting scientists) Use Deep Learning (deep convolutional neural network, GPGPUs) to classify cell nuclei Extract cell nuclei into 2D/3D point cloud [13] Deep Learning Architecture Cluster different brain areas by cell density (parallel DBSCAN) Research activities jointly with T. Dickscheid et al. (Juelich Institute of Neuroscience & Medicine) 45 / 70
Conclusions 46 / 70
Conclusions Scientific peer review is essential to progress in the field Work in the field needs to be guided & steered by communities NIC Scientific Big Data Analytics (SBDA) is a first step (learn from HPC) Towards enabling reproducibility by uploading runs and datasets Selected SBDA benefit from parallelization Statistical data mining techniques are able to reduce big data (e.g. PCA, etc.) Benefits in n-fold cross-validation & raw data, less on preprocessed data Two codes available to use and maintained @JSC: HPDBSCAN, pisvm The number of data analytics technologies is incredibly high Thorough analysis and evaluation is hard (needs different infrastructures) Few open-source & working versions available; often only paper studies Still evaluating approaches: HPC, map-reduce, Spark, SciDB, MaTEx, ... 47 / 70
References 48 / 70
References (1) [1] R. Huber, M. Riedel et al., PANGAEA Data Publisher for Earth & Environmental Science - Research data enters scholarly communication and big data analysis, presentation at EGU 2014, Vienna [2] G. Cavallaro, J.A. Benediktsson and M. Riedel et al., Smart Data Analytics Methods for Remote Sensing Applications, in proceedings of IGARSS 2014 [3] M. Goetz & C. Bodenstein, Clustering - Highly Parallelizable DBSCAN Algorithm, JSC, Online: http://www.fz-juelich.de/ias/jsc/en/research/distributedcomputing/dataanalytics/clustering/clustering_node.html [4] Ester, Martin, et al., "A density-based algorithm for discovering clusters in large spatial databases with noise", KDD, Vol. 96, 1996 [5] Patwary, Md Mostofa Ali, et al., "A new scalable parallel DBSCAN algorithm using the disjoint-set data structure", High Performance Computing, Networking, Storage and Analysis (SC), 2012 International Conference for, IEEE, 2012 [6] PANGAEA Earth Science Data Collection, Online: http://www.pangaea.de/ [7] C. Cortes and V. Vapnik, Support-vector networks, Machine Learning, vol. 20(3), pp. 273-297, 1995 [8] An Introduction to Statistical Learning with Applications in R, Online: http://www-bcf.usc.edu/~gareth/isl/index.html [9] Original pisvm tool, Online: http://pisvm.sourceforge.net/ [10] B2SHARE data collection Rome Dataset, Online: http://hdl.handle.net/11304/4615928c-e1a5-11e3-8cd7-14feb57d12b9 [11] G. Cavallaro, M. Mura, J.A. Benediktsson, L. Bruzzone, A Comparison of Self-Dual Attribute Profiles based on different filter rules for classification, IEEE IGARSS 2014, Quebec, Canada [12] B2SHARE data collection pisvm1.2 Analytics JUDGE Cluster Rome Images 55 Features, Online: http://hdl.handle.net/11304/6880662c-1edf-11e4-81ac-dcbd1b51435e 49 / 70
References (2) [13] Deep Learning Architecture, Online: http://parse.ele.tue.nl/cluster/2/cnnarchitecture.jpg [14] UNICORE, Online: http://www.unicore.eu [15] DOE ASCAC Report, Synergistic Challenges in Data-Intensive Science and Exascale Computing [16] M. Riedel and P. Wittenburg et al., A Data Infrastructure Reference Model with Applications: Towards Realization of a ScienceTube Vision with a Data Replication Service, 2013 [17] Shearer C., The CRISP-DM model: the new blueprint for data mining, J Data Warehousing (2000); 5:13-22 [18] Jeremy Ginsberg et al., Detecting influenza epidemics using search engine query data, Nature 457, 2009 [19] David Lazer, Ryan Kennedy, Gary King, and Alessandro Vespignani, The Parable of Google Flu: Traps in Big Data Analysis, Science Vol (343), 2014 [20] B2SHARE data collection, remote sensing indian pines images, Online: http://hdl.handle.net/11304/7e8eec8e-ad61-11e4-ac7e-860aa0063d1f [21] B2SHARE data collection, pisvm remote sensing indian pines analytics results (raw), Online: http://hdl.handle.net/11304/c06a8c7e-fe6c-11e4-8a18-f31aa6f4d448 [22] B2SHARE data collection, pisvm remote sensing indian pines analytics results (processed), Online: http://hdl.handle.net/11304/c528998e-ff7c-11e4-8a18-f31aa6f4d448 [23] B2SHARE data collection, Analytics 10 fold cross-validation (raw), Online: http://hdl.handle.net/11304/163ba8e8-fe60-11e4-8a18-f31aa6f4d448 [24] B2SHARE data collection, Analytics 10 fold cross-validation (processed), Online: http://hdl.handle.net/11304/5bba8e36-fe63-11e4-8a18-f31aa6f4d448 50 / 70
Acknowledgements PhD Student Gabriele Cavallaro, University of Iceland Tómas Philipp Runarsson, University of Iceland Kristján Jonasson, University of Iceland Timo Dickscheid, Markus Axer, Stefan Köhnen, Tim Hütz, Institute of Neuroscience & Medicine, Juelich Selected Members of the Research Group on High Productivity Data Processing Ahmed Shiraz Memon Mohammad Shahbaz Memon Markus Goetz Christian Bodenstein Philipp Glock Matthias Richerzhagen 51 / 70
Thanks Slides available at http://www.morrisriedel.de/talks 52 / 70
Selected Backup Slides for Discussions 53 / 70
Distributed Large-scale Data Management [14] UNICORE.eu 54 / 70
In-Situ Analytics for HPC & Exascale Exascale application with in-situ correlations & data reduction (e.g. clustering, classification) and in-situ statistical data mining Visual analytics (e.g. map-reduce jobs, R-MPI) Interactive scientific visualization & beyond, with computational steering Analytics, visualization, and computational simulation parts coupled via scalable I/O, an in-memory key-value pair DB, and a distributed archive Exascale computer with access to exascale storage/archives [15] Inspired by ASCAC DOE report 55 / 70
Tools for Large-scale Distributed Data Management Useful tools for data-driven scientists & HPC users [16] M. Riedel & P. Wittenburg et al. 56 / 70
Need for Sharing & Reproducibility in HPC Example: a professor's research & PhD thesis activities & papers, and a student's bachelor thesis activities, e.g. improving code (same data) Sharing different datasets is key One tends to lose the overview of which data is stored on which platform How do we gain the trust to delete data when duplicates exist on different systems (e.g. with another collaborator)? Teach classes with good AND bad examples! 57 / 70
Smart Data Analytics Process Big Data Analytics combined with Traditional Data Analysis Concrete Datasets (& source/sensor) (parallel) Algorithms & Methods Technologies & Resources Simple Data Preprocessing Automated Parallel Data Analytics Manual choices: Feature Reduction / Feature Extraction / Feature Selection Scientific Data Applications Data Postprocessing [17] C. Shearer, CRISP-DM model, Journal Data Warehousing, 5:13 Data Analysis Reference Data Analytics for reusability & learning Report for joint usage, time to solution CRISP-DM report Openly Shared Datasets Combine both: Smart Data Analytics Running Analytics Code 58 / 70
Selected Research Data Alliance (RDA) Activities Big Data Analytics Interest Group Establish something like the UCI machine learning repository, but for big data analytics [2] G. Cavallaro and M. Riedel et al., Smart Data Analytics Methods for Remote Sensing Applications, IGARSS 2014 Parallel Brain Data Analytics Satellite Data (Quickbird) Parallel Support Vector Machines (SVM) HPC & MPI Classification Study of Land Cover Types Best Practices Classification++ JUDGE system @ JSC Reference Data Analytics for reusability & learning CRISP-DM Report Openly Shared Datasets Running Analytics Code Research activities with Gabriele Cavallaro (PhD thesis, UoIceland) on Self-Dual Attribute Profiles 59 / 70
Reproducibility Example in Data-driven Science (1) Having this tool available on the Web helps tremendously to save time on non-research tasks Using the tool enables a better focus on the research tasks 60 / 70
Reproducibility Example in Data-driven Science (2) Sharing preprocessed data LibSVM format Training and Testing Datasets Different setups for analysis (SDAP on All or SDAP on Panchromatic) 61 / 70
Reproducibility Example in Data-driven Science (3) Simple download from HTTP using the wget command Other open B2SHARE datasets Well-defined directory structures before adopting B2SHARE regularly 62 / 70
Reproducibility Example in Data-driven Science (4) Make a short note in your directory linking back to B2SHARE Enables the trust to delete data if necessary (working against big data) A link back to B2SHARE allows quick checks; a file that links back fosters trust 63 / 70
Reproducibility Example in Data-driven Science (5) A bachelor project: different versions of a parallel neural network code (another classification technique) and different versions of a parallel support vector machine code True reproducibility needs: (1) datasets; (2) technique parameters (here for the SVM); and (3) correct versions of the algorithm code 64 / 70
Deep Learning (1) Classical Machine Learning Dealing with Big Data in traditional Machine Learning Define Features to learn from?! Transform data into supported format?! How to reduce dimensions?! How to parallelize?! 65 / 70
Deep Learning (2) Deep Learning Dealing with Big Data in Deep Learning Define features to learn from: automatically learn how to define features Transform data into a supported format: adapt the model to your data How to reduce dimensions: automatically reduce dimensions in every hidden layer How to parallelize: naturally the brain is parallel, so Artificial Neural Networks are! A. Ng, Google Brain 66 / 70
Deep Learning (3) Deep Learning in Computational Biomedicine Genome Analysis Find high level features on low level omics data Medical Image Analysis Use 2D (or 3D) structure of the data for classification Unstructured Data Analysis Use DL for text analysis to classify patient data, drug recommendations by users, Etc 67 / 70
Deep Learning (4) Deep Learning Packages There exist several frameworks for deep neural networks Pylearn2 Python tool on top of the Theano python library Easy configuration of data, model, learning via YAML files CUDA support for accelerated calculations Jobman for parallel cross validation Caffe C++ implementation with python & matlab wrappers CUDA acceleration DL4J Java implementation of Deep Learning CUDA + Hadoop support 68 / 70
Chances and Pitfalls for Scientific Big Data Analytics ~2009: the H1N1 virus made headlines Nature paper from Google employees Explains how Google is able to predict winter flus fast Not only on a national scale, but down to regions Possible via logged big data search queries ~2014: The Parable of Google Flu Large errors in flu prediction & lessons learned: (1) Dataset: transparency & replicability impossible (2) Study the algorithms, since they keep changing (3) It is not just about the size of the data [18] Jeremy Ginsberg et al., Detecting influenza epidemics using search engine query data, Nature 457, 2009 [19] David Lazer, Ryan Kennedy, Gary King, and Alessandro Vespignani, The Parable of Google Flu: Traps in Big Data Analysis, Science Vol (343), 2014 Big data is not always better data Think about the difference between correlation and causality 69 / 70
Location-based Social Network-based Health Analytics Scientific Domain Area Smart Cities approaches combined with Health Analytics Research Scientific Outcome Traffic density estimation Network emission model Location-based Social Networks (LBSN) Data Open data sources: Twitter & Foursquare Plan: validation with real measurements in cities Tweets Check-ins Research activities with Markus Goetz (PhD thesis), Juelich Supercomputing Centre, UoIceland 70 / 70