New advanced solutions for Genomic big data Analysis and Visualization
1 New advanced solutions for Genomic big data Analysis and Visualization. Ignacio Medina (Nacho), Head of Computational Biology Lab, HPC Service, University of Cambridge, Cambridge, UK; Team lead for Bioinformatics Software Development, Bioinformatics Department, Genomics England, London, UK
2 Overview Introduction OpenCB initiative Advanced Computing Technologies OpenCB Projects CellBase Genome Maps, Big data visualization High-Performance Genomics (HPG) OpenCGA Q&A
3 Introduction Big data in Genomics, a new scenario in biology Next-Generation Sequencing (NGS) is a high-throughput technology for DNA sequencing that is changing the way researchers perform genomic experiments. Most new experiments are conducted by sequencing: re-sequencing, RNA-seq, Meth-seq, ChIP-seq, ... Experiments have increased data size by more than 5000x compared with microarrays. Surprisingly, most of the existing software solutions are not very different. Sequencing costs keep falling; today a whole genome can be sequenced for just $1000, so much more data is expected. Data acquisition is highly distributed and involves heterogeneous formats. A single HiSeq X Ten system can sequence ~20,000 human genomes a year
4 Introduction Standard NGS experiment data analysis Illumina NGS sequencer series: HiSeq 2500 provides high-quality 2x125bp reads: Gb in 1-6 days, 90.2% of bases above Q30. One human genome at ~60x coverage. HiSeq 4000 provides high-quality 2x150bp reads: Gb in 1-4 days, >75% of bases above Q30. Up to 12 human genomes at ~40x coverage. Each sample produces a FASTQ file ~1TB in size containing ~1-2B reads. New Illumina X Ten: consists of 10 ultrahigh-throughput HiSeq X sequencers. The first $1000 human genome sequencer, it can sequence up to 20,000 genomes per year. Variant Calling pipeline
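A quick illustration of the Q30 quality metric quoted on this slide. This is a minimal sketch, not part of any HPG tool, assuming the standard Phred+33 quality encoding used in FASTQ files:

```python
# Minimal sketch: estimate the fraction of bases at or above Q30 in FASTQ data.
# Assumes Phred+33 encoding (quality = ASCII code - 33), the common convention.
def q30_fraction(fastq_lines):
    total = q30 = 0
    for i, line in enumerate(fastq_lines):
        if i % 4 == 3:  # every 4th line of a FASTQ record is the quality string
            for ch in line.strip():
                total += 1
                if ord(ch) - 33 >= 30:
                    q30 += 1
    return q30 / total if total else 0.0

reads = [
    "@read1", "ACGT", "+", "IIII",   # 'I' encodes Q40, above Q30
    "@read2", "ACGT", "+", "!!!!",   # '!' encodes Q0, below Q30
]
print(q30_fraction(reads))  # 0.5 (4 of 8 bases are >= Q30)
```

A real run would stream the file rather than hold lines in memory, since a single sample can reach ~1TB.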
5 Introduction Big data in Genomics Some current projects: At EMBL-EBI and Sanger: European Variation Archive (EVA): open archive for all public genomic variation data for all species. A new project with only a few TBs of data so far. 1000G Phase 3: about 2500 individuals from 26 populations, a few hundred TBs. Other big data projects: European Genome-phenome Archive (EGA): stores human datasets under controlled access. Current size about 2PB, expected to increase 2x-3x over the next few years. New functionality is now being implemented. NIHR BRIDGE: 10,000 whole genomes from rare diseases, ~1-2PB of data expected. Genomics England (GEL): sequencing 100,000 whole genomes from the UK, with several rare diseases and cancers being studied; data estimation: ~20PB of BAM and ~400TB of VCF data are expected! About 100 whole genomes/day, ~5-10TB/day. International Cancer Genome Consortium (ICGC): stores more than 10,000 sequenced cancers, a few PB of data. And of course many medium-sized projects. Data acquisition in Genomics is highly distributed; there is not a single huge project. The Square Kilometer Array (SKA) project will produce 750 terabytes/second by 2025. Data encryption and security is a major concern in many of them. Data stored in different data centers
6 Introduction Estimated sequencing data size projection for human Big Data: Astronomical or Genomical? Stephens ZD. et al., PLOS Biology, July 2015 ABSTRACT - Genomics is a Big Data science and is going to get much bigger, very soon, but it is not known whether the needs of genomics will exceed other Big Data domains. Projecting to the year 2025, we compared genomics with three other major generators of Big Data: astronomy, YouTube, and Twitter. Our estimates show that genomics is a four-headed beast: it is either on par with or the most demanding of the domains analyzed here in terms of data acquisition, storage, distribution, and analysis.
7 Introduction Genomic Variant Dataset, big and complex Logical view of a genomic variation dataset; data come from different VCF files. Hundreds of millions of mutations, with some meta data needed: variant annotation, clinical info, consequence types, conservation scores, population frequencies... Genomics England project: 200M variants x 100K samples, about 20 trillion points. With different layers of data, about trillion points, a lot of meta data for variants and samples, about 400TB to be indexed. Meta data: sample annotation, phenotype, family and population pedigree, clinical variables... Heterogeneous data analysis and algorithms, different technologies and solutions required: search and filter using data and meta data, data mining, correlation, statistical tests, machine learning, interactive analysis, network-based analysis, visualization, encryption... Different layers of information: genotype for samples, allele counts, quality scores... Applications: personalized medicine...
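The layered dataset described above (variants plus annotation, population frequencies, and per-sample genotypes) can be sketched as records that are filtered across layers at once. The field names below are illustrative, not the actual OpenCB data model:

```python
# Hedged sketch of a multi-layer variant record and a cross-layer filter
# (consequence type + population frequency). Field names are illustrative.
variants = [
    {"chrom": "1", "pos": 12345, "consequence": "missense_variant",
     "af_1000g": 0.002, "genotypes": {"S1": "0/1", "S2": "0/0"}},
    {"chrom": "1", "pos": 67890, "consequence": "synonymous_variant",
     "af_1000g": 0.40, "genotypes": {"S1": "1/1", "S2": "0/1"}},
]

def rare_damaging(vs, max_af=0.01,
                  kinds=("missense_variant", "stop_gained")):
    # keep variants that are both rare and of a damaging consequence type
    return [v for v in vs
            if v["consequence"] in kinds and v["af_1000g"] <= max_af]

hits = rare_damaging(variants)
print([v["pos"] for v in hits])  # [12345]
```

At GEL scale (200M variants x 100K samples) such filters must run inside an indexed store, not over in-memory lists; this only shows the shape of the query.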
8 Introduction Big data analysis challenges Data analysis and visualization: real-time and interactive graphical data analysis and visualization is needed. Data mining: complex queries, aggregations, correlations,... Security: sometimes data access requires authentication, authorization, encryption,... Performance and scalability: software must be high-performance and scalable. Data integration: different types of data such as variation, expression, ChIP,... Sharing and collaboration: data models to ease the collaboration among different groups. Avoid moving data. Knowledge base and sample annotations: many of the visual analytic tools need genome and sample annotations. Do current bioinformatic tools solve these problems?
9 Introduction Current status in bioinformatics Many bioinformatic tools are great but, in general, not designed and implemented for processing and analyzing big data. Tools usually don't exploit the parallelism of modern hardware or current high-performance and scalable technologies. Poor performance and scalability. We need to develop a new generation of software and methodologies to: improve performance and scalability of analysis; store data efficiently and securely so it can be queried and visualized; be easy to adapt to new technologies and use cases to be useful to researchers. So, are bioinformaticians doing things right? Are researchers happy with their current tools? Is a paradigm change needed in bioinformatics? Most tools are designed and implemented to run in a single workstation, but PB-scale data requires more advanced computing technologies
10 OpenCB Open source initiative for Computational Biology Software in biology is still usually developed in small teams; we must learn to collaborate in bigger projects to solve bigger problems. OpenCB tries to engage people to work on big problems: shared, collaborative and well-designed platforms are needed to build more advanced solutions to current biology problems. OpenCB aims to design and develop high-performance and scalable solutions for genomic big data analysis using the most modern computing technologies. Not oriented to any one programming language (cf. BioPerl, BioPython, Bioconductor,...): good software solutions may use different languages and technologies to solve different problems and use cases. So far, is where all the software we develop is being released. Available as open-source at GitHub. OpenCB is a collaborative project with more than 15 active developers and data analysts and more than 12 repositories. Many papers published during the last two years, very good adoption. Many different technologies used: HPC, Hadoop, web applications, NoSQL databases,...
11 OpenCB Some relevant projects biodata ( ) and ga4gh ( ): contain all data models, parsers and converters (Avro, Protobuf) for all OpenCB projects. CellBase ( ): the biological knowledge-base for the OpenCB project (Ensembl core, variation and regulatory, UniProt, ClinVar, COSMIC, expression, conservation, ...). A Variant Annotation tool implemented. A high-performance NoSQL implementation, a CLI and web services implemented. HPG BigData ( ): Hadoop-based implementation of data converters (Avro, Parquet) and bioinformatic tools (ie. samtools). Simple indexing for HBase, Hive and Impala developed. C code embedded using JNI to speed up processing. OpenCGA ( ): integrates most of the OpenCB projects to provide a scalable and high-performance platform. OpenCGA Catalog provides an authenticated environment, file and sample annotations, system audit,... OpenCGA Storage is a plugin-oriented framework that allows indexing hundreds of millions of variants for thousands of samples in different storage engines. Stats and annotation implemented. Genome Maps ( ): a web-based NGS and genome browser. Many other related projects for big data analysis and visualization, check
12 OpenCB A big data friendly architecture Client Rich Web applications and visualization. New HTML5 web technologies: SVG, IndexedDB, WebWorkers, WebGL, SIMD.js,... Server Distributed and HPC technologies for real-time and interactive data analysis and visualization: NoSQL, Hadoop, HPC, Search, filter and aggregate the data needed by researchers
13 Advanced Computing Technologies Web technologies, HTML5 standard and RESTful WS HTML5 brings many new standards and libraries to JavaScript, allowing developers to build amazing web applications in an easy way. Canvas/SVG inline: browsers can render dynamic content and inline static SVG easily. IndexedDB: client-side indexed storage for high-performance queries. WebWorkers: a simple means of creating OS threads. WebGL: brings hardware-accelerated 3D graphics to the web, based on OpenGL ES. Many new web tools, frameworks and libraries are being developed to build great and bigger web applications: Performance: asm.js, SIMD.js,... Web Components: Polymer. Build: Grunt, Bower, Yeoman,... Visualization: d3.js, Three.js, Highcharts,... RESTful web services and JSON ease the development of light and fast RPC
14 Advanced Computing Technologies High-Performance Computing (HPC) HPC hardware: Intel MIC architecture: Intel Xeon and Intel Xeon Phi coprocessor, 1.01Tflops DP and more than 50 cores. Nvidia Tesla: Tesla K20X, almost 1.31Tflops DP and 2688 CUDA cores. Some HPC frameworks available: Shared-memory parallel: OpenMP, OpenCL. GPGPU computing: CUDA, OpenCL. Message Passing Interface (MPI). SIMD: SSE4 instructions extended to AVX2 with 256-bit SIMD. Heterogeneous HPC in shared memory: CPU (OpenMP+AVX2) + GPU (CUDA). Hybrid approach: CPU (OpenMP + SSE/AVX), MPI, GPU (Nvidia CUDA), Hadoop MapReduce
15 Advanced Computing Technologies Big data analysis and NoSQL databases Apache Hadoop ( ) is currently the de facto standard for big data processing and analysis: Core: HDFS, MapReduce, HBase. Spark: SparkML, SparkR. NoSQL databases, four main families of high-performance distributed and scalable databases: Column store: Apache Hadoop HBase,... Document store: MongoDB, Solr,... Key-Value: Redis,... Graph: Neo4J,... New solutions for PB-scale interactive analysis: Google Dremel (Google BigQuery) and similar implementations: new Hive, Cloudera Impala. Nested, comma- and tab-separated data; SQL queries allowed
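The MapReduce model named above can be shown in miniature: a mapper emits key/value pairs and a reducer aggregates them. This toy runs in memory; a real Hadoop job would distribute both phases over HDFS:

```python
from collections import defaultdict

# Toy illustration of the MapReduce model: count variants per chromosome
# from VCF-like tab-separated lines (first column = chromosome).
vcf_lines = [
    "1\t111\t.\tA\tG",
    "1\t222\t.\tC\tT",
    "X\t333\t.\tG\tA",
]

def mapper(line):
    yield line.split("\t")[0], 1       # emit (chromosome, 1) per record

def reducer(pairs):
    counts = defaultdict(int)
    for key, value in pairs:           # the "shuffle" groups keys; we just sum
        counts[key] += value
    return dict(counts)

pairs = [kv for line in vcf_lines for kv in mapper(line)]
print(reducer(pairs))  # {'1': 2, 'X': 1}
```

The same mapper/reducer pair, written as Hadoop tasks, scales from three lines to billions of records without changing the logic.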
16 CellBase An integrative database and RESTful Web Service API CellBase is a comprehensive integrative NoSQL database and a RESTful Web Service API, designed to provide a high-performance and scalable solution. Currently contains more than 1TB of data: Ensembl Core features: genome sequence, genes, transcripts, exons, gene expression, conservation,... Protein: UniProt, InterPro. Variation: dbSNP and Ensembl Variation, COSMIC, ClinVar,... Functional: OBO ontologies (Gene Ontology), InterPro domains, drug interactions,... Regulatory: TFBS, miRNA targets, conserved regions, CTCF, histones,... Systems biology: Reactome, Interactome (IntAct). Published at NAR 2012: Used by EMBL-EBI, ICGC, GEL, among others. Project: Wiki:
17 CellBase New features in version 3.x and 4.x CellBase v3.0 moved to the MongoDB NoSQL database to provide much higher performance and scalability; most queries run in <40ms. Installed at the University of Cambridge and EMBL-EBI, some links: GitHub Wiki Official domain and Swagger Variant annotation integrated. Coming features in v4.0: Focus on clinical data. Many more species. Aggregation and stats. R and Python clients. Data customization. Richer and scalable API
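A sketch of what consuming a CellBase-style RESTful service looks like from a client. The host, URL path, and JSON response shape below are illustrative assumptions, not the real CellBase API; check the project wiki and Swagger docs for the actual endpoints:

```python
import json

# Hypothetical base URL, for illustration only (not the real CellBase host).
BASE = "https://cellbase.example.org/webservices/rest/v4"

def region_gene_url(species, region):
    # Build a REST URL asking for genes overlapping a genomic region.
    return f"{BASE}/{species}/genomic/region/{region}/gene"

# Simulated JSON reply (shape assumed for the example); a real client would
# fetch the URL over HTTP and parse the body the same way.
fake_reply = '{"response": [{"result": [{"name": "BRCA2"}]}]}'
genes = [g["name"] for g in json.loads(fake_reply)["response"][0]["result"]]

print(region_gene_url("hsapiens", "13:32315000-32400000"))
print(genes)  # ['BRCA2']
```

This is why the slide emphasizes RESTful WS + JSON: the client needs nothing beyond a URL builder and a JSON parser, which is what keeps Genome Maps and the R/Python clients light.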
18 Genome Maps A big data HTML5+SVG genome browser Genome-scale data visualization is an important part of the data analysis: do not move data! Main features of Genome Maps ( published at NAR 2013): 100% HTML5 web-based: HTML5+SVG and other JavaScript libraries. Always updated, no browser plugins needed. Genome data is mainly consumed from the CellBase and OpenCGA databases through RESTful web services. JSON data is parsed and SVG is rendered, making the server lighter and improving network transfers. Other features: NGS data viewer, multi-species, feature caches, API-oriented, embeddable, key navigation,... Beta:
19 Genome Maps Software Architecture and Design Genome Maps design goals are to provide high-performance visualization: JSON data fetched remotely (CellBase, OpenCGA, EVA), no images sent from the server; easy to integrate: event system and JavaScript API developed; memory efficient: an integrated cache (IndexedDB) reduces the number of calls. (Architecture diagram: OpenCB JSorolla JS library; Genome Viewer: Menu, Region Overview, DataAdapter (cache), Renderer, Tracks)
20 Genome Maps New features and coming releases Version v3.2 (summer 2015): More species (~20) and a more efficient IndexedDB-based FeatureCache, smaller memory footprint and fewer remote queries. More NGS-data friendly, better rendering and features for BAM and VCF files. More secure: uses HTTPS and can read and cache encrypted data (ie. AES). Future version v4.0, tentative release date during 1Q16: JSCircos for structural variation and other visualizations ( ). RNA-seq and 3D WebGL visualization. Local data browsing using Docker. New Hadoop-based server features being added. Used by some projects: EMBL-EBI EVA: ICGC data portal: New genome browser at Peer Bork group Lens Beta website:
21 HPG Suite NGS pipeline, a HPC and big data implementation NGS sequencer produces a FASTQ file, up to hundreds of GB per run. HPG suite (High-Performance Genomics): QC and preprocessing: QC stats, filtering and preprocessing options. HPG Aligner, short read aligner: double mapping strategy: Burrows-Wheeler Transform (GPU Nvidia CUDA) + Smith-Waterman (CPU OpenMP+SSE/AVX); produces a SAM/BAM file. More info at: /hpg/doku.php. QC and preprocessing of the BAM: QC stats, filtering and preprocessing options. Variant calling analysis: GATK and SAMtools mpileup, HPC implementation, statistical genomic tests; produces a VCF file. QC and preprocessing of the VCF: QC stats, filtering and preprocessing options. HPG Variant, variant analysis: consequence type, GWAS, regulatory variants and systems biology information. Other analysis (HPC4Genomics consortium): RNA-seq (mRNA sequencing), DNA assembly (not a real analysis), Meth-seq, copy number, transcript isoform,... VCF viewer: HTML5+SVG web-based viewer
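The pipeline on this slide is a chain of stages where each tool consumes the previous tool's output format (FASTQ to BAM to VCF). A minimal sketch of that dataflow, with stage names as placeholders rather than real command lines:

```python
# Schematic of the slide's pipeline as (stage, input format, output format)
# triples; stage names are placeholders, not actual HPG commands.
PIPELINE = [
    ("qc_fastq", "FASTQ", "FASTQ"),   # QC stats, filtering, preprocessing
    ("align",    "FASTQ", "BAM"),     # HPG Aligner: BWT (GPU) + Smith-Waterman (CPU)
    ("qc_bam",   "BAM",   "BAM"),
    ("call",     "BAM",   "VCF"),     # variant calling: GATK / SAMtools mpileup
    ("qc_vcf",   "VCF",   "VCF"),
    ("analyse",  "VCF",   "report"),  # HPG Variant: stats, GWAS, effect
]

def run(pipeline, start="FASTQ"):
    # Walk the chain, checking each stage accepts what the previous one produced.
    fmt = start
    for stage, needs, produces in pipeline:
        assert fmt == needs, f"{stage} expects {needs}, got {fmt}"
        fmt = produces
    return fmt

print(run(PIPELINE))  # report
```

Checking format compatibility up front is what a workflow engine does before launching hundreds-of-GB jobs, which is the point of wiring the suite together rather than running tools by hand.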
22 HPG Aligner Why another NGS read aligner There are more than 70 aligners. This project began as an experimental project: could a well-designed algorithm implemented using modern HPC technologies speed up NGS data mapping? "It's hardware that makes a machine fast. It's software that makes a fast machine slow" - Craig Bruce. Focus on performance and sensitivity. Current software is designed for standard workstations, but new projects are processed in clusters. During these two years we have developed one of the fastest and one of the most sensitive NGS aligners for DNA and RNA by using different HPC technologies. Some computing papers. Bioinformatics paper:
23 HPG Aligner DNA aligner results, quality similar to the new BWA-MEM Similar sensitivity to BWA-MEM but 3-5x faster. Other features: Adaptor support. INDEL realignment. Base recalibration
24 HPG Aligner Main and coming features Part of the HPG Aligner suite ( with other tools: hpg-fastq, hpg-bam and hpg-aligner. Usability: the two aligners under the same binary, only one execution is needed to generate the BAM output file; faster index creator; multi-core implementation. Focused on providing the best sensitivity and the best performance by using HPC technologies: multicore, SSE4/AVX2, GPUs, MPI,... Coming features: Smith-Waterman AVX2 implementation and Xeon Phi. Initial support for GRCh38 with ALTs. BS-seq released for methylation analysis (being tested). An Apache Hadoop implementation will allow running it in a distributed environment. New SA index (not BWT) in version 2.0 for performance improvements; more memory needed, but first results show a speed-up of 2-4x, i.e. more than 10 billion exact 100nt reads aligned in 1 hour on a 12-core node.
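The suffix-array (SA) index mentioned for version 2.0 supports exact matching by keeping all suffix start positions in lexicographic order. A toy illustration of the idea, nothing like the real hpg-aligner implementation:

```python
# Toy suffix-array exact matching: build the sorted suffix index, then find
# every occurrence of a pattern. Real indexes binary-search the sorted
# suffixes; the scan below is linear for clarity.
def suffix_array(s):
    # indices of all suffixes of s, sorted lexicographically by suffix
    return sorted(range(len(s)), key=lambda i: s[i:])

def sa_find(text, sa, pattern):
    return sorted(i for i in sa if text[i:i + len(pattern)] == pattern)

genome = "ACGTACGT"
sa = suffix_array(genome)
print(sa_find(genome, sa, "ACG"))  # [0, 4]
```

The memory/speed trade-off on the slide follows from this structure: an SA stores one position per suffix (more memory than a compressed BWT index) but answers exact queries with simpler, faster lookups.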
25 HPG Variant A suite of tools for variant analysis HPG Variant, a suite of tools for HPC-based genomic variant analysis. Three tools are already implemented: vcf, gwas and effect. Implemented using OpenMP, SSE/AVX, Nvidia CUDA and MPI for large clusters. Hadoop version coming soon. VCF: C library and tool that allows analyzing large VCF files with a low memory footprint: stats, filter, split, merge,... (paper in preparation). VARIANT = VARIant ANalysis Tool. Example: hpg-variant vcf stats vcf-file ceu.vcf GWAS: suite of tools for GWAS variant analysis (~Plink). Genetic tests: association, TDT, Hardy-Weinberg,... Epistasis: HPC implementation with SSE4 and MPI; 2-way 420K SNPs epistasis in 9 days on a 12-core node. Example: hpg-variant gwas tdt vcf-file tumor.vcf EFFECT: a CLI and web application; a cloud-based genomic variant effect predictor tool has been implemented ( published in NAR 2012)
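One of the genetic tests listed, Hardy-Weinberg equilibrium, reduces to a chi-square statistic on genotype counts. The sketch below shows only that underlying arithmetic; hpg-variant's actual implementation is an HPC C version:

```python
# Hardy-Weinberg chi-square sketch: compare observed genotype counts (AA, AB,
# BB) against the counts expected from the allele frequencies p and q.
def hardy_weinberg_chi2(aa, ab, bb):
    n = aa + ab + bb
    p = (2 * aa + ab) / (2 * n)        # frequency of allele A
    q = 1 - p
    expected = (p * p * n, 2 * p * q * n, q * q * n)
    observed = (aa, ab, bb)
    return sum((o - e) ** 2 / e
               for o, e in zip(observed, expected) if e > 0)

# A population exactly in equilibrium (p = q = 0.5) yields a statistic of 0:
print(hardy_weinberg_chi2(25, 50, 25))  # 0.0
```

A real tool would convert the statistic to a p-value (1 degree of freedom) and run it per variant across millions of sites, which is where the OpenMP/SSE parallelism pays off.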
26 HPG BigData A suite of tools for working in genomics with Hadoop New project to provide big data analysis in genomics based on Hadoop and Spark. Main features: Hadoop-based implementation of data converters (Avro, Parquet) and some bioinformatic tools (ie. samtools). Simple VCF indexing for HBase, Hive and Impala developed. C code embedded using JNI to speed up processing. Java libraries and command line implemented. Many new NGS data analyses being implemented with MapReduce and Spark. Available at
27 OpenCGA Overview and goals Open-source Computational Genomics Analysis (OpenCGA) aims to provide researchers and clinicians with a high-performance and scalable solution for genomic big data processing and analysis. OpenCGA is built on OpenCB: CellBase, Genome Maps, Cell Maps, HPG Aligner, HPG BigData, Variant annotation. Project at GitHub:
28 OpenCGA Metadata Catalog OpenCGA Catalog provides user authentication, authorization, sample annotation, file and job tracking, audit,... It allows the different storage engines to perform optimizations. Sample annotation is one of the main features: it allows complex queries and aggregations, and helps detect bias and other problems with the data. R prototypes and interactive web-based plots being implemented
29 OpenCGA Storage Engines OpenCGA Storage provides a pluggable Java framework for storing and querying alignment and variant data. Two default implementations: MongoDB and Hadoop for huge performance and scalability (~500k inserts/second). Highly customizable and easy to extend (ie. only two Java classes need to be implemented). Data is processed, normalized, annotated and indexed; some stats are also precomputed and indexed. Variant pipeline. Precomputed data allows interactive analysis
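The pluggable-engine idea on this slide can be sketched as an abstract interface that each backend implements. The real framework is Java; the class and method names below are an illustrative Python analogue, not the actual OpenCGA API:

```python
from abc import ABC, abstractmethod

# Python analogue of a pluggable storage-engine interface: a backend only has
# to implement index() and query(), mirroring the "two classes" idea above.
class VariantStorageEngine(ABC):
    @abstractmethod
    def index(self, variants): ...

    @abstractmethod
    def query(self, **filters): ...

class InMemoryEngine(VariantStorageEngine):
    """Trivial backend; a real one would wrap MongoDB or HBase."""
    def __init__(self):
        self._store = []

    def index(self, variants):
        self._store.extend(variants)

    def query(self, **filters):
        return [v for v in self._store
                if all(v.get(k) == val for k, val in filters.items())]

engine = InMemoryEngine()
engine.index([{"chrom": "1", "pos": 100}, {"chrom": "2", "pos": 200}])
print(engine.query(chrom="2"))  # [{'chrom': '2', 'pos': 200}]
```

Because callers only see the interface, swapping MongoDB for Hadoop is a configuration change rather than a rewrite, which is what makes the framework "highly customizable and easy to extend".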
30 HPCS-Dell Genomic Collaboration New projects coming soon New collaboration between HPCS and Dell to port some big data processing and algorithms to new Dell Statistica software Dell Statistica provides an amazing platform for data analysis in many different areas, genomics can take advantage of this Proof of concept being developed for next year
31 Acknowledgements Bosses: Paul Calleja, Director of HPC Service, University of Cambridge; Augusto Rendon, Director of Bioinformatics of Genomics England. Team and core OpenCB developers: Jacobo Coll (Genomics England), Javier Lopez (EMBL-EBI), Joaquin Tarraga (CIPF), Cristina González (EMBL-EBI), Jose M. Mut (EMBL-EBI), Francisco Salavert (CIPF), Matthias Haimel (Addenbrooke's), Marta Bleda (Addenbrooke's). Collaborators: Joaquin Dopazo (CIPF), Justin Paschall (EMBL-EBI), Stefan Gräf (Addenbrooke's), UJI: Enrique Quintana, Héctor Martinez, Sergio Barrachina, Maribel Castillo; Dell; Cloudera: Tom White, Uri Laserson
Embedded Analytics & Big Data Visualization in Any App
Embedded Analytics & Big Data Visualization in Any App Boney Pandya Marketing Manager Greg Harris Systems Engineer Follow us @Jinfonet Our Mission Simplify the Complexity of Reporting and Visualization
Cloud-based Analytics and Map Reduce
1 Cloud-based Analytics and Map Reduce Datasets Many technologies converging around Big Data theme Cloud Computing, NoSQL, Graph Analytics Biology is becoming increasingly data intensive Sequencing, imaging,
Comparing Methods for Identifying Transcription Factor Target Genes
Comparing Methods for Identifying Transcription Factor Target Genes Alena van Bömmel (R 3.3.73) Matthew Huska (R 3.3.18) Max Planck Institute for Molecular Genetics Folie 1 Transcriptional Regulation TF
Global Alliance. Ewan Birney Associate Director EMBL-EBI
Global Alliance Ewan Birney Associate Director EMBL-EBI Our world is changing Research to Medical Research English as language Lightweight legal Identical/similar systems Open data Publications Grant-funding
Data processing goes big
Test report: Integration Big Data Edition Data processing goes big Dr. Götz Güttich Integration is a powerful set of tools to access, transform, move and synchronize data. With more than 450 connectors,
An Oracle White Paper November 2010. Leveraging Massively Parallel Processing in an Oracle Environment for Big Data Analytics
An Oracle White Paper November 2010 Leveraging Massively Parallel Processing in an Oracle Environment for Big Data Analytics 1 Introduction New applications such as web searches, recommendation engines,
Crosswalk: build world class hybrid mobile apps
Crosswalk: build world class hybrid mobile apps Ningxin Hu Intel Today s Hybrid Mobile Apps Application HTML CSS JS Extensions WebView of Operating System (Tizen, Android, etc.,) 2 State of Art HTML5 performance
Building Bioinformatics Capacity in Africa. Nicky Mulder CBIO Group, UCT
Building Bioinformatics Capacity in Africa Nicky Mulder CBIO Group, UCT Outline What is bioinformatics? Why do we need IT infrastructure? What e-infrastructure does it require? How we are developing this
Presenters: Luke Dougherty & Steve Crabb
Presenters: Luke Dougherty & Steve Crabb About Keylink Keylink Technology is Syncsort s partner for Australia & New Zealand. Our Customers: www.keylink.net.au 2 ETL is THE best use case for Hadoop. ShanH
Big Data Visualization with JReport
Big Data Visualization with JReport Dean Yao Director of Marketing Greg Harris Systems Engineer Next Generation BI Visualization JReport is an advanced BI visualization platform: Faster, scalable reports,
Cisco Data Preparation
Data Sheet Cisco Data Preparation Unleash your business analysts to develop the insights that drive better business outcomes, sooner, from all your data. As self-service business intelligence (BI) and
Scaling Objectivity Database Performance with Panasas Scale-Out NAS Storage
White Paper Scaling Objectivity Database Performance with Panasas Scale-Out NAS Storage A Benchmark Report August 211 Background Objectivity/DB uses a powerful distributed processing architecture to manage
Data-intensive HPC: opportunities and challenges. Patrick Valduriez
Data-intensive HPC: opportunities and challenges Patrick Valduriez Big Data Landscape Multi-$billion market! Big data = Hadoop = MapReduce? No one-size-fits-all solution: SQL, NoSQL, MapReduce, No standard,
Overview on Modern Accelerators and Programming Paradigms Ivan Giro7o [email protected]
Overview on Modern Accelerators and Programming Paradigms Ivan Giro7o [email protected] Informa(on & Communica(on Technology Sec(on (ICTS) Interna(onal Centre for Theore(cal Physics (ICTP) Mul(ple Socket
Using the Grid for the interactive workflow management in biomedicine. Andrea Schenone BIOLAB DIST University of Genova
Using the Grid for the interactive workflow management in biomedicine Andrea Schenone BIOLAB DIST University of Genova overview background requirements solution case study results background A multilevel
SQL + NOSQL + NEWSQL + REALTIME FOR INVESTMENT BANKS
Enterprise Data Problems in Investment Banks BigData History and Trend Driven by Google CAP Theorem for Distributed Computer System Open Source Building Blocks: Hadoop, Solr, Storm.. 3548 Hypothetical
Oracle Big Data SQL Technical Update
Oracle Big Data SQL Technical Update Jean-Pierre Dijcks Oracle Redwood City, CA, USA Keywords: Big Data, Hadoop, NoSQL Databases, Relational Databases, SQL, Security, Performance Introduction This technical
Ad Hoc Analysis of Big Data Visualization
Ad Hoc Analysis of Big Data Visualization Dean Yao Director of Marketing Greg Harris Systems Engineer Follow us @Jinfonet #BigDataWebinar JReport Highlights Advanced, Embedded Data Visualization Platform:
Bringing Big Data Modelling into the Hands of Domain Experts
Bringing Big Data Modelling into the Hands of Domain Experts David Willingham Senior Application Engineer MathWorks [email protected] 2015 The MathWorks, Inc. 1 Data is the sword of the
Luncheon Webinar Series May 13, 2013
Luncheon Webinar Series May 13, 2013 InfoSphere DataStage is Big Data Integration Sponsored By: Presented by : Tony Curcio, InfoSphere Product Management 0 InfoSphere DataStage is Big Data Integration
Introduction. Xiangke Liao 1, Shaoliang Peng, Yutong Lu, Chengkun Wu, Yingbo Cui, Heng Wang, Jiajun Wen
DOI: 10.14529/jsfi150104 Neo-hetergeneous Programming and Parallelized Optimization of a Human Genome Re-sequencing Analysis Software Pipeline on TH-2 Supercomputer Xiangke Liao 1, Shaoliang Peng, Yutong
Big Data Visualization and Dashboards
Big Data Visualization and Dashboards Boney Pandya Marketing Manager Greg Harris Systems Engineer Follow us @Jinfonet #BigDataWebinar JReport Highlights Advanced, Embedded Data Visualization Platform:
Datenverwaltung im Wandel - Building an Enterprise Data Hub with
Datenverwaltung im Wandel - Building an Enterprise Data Hub with Cloudera Bernard Doering Regional Director, Central EMEA, Cloudera Cloudera Your Hadoop Experts Founded 2008, by former employees of Employees
MADOCA II Data Logging System Using NoSQL Database for SPring-8
MADOCA II Data Logging System Using NoSQL Database for SPring-8 A.Yamashita and M.Kago SPring-8/JASRI, Japan NoSQL WED3O03 OR: How I Learned to Stop Worrying and Love Cassandra Outline SPring-8 logging
A CLOUD-BASED FRAMEWORK FOR ONLINE MANAGEMENT OF MASSIVE BIMS USING HADOOP AND WEBGL
A CLOUD-BASED FRAMEWORK FOR ONLINE MANAGEMENT OF MASSIVE BIMS USING HADOOP AND WEBGL *Hung-Ming Chen, Chuan-Chien Hou, and Tsung-Hsi Lin Department of Construction Engineering National Taiwan University
Using MySQL for Big Data Advantage Integrate for Insight Sastry Vedantam [email protected]
Using MySQL for Big Data Advantage Integrate for Insight Sastry Vedantam [email protected] Agenda The rise of Big Data & Hadoop MySQL in the Big Data Lifecycle MySQL Solutions for Big Data Q&A
Big Data Challenges in Bioinformatics
Big Data Challenges in Bioinformatics BARCELONA SUPERCOMPUTING CENTER COMPUTER SCIENCE DEPARTMENT Autonomic Systems and ebusiness Pla?orms Jordi Torres [email protected] Talk outline! We talk about Petabyte?
Euro-BioImaging European Research Infrastructure for Imaging Technologies in Biological and Biomedical Sciences
Euro-BioImaging European Research Infrastructure for Imaging Technologies in Biological and Biomedical Sciences WP11 Data Storage and Analysis Task 11.1 Coordination Deliverable 11.2 Community Needs of
Introduction to NGS data analysis
Introduction to NGS data analysis Jeroen F. J. Laros Leiden Genome Technology Center Department of Human Genetics Center for Human and Clinical Genetics Sequencing Illumina platforms Characteristics: High
The Future of Data Management with Hadoop and the Enterprise Data Hub
The Future of Data Management with Hadoop and the Enterprise Data Hub Amr Awadallah Cofounder & CTO, Cloudera, Inc. Twitter: @awadallah 1 2 Cloudera Snapshot Founded 2008, by former employees of Employees
Hadoop & Spark Using Amazon EMR
Hadoop & Spark Using Amazon EMR Michael Hanisch, AWS Solutions Architecture 2015, Amazon Web Services, Inc. or its Affiliates. All rights reserved. Agenda Why did we build Amazon EMR? What is Amazon EMR?
DNS Big Data Analy@cs
Klik om de s+jl te bewerken Klik om de models+jlen te bewerken! Tweede niveau! Derde niveau! Vierde niveau DNS Big Data Analy@cs Vijfde niveau DNS- OARC Fall 2015 Workshop October 4th 2015 Maarten Wullink,
Client Overview. Engagement Situation. Key Requirements
Client Overview Our client is one of the leading providers of business intelligence systems for customers especially in BFSI space that needs intensive data analysis of huge amounts of data for their decision
Big Data and Data Science: Behind the Buzz Words
Big Data and Data Science: Behind the Buzz Words Peggy Brinkmann, FCAS, MAAA Actuary Milliman, Inc. April 1, 2014 Contents Big data: from hype to value Deconstructing data science Managing big data Analyzing
Challenges for Data Driven Systems
Challenges for Data Driven Systems Eiko Yoneki University of Cambridge Computer Laboratory Quick History of Data Management 4000 B C Manual recording From tablets to papyrus to paper A. Payberah 2014 2
An example of bioinformatics application on plant breeding projects in Rijk Zwaan
An example of bioinformatics application on plant breeding projects in Rijk Zwaan Xiangyu Rao 17-08-2012 Introduction of RZ Rijk Zwaan is active worldwide as a vegetable breeding company that focuses on
Hadoop and Map-Reduce. Swati Gore
Hadoop and Map-Reduce Swati Gore Contents Why Hadoop? Hadoop Overview Hadoop Architecture Working Description Fault Tolerance Limitations Why Map-Reduce not MPI Distributed sort Why Hadoop? Existing Data
Lambda Architecture. Near Real-Time Big Data Analytics Using Hadoop. January 2015. Email: [email protected] Website: www.qburst.com
Lambda Architecture Near Real-Time Big Data Analytics Using Hadoop January 2015 Contents Overview... 3 Lambda Architecture: A Quick Introduction... 4 Batch Layer... 4 Serving Layer... 4 Speed Layer...
Practical Solutions for Big Data Analytics
Practical Solutions for Big Data Analytics Ravi Madduri Computation Institute ([email protected]) Paul Dave ([email protected]) Dinanath Sulakhe ([email protected]) Alex Rodriguez ([email protected])
INTRODUCTION TO APACHE HADOOP MATTHIAS BRÄGER CERN GS-ASE
INTRODUCTION TO APACHE HADOOP MATTHIAS BRÄGER CERN GS-ASE AGENDA Introduction to Big Data Introduction to Hadoop HDFS file system Map/Reduce framework Hadoop utilities Summary BIG DATA FACTS In what timeframe
Big Data on Microsoft Platform
Big Data on Microsoft Platform Prepared by GJ Srinivas Corporate TEG - Microsoft Page 1 Contents 1. What is Big Data?...3 2. Characteristics of Big Data...3 3. Enter Hadoop...3 4. Microsoft Big Data Solutions...4
A Brief Outline on Bigdata Hadoop
A Brief Outline on Bigdata Hadoop Twinkle Gupta 1, Shruti Dixit 2 RGPV, Department of Computer Science and Engineering, Acropolis Institute of Technology and Research, Indore, India Abstract- Bigdata is
Fast, Low-Overhead Encryption for Apache Hadoop*
Fast, Low-Overhead Encryption for Apache Hadoop* Solution Brief Intel Xeon Processors Intel Advanced Encryption Standard New Instructions (Intel AES-NI) The Intel Distribution for Apache Hadoop* software
