Addressing Big Data Challenges in Simulation-based Science


1 Addressing Big Data Challenges in Simulation-based Science Manish Parashar* Rutgers Discovery Informatics Institute (RDI2) NSF Center for Cloud and Autonomic Computing (CAC) Department of Electrical and Computer Engineering *Hoang Bui, Tong Jin, Qian Sun, Fan Zhang and other ex-students and collaborators. Manish Parashar, Rutgers University

2 Outline: data grand challenges; data challenges of simulation-based science; rethinking the simulations -> insights pipeline; the ADIOS/DataSpaces project; conclusion.

3 Modern Science & Society Transformed by Compute & Data. New paradigms and practices in science and engineering are inherently multi-disciplinary; data-driven, data- and compute-intensive; and collaborative (university, national, global). Many challenges: computing, data, software, people.

4 Many Challenges. Computing: multicore, with large and increasing core counts and deep memory hierarchies (TH-2: 54.9 PF, 3.12M cores, 1.4 PB, 25 MW); new programming models and concerns (fault tolerance, energy, etc.); new models and technologies: clouds, grids, hybrid manycore, accelerators, deep storage hierarchies. Data: we are generating more data than in all of human history: how do we preserve, mine, and share it? How do we create data scientists/engineers? Software: complex applications on coupled compute-data-networked environments need tools; modern applications span many lines of code, have contributions from many groups, take decades to develop, and have very long lifetimes. People: multidisciplinary expertise is essential, as are appropriate academic programs and career tracks.

5 The Era of eScience and Big Data. Modern scientific networks, instruments, and experiments are producing Big Data, and volume, growth, and complexity keep increasing: genome sequencing output is doubling every 9 months; climate data already exceeds 36 TB and is growing toward more than 2 PB; the LHC will produce roughly 15 PB of data per year; the SKA project will generate 1 EB of data per day in 2020; LSST will have produced ~100 PB by the end of this decade. There are also many smaller datasets, plus challenges of distribution and data access (TeraGrid/XSEDE, Blue Waters). But what about HPC? Credit: R. Pennington/A. Blatecky

6 Advanced Computing Infrastructure: large scale, distributed, heterogeneous, multicore/manycore, accelerators, deep storage hierarchies, experimental systems.
TH-2: 54.9 PF / 3.12M cores; 32K Xeon + 48K Xeon Phi; TH Express-2 interconnect; 1 PB memory.
Titan (Cray XK7): 20+ PF / 300K CPU cores; 18,688 GPUs; Gemini 3D torus; 710 TB memory.
Sequoia (IBM BG/Q): 20 PF / 1.5M cores; 18-core processor; 5D torus; 1.5 PB memory.
Worldwide LHC Computing Grid: >140 sites; ~250K cores; ~100 PB disk.
Modern datacenters: ~1M servers.
Special-purpose hardware (Anton): >100x acceleration of MD simulations.

7 Scientific Discovery through Simulations. Scientific simulations running on high-end computing systems generate huge amounts of data: if a single core produces 2 MB/minute on average, one of these machines could generate simulation data at ~170 TB per hour -> ~4 PB per day -> ~1.4 EB per year. Successful scientific discovery depends on a comprehensive understanding of this enormous simulation data. How do we enable computational scientists to efficiently manage and explore extreme-scale data: find the needles in the haystack?
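The per-core rate above can be turned into machine-scale rates with simple arithmetic. The sketch below assumes a 1.5M-core machine (roughly Sequoia's core count) at the quoted 2 MB per core per minute:

```python
# Back-of-envelope data-rate arithmetic: a 1.5M-core machine producing
# 2 MB per core per minute (per-core rate taken from the slide).
cores = 1.5e6
mb_per_core_min = 2.0

tb_per_hour = cores * mb_per_core_min * 60 / 1e6   # MB -> TB
pb_per_day = tb_per_hour * 24 / 1e3                # TB -> PB
eb_per_year = pb_per_day * 365 / 1e3               # PB -> EB

print(f"{tb_per_hour:.0f} TB/hour")   # 180 TB/hour
print(f"{pb_per_day:.2f} PB/day")     # 4.32 PB/day
print(f"{eb_per_year:.2f} EB/year")   # 1.58 EB/year
```

These figures land close to the ~170 TB/hour and ~1.4 EB/year quoted on the slide; the exact values depend on the assumed core count.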

8 Scientific Discovery through Simulations. Complex workflows integrate coupled models, data management/processing, and analytics: tight/loose coupling, data-driven execution, ensembles; advanced numerical methods (e.g., Adaptive Mesh Refinement); integrated (online) uncertainty quantification and analytics. These workflows involve complex, heterogeneous components; large data volumes and data rates; data redistribution (MxNxP) and data transformations; dynamic data exchange patterns; and strict performance/overhead constraints.

9 Traditional Simulation -> Insight Pipelines Break Down. (Figure: traditional data analysis pipeline.) The traditional simulation -> insight pipeline: run large-scale simulation workflows on large supercomputers; dump data to parallel disk systems; export data to archives; move data (usually selected subsets) to users' sites; perform data manipulations and analysis on mid-size clusters; collect experimental/observational data and move it to analysis sites; compare the experimental/observational data with the simulation data for validation.

10 Challenges Faced by Traditional HPC Data Pipelines. (Figure: traditional data analysis pipeline.) Data analysis challenge: can current data mining, manipulation, and visualization algorithms still work effectively on extreme-scale machines? I/O challenge: an increasing performance gap, as disks are outpaced by computing speed. Data movement challenge: lots of data movement between simulation and analysis machines, and between coupled multi-physics simulation components, leads to longer latencies; improving data locality is critical: do work where the data resides! Energy challenge: future extreme-scale systems are designed around low-power chips, but much greater power consumption will come from memory and data movement. The costs of data movement are increasing and dominating!

11 The Cost of Data Movement. Moving data between node memory and persistent storage is slow, and the energy cost of moving data is a significant concern (the memory/storage performance gap): Energy_move_data = bitrate * length^2 / cross_section_area_of_wire. From K. Yelick, "Software and Algorithms for Exascale: Ten Ways to Waste an Exascale Computer"
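The quadratic dependence on wire length is what makes this scaling rule bite. A tiny illustration (relative units only; the absolute constants are omitted, and the distances are illustrative, not from the slide):

```python
# Relative energy to move data, per the scaling rule on the slide:
# E ~ bitrate * length^2 / wire_cross_section.
# Only ratios are meaningful here; constants are dropped.
def move_energy(bitrate, length_cm, cross_section):
    return bitrate * length_cm**2 / cross_section

on_chip = move_energy(bitrate=1.0, length_cm=2.0, cross_section=1.0)    # ~2 cm on chip
off_node = move_energy(bitrate=1.0, length_cm=100.0, cross_section=1.0) # ~1 m off node

# Same bitrate and wire gauge, 50x the distance -> 2500x the energy.
print(off_node / on_chip)  # 2500.0
```

This is why the pipeline redesign that follows focuses on moving computation to the data rather than the reverse.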

12 Challenges Faced by Traditional HPC Data Pipelines. (Figure: traditional data analysis pipeline.) The costs of data movement (power and performance) are increasing and dominating! We need to rethink the data management pipeline: reduce data movement; move computation/analytics closer to the data; add value to simulation data along the I/O path.

13 Rethinking the Data Management Pipeline: Hybrid Staging + In-Situ & In-Transit Execution. Issues/challenges: programming abstractions/systems; mapping and scheduling; control and data flow; autonomic runtime.

14 Design space of possible workflow architectures.
Location of the compute resources: same cores as the simulation (in situ); some (dedicated) cores on the same nodes; some dedicated nodes on the same machine; dedicated nodes on an external resource.
Data access, placement, and persistence: direct access to simulation data structures; shared-memory access via hand-off/copy; shared-memory access via non-volatile near-node storage (NVRAM); data transfer to dedicated nodes or external resources.
Synchronization and scheduling: execute synchronously with the simulation every n-th simulation time step, or execute asynchronously.
(Figure: analysis tasks sharing cores with the simulation, using distinct cores on the same node, or processing data on remote staging nodes, across a DRAM/NVRAM/SSD/hard-disk hierarchy: staging options 1-3.)

15 DataSpaces: In-situ/In-transit Data Management & Analytics (ADIOS/DataSpaces, dataspaces.org). Virtual shared-space programming abstraction: simple API for coordination, interaction, and messaging. Distributed, associative, in-memory object store: online data indexing, flexible querying. Adaptive cross-layer runtime management: hybrid in-situ/in-transit execution. Efficient, high-throughput/low-latency asynchronous data transport.

16 DataSpaces: A Scalable Shared Space Abstraction for Hybrid Data Staging [HPDC10, JCC12]. Virtual shared-space abstraction: simple API for coordination, interaction, and messaging; a global-view programming abstraction consistent with PGAS; dynamic coordination and interaction patterns between the coupled applications. Distributed, associative, in-memory object store: online data indexing and flexible querying; transparent data redistribution; complex geometry-based queries; in-space (online) data transformation and manipulation. Adaptive cross-layer runtime management: hybrid in-situ/in-transit execution; data-centric mappings. High-throughput/low-latency memory-to-memory asynchronous data transport.
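The shared-space abstraction can be pictured as a versioned, geometry-aware put/get store: writers insert blocks tagged with a variable name, version (time step), and bounding box, and readers retrieve by region. The Python sketch below is illustrative only; the class and method names are hypothetical, and the real DataSpaces C API differs.

```python
# Illustrative sketch of a DataSpaces-style virtual shared space:
# versioned, geometry-tagged objects, queried by bounding box.
# All names here are hypothetical, not the real DataSpaces API.
class SharedSpace:
    def __init__(self):
        self._store = {}  # (var, version) -> list of (lb, ub, data)

    def put(self, var, version, lb, ub, data):
        """Insert a data block covering the box [lb, ub] (inclusive)."""
        self._store.setdefault((var, version), []).append((lb, ub, data))

    def get(self, var, version, lb, ub):
        """Return all blocks overlapping the query box [lb, ub]."""
        def overlaps(b_lb, b_ub):
            return all(b_lb[d] <= ub[d] and lb[d] <= b_ub[d]
                       for d in range(len(lb)))
        return [data for (b_lb, b_ub, data)
                in self._store.get((var, version), [])
                if overlaps(b_lb, b_ub)]

# A writer (e.g., the simulation) inserts its local subdomains; a reader
# (e.g., an analysis code) queries an overlapping region, so M writers and
# N readers never need matching decompositions.
space = SharedSpace()
space.put("pressure", version=0, lb=(0, 0), ub=(63, 63), data="block-A")
space.put("pressure", version=0, lb=(64, 0), ub=(127, 63), data="block-B")
hits = space.get("pressure", version=0, lb=(32, 0), ub=(96, 63))
print(hits)  # ['block-A', 'block-B']
```

The geometry-based lookup is what enables the transparent MxN data redistribution mentioned above.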

17 DataSpaces: Scalability on ORNL Titan. (Figures: data redistribution time and aggregate data redistribution throughput for per-step data sizes of 2 GB to 256 GB and writer/reader core counts from 512/32 up to 64K/4K.) We evaluate weak scaling with an increasing number of processors; the applications redistribute data through DataSpaces: application 1 runs on M processors and inserts data into the space, application 2 runs on N processors and retrieves data from the space. Result: a 128-fold increase in application size, from 512 to 64K writers, with the total data size exchanged per step increasing from 2 GB to 256 GB.

18 DataSpaces: Enabling Coupled Scientific Workflows at Extreme Scales. Multiphysics code coupling at extreme scales [CCGrid10]; data-centric mappings for in-situ workflows [IPDPS12]; PGAS extensions for code coupling [CCPE13]; dynamic code deployment in-staging [IPDPS11]. In the dynamic-deployment path, a data kernel, e.g.

    kernel_min {
      for i = 1, n
        for j = 1, m
          for k = 1, p
            if (min > A(i, j, k)) min = A(i, j, k)
    }

is compiled (gcc -c data_kernels.c -> data_kernels.o) and linked into the application executable, which is launched on the compute nodes (aprun); rexec() then loads the kernel into the runtime execution system (Rexec) on the staging nodes, where an equivalent scripted version (data_kernels.lua) can also run:

    for i = 0, ni-1 do
      for j = 0, nj-1 do
        for k = 0, nk-1 do
          val = input:get_val(i, j, k)
          if min > val then min = val end
        end
      end
    end
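The point of shipping kernel_min to the staging nodes is that only the scalar result crosses the network, not the staged arrays. A minimal Python analogue of this "move code to the data" pattern (the StagingNode class and rexec name here are hypothetical stand-ins, not the ActiveSpaces API):

```python
# Minimal "move code to the data" sketch in the spirit of ActiveSpaces:
# the application ships a small reduction kernel to run where the staged
# data lives, instead of pulling the data back. Names are hypothetical.
class StagingNode:
    def __init__(self, data):
        self.data = data  # data resident in this node's staging area

    def rexec(self, kernel):
        """Run a user-supplied kernel over local data; return only the result."""
        return kernel(self.data)

# The min-reduction kernel from the slide, over a flat local array.
def kernel_min(a):
    m = a[0]
    for v in a:
        if v < m:
            m = v
    return m

nodes = [StagingNode([9.0, 4.0, 7.0]), StagingNode([5.0, 2.0, 8.0])]
# One scalar per node crosses the network; the arrays never move.
partial = [n.rexec(kernel_min) for n in nodes]
print(min(partial))  # 2.0
```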

19 In-Situ Feature Extraction and Tracking using Decentralized Online Clustering (DISC 12, ICAC 10). DOC workers are executed in-situ on the simulation machine: on each compute node, most processor cores run the simulation while one core runs a DOC worker, and the workers are connected by a DOC overlay. Benefits of runtime feature extraction and tracking: (1) scientists can follow the events (or data) of interest; (2) scientists can monitor the running simulation in real time.
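The local step of such a worker can be sketched as online clustering over the points it sees, updating centroids incrementally as the simulation produces data. This is a simplified single-worker sketch, not the decentralized DOC algorithm, which also exchanges cluster summaries between workers over the overlay:

```python
# Simplified sketch of the online-clustering step a worker might perform
# on its local data points: assign each point to the nearest centroid and
# update that centroid with an incremental mean. (Real DOC is
# decentralized and merges summaries across workers.)
def online_cluster(points, centroids):
    centroids = list(centroids)
    counts = [1] * len(centroids)  # seed each centroid with weight 1
    for p in points:
        j = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
        counts[j] += 1
        centroids[j] += (p - centroids[j]) / counts[j]  # c += (p - c) / n
    return centroids

cents = online_cluster([1.0, 1.2, 9.8, 10.0], centroids=[0.0, 10.0])
print([round(c, 2) for c in cents])  # one centroid pulled toward ~1, one near 10
```

Because the update is incremental, the worker never needs to buffer the full time step, which is what makes running it on a single dedicated core per node feasible.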

20 In-situ visualization and monitoring with staging. Pixie3D (1024 cores), Pixplot (8 cores), and a ParaView server (4 cores) are coupled through DataSpaces, exchanging the pixie3d.bp and record.bp streams, with Pixmon running on 1 core of a login node for monitoring.

21 AMR-as-a-Service using DataSpaces: an FEM-AMR workflow. Components: the FEM code models a uniform 3D mesh for near-realistic engineering problems such as heat transfer, fluid flow, and phase transformation (Grid and GridField); the AMR code localizes the areas of interest where the physics is important (pGrid and pGridField) to allow truly realistic simulations. Goals: enable in-memory data coupling between the FEM and AMR codes; allow multiple AMR codes to be plugged in and read Grid/GridField data as the FEM progresses.

22 Overview of Research @ RU.
Programming abstractions / systems:
DataSpaces: interaction, coordination, and messaging abstractions for coupled scientific workflows [HPDC10, CCGrid10, HiPC12, JCC12]
XpressSpaces: PGAS extensions for coupling using DataSpaces [CCGrid11, CCPE13]
ActiveSpaces: dynamic code deployment for in-staging data processing [IPDPS11]
Runtime mechanisms:
Data-centric task mapping: reduce data movement and increase intra-node data sharing [IPDPS12, DISC12]
In-situ & in-transit data analytics: simulation-time analysis of large-volume data by combining in-situ and in-transit execution [SC12, DISC12]
Cross-layer adaptation: adaptive cross-layer approach for dynamic data management in large-scale simulation-analysis workflows [SC13]
Value-based data indexing and querying: use FastBit to build in-situ, in-memory value-based indexing and query support in the staging area
Power/performance tradeoffs: characterizing power/performance tradeoffs for data-intensive simulation workflows [SC13]
Data staging over deep memory hierarchies: build a distributed associative object store over hierarchical memory/storage, e.g., DRAM/NVRAM/SSD [HiPC13]
High-throughput/low-latency asynchronous data transport:
DART: network-independent transport library for high-speed asynchronous data extraction and transfer [HPDC08]

23 Integrating In-Situ and In-Transit Analytics (SC 12). S3D, developed at the Combustion Research Facility, Sandia National Laboratories, is a first-principles direct numerical simulation. The simulation resolves features on the order of 10 simulation time steps, but currently only on the order of every 400th time step can be written to disk, so temporal fidelity is compromised when analysis is done as a post-process. (Figure: recent data sets generated by S3D.)

24 In-situ Topological Analysis as Part of S3D*: statistics, volume rendering, and topology are combined to identify features of interest. *J. C. Bennett et al., "Combining In-Situ and In-Transit Processing to Enable Extreme-Scale Scientific Analysis," SC 12, Salt Lake City, Utah, November 2012.

25 Integrating In-Situ and In-Transit Analytics (SC 12). Primary resources execute the main simulation and in-situ computations; secondary resources provide a staging area whose cores act as buckets for in-transit computations. 4896 cores total (4480 simulation/in-situ; 256 in-transit; 160 task scheduling/data movement); simulation size: 1600x1372x430. All measurements are per simulation time step.

26 Simulation case study with S3D: timing results for 4896 cores and analysis every 10th simulation time step. (Figure: per-time-step timing bars for the simulation (~168.5 s) versus in-situ data movement and in-situ/hybrid statistics, visualization, and topology; the analysis overheads range from about 0.004% to 1.61% of the simulation time.)

27 Simulation case study with S3D: timing results for 4896 cores and analysis every 100th simulation time step. (Figure: per-time-step timing bars for the simulation (~1685 s) versus in-situ data movement and in-situ/hybrid statistics, visualization, and topology; the analysis overheads range from about 0.004% to 0.16% of the simulation time.)

28 Cross-Layer Adaptation for Dynamic Data Management (SC13). Coupled simulation-analytics workflows based on dynamic formulations such as Adaptive Mesh Refinement (AMR), running at extreme scales, present new challenges for in-situ/in-transit data management: large and dynamically changing data volumes; dynamic, imbalanced data distributions; and heterogeneous resource (memory, CPU, etc.) requirements.

29 Cross-Layer Adaptation for Dynamic Data Management. Dynamic cross-layer adaptations respond at runtime to changing data management and data processing requirements. Application layer: adaptive spatial-temporal data resolution. Middleware layer: dynamic in-situ/in-transit placement and scheduling. Resource layer: dynamic allocation of in-transit resources. Coordinated approaches: combine mechanisms toward a specific objective (e.g., minimizing time-to-solution).

30 Cross-Layer Adaptation for Dynamic Data Management. (Figure: memory consumption (MB) over time steps with MIN, MAX, and adaptive data spatial resolution, plotted against real-time memory availability.) Application-layer adaptation of the spatial resolution of data uses user-defined down-sampling based on runtime memory availability, and entropy-based data down-sampling (top: full resolution; bottom: adaptive resolution).
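The memory-driven part of this adaptation amounts to picking a down-sampling factor so the output fits the memory currently available. The policy below is an illustrative sketch under that assumption, not the SC13 implementation (function name and power-of-two policy are my own):

```python
# Sketch of application-layer adaptation: choose a down-sampling stride
# so the data emitted in-situ fits the currently available memory.
# Policy and names are illustrative, not the paper's implementation.
def choose_stride(full_size_mb, avail_mb, max_stride=8):
    """Smallest power-of-two stride whose 3D-down-sampled output fits."""
    stride = 1
    while stride <= max_stride:
        if full_size_mb / stride**3 <= avail_mb:  # stride s -> 1/s^3 the data
            return stride
        stride *= 2
    return max_stride

print(choose_stride(full_size_mb=512, avail_mb=600))  # 1 (full resolution fits)
print(choose_stride(full_size_mb=512, avail_mb=100))  # 2 (keep 1/8 of the data)
```

An entropy-based variant would additionally keep high-information regions at full resolution and down-sample only the smooth ones.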

31 Cross-Layer Adaptation for Dynamic Data Management. (Figure: size of aggregated in-situ/in-transit data transfers (GB) on 2K-16K AMR simulation cores, comparing in-transit placement with adaptive placement.) Amount of data transferred using static placement, adaptive placement, and combined adaptation (adaptive data resolution + adaptive placement).

32 Cross-Layer Adaptation for Dynamic Data Management. (a) Data transfer with/without middleware adaptation at different scales (2K-16K AMR simulation cores): adapting the placement on-the-fly exploits the flexibility of in-situ execution (less data movement). (b) Comparison of cumulative end-to-end execution time between static placement (in-situ/in-transit) and adaptive placement; end-to-end overhead includes data processing time, data transfer time, and other system overhead.

33 Cross-Layer Adaptation for Dynamic Data Management. (a) Data transfer comparison between adaptive placement and combined cross-layer adaptation (adaptive data resolution + adaptive placement). (b) Comparison of cumulative end-to-end execution time between adaptive placement and combined cross-layer adaptation.

34 Scalable In-Memory Data Indexing and Querying for Scientific Simulation Workflows. Motivation: query-driven data analysis is an important technique for analyzing the enormous amounts of data produced by large-scale scientific simulations; for flame-front tracking in combustion simulations, for example, scientists formulate queries to discover the data points whose values lie within a certain range. Problems of the traditional file-based approach: parallel I/O operations become the dominating cost factor and introduce significant overhead to both index building and query processing; and because only data from selected steps is written for post-processing, highly intermittent and transient phenomena can be lost. Goal: enable parallel in-memory indexing and querying to support online query-driven data analysis for large-scale scientific simulations.

35 Scalable In-Memory Data Indexing and Querying for Scientific Simulation Workflows. Highlights: parallel in-memory indexing and querying on dedicated staging nodes; scalable and efficient online indexing/query performance; support for SQL-like query syntax and simple querying APIs; a flexible framework that can be integrated with different indexing techniques (the current implementation uses FastBit's compressed bitmap index). (Figure: conceptual overview of the framework and the interaction between scientific simulations and querying applications.)
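The idea behind a binned bitmap index of this kind: keep one bitmap per value bin, so a range query reduces to ORing whole bins instead of scanning the data. The sketch below illustrates the principle only; the real framework uses FastBit's compressed bitmaps, and these function names are mine:

```python
# Sketch of a binned, bitmap-style value index of the kind built in
# staging memory. Here each bitmap is a Python int used as a bitset;
# FastBit's compressed bitmaps are the real implementation.
def build_index(values, bins):
    """One bitmap per half-open value bin [lo, hi)."""
    bitmaps = [0] * len(bins)
    for i, v in enumerate(values):
        for b, (lo, hi) in enumerate(bins):
            if lo <= v < hi:
                bitmaps[b] |= 1 << i  # set bit i in bin b's bitmap
                break
    return bitmaps

def query(bitmaps, bins, lo, hi):
    """SELECT points WHERE lo <= value < hi, by ORing fully covered bins."""
    hits = 0
    for b, (blo, bhi) in enumerate(bins):
        if blo >= lo and bhi <= hi:  # bin lies fully inside the query range
            hits |= bitmaps[b]
    return [i for i in range(hits.bit_length()) if hits >> i & 1]

bins = [(0, 1), (1, 2), (2, 3)]
bm = build_index([0.5, 2.5, 1.5, 0.1], bins)
print(query(bm, bins, 1, 3))  # [1, 2] -> indices of the values 2.5 and 1.5
```

A full implementation would also probe the raw data for bins that only partially overlap the query range ("candidate checks"), which this sketch omits.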

36 Scalable In-Memory Data Indexing and Querying for Scientific Simulation Workflows. Performance comparison with the file-based approach: I/O overhead is the dominating cost for the file-based approach, and the DataSpaces approach is significantly faster for both index building and querying. (Figures: (left) breakdown of index building time; (right) query processing time.)

37 Scalable In-Memory Data Indexing and Querying for Scientific Simulation Workflows. Performance with an increasing number of cores: as the number of index/query servers increases from 32 to 1K, the index building time for a 128 GB data set drops from 30.7 s to 2.1 s, and query processing time decreases significantly across different selectivities. (Figures: (left) index building time for different data sizes; (right) breakdown of query processing time for different query selectivities.)

38 Scalable In-Memory Data Indexing and Querying for Scientific Simulation Workflows. Scaling performance: increasing both the total data size and the number of index/query servers, the approach shows good overall scalability for both index building and query processing. (Figures: (left) index building time; (right) query processing time.)

39 Data Staging over Deep Memory Hierarchy. Motivation: given the small DRAM capacity per core, even the aggregated memory of dedicated nodes will not be sufficient for staging data. Hybrid staging spans horizontally across the memory of compute nodes on both primary and secondary resources, and vertically across the multi-level memory hierarchy, e.g., DRAM/NVRAM/SSD, to extend the capacity of in-memory data staging.
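The vertical dimension of such staging behaves like a tiered object store: hot objects stay in DRAM, and colder ones spill to the slower, larger tier. A minimal sketch under that assumption (class name, LRU eviction policy, and dict-backed "SSD" are illustrative, not the HiPC 13 design):

```python
# Sketch of a two-tier staging store spanning DRAM and a slower device
# (stand-in for NVRAM/SSD): objects spill from the fast tier when the
# DRAM budget is exceeded. Names and eviction policy are illustrative.
from collections import OrderedDict

class TieredStore:
    def __init__(self, dram_capacity):
        self.dram = OrderedDict()  # fast tier, kept in LRU order
        self.ssd = {}              # slow spill tier
        self.capacity = dram_capacity

    def put(self, key, obj):
        self.dram[key] = obj
        self.dram.move_to_end(key)            # mark as most recently used
        while len(self.dram) > self.capacity:
            old_key, old_obj = self.dram.popitem(last=False)
            self.ssd[old_key] = old_obj       # spill the coldest object

    def get(self, key):
        if key in self.dram:
            return self.dram[key]             # fast path
        return self.ssd[key]                  # slower path from spill tier

store = TieredStore(dram_capacity=2)
for step in range(4):
    store.put(f"var@t{step}", f"data{step}")
print(sorted(store.dram), sorted(store.ssd))
# DRAM holds the two newest time steps; older steps spilled to the slow tier.
```

Keeping the most recent time steps in DRAM matches the access pattern of coupled codes, which mostly read each other's latest output.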

40 Rutgers objective: enable a tightly coupled XGC1-XGCa workflow using hybrid staging, with XGC1 and XGCa processes on the same compute node exchanging data through on-node memory. We explore two different execution models: concurrent coupling, which executes XGC1 and XGCa concurrently on different sets of processor cores of each compute node, and sequential coupling, which executes XGC1 and XGCa sequentially on the same set of processor cores. (Figure: illustration of tightly coupled execution of XGC1 and XGCa over a timeline; (left) concurrent coupling; (right) sequential coupling.)

41 EPSI XGC1-XGCa Coupled Workflow: preliminary evaluation results on ORNL Titan, comparing file-based coupling (ADIOS/BP method) with memory-based coupling (ADIOS/DataSpaces). (Figures: (left) XGC1 turbulence writing; (right) XGCa turbulence reading. The y-axis is total wallclock time in seconds, accumulated over all processes and all time steps.)

42 Summary & Conclusions. Complex applications running on high-end systems generate extreme amounts of data that must be managed and analyzed to extract insights, and data costs (performance, latency, energy) are quickly dominating; traditional data management/analytics pipelines are breaking down. Hybrid data staging, in-situ workflow execution, and dynamic code deployment can address these challenges, allowing users to efficiently intertwine applications, libraries, and middleware for complex analytics. Many challenges remain: programming, mapping and scheduling, control and data flow, autonomic runtime management. The ADIOS/DataSpaces project explores solutions at various levels: high-level programming abstractions for in-situ/in-transit workflows (code coupling, online analytics, UQ, etc.); efficient adaptive runtime mechanisms for hybrid staging, locality-aware mapping, location-aware data movement, and performance/energy tradeoffs; and support for dynamic code deployment and execution, moving code to data.

43 Thank You!! EPSi Edge Physics Simulation Manish Parashar, Ph.D. Prof., Dept. of Electrical & Computer Engr. Rutgers Discovery Informatics Institute (RDI 2 ) Cloud & Autonomic Computing Center (CAC) Rutgers, The State University of New Jersey WWW: rdi2.rutgers.edu WWW: dataspaces.org


More information

Performance Monitoring of Parallel Scientific Applications

Performance Monitoring of Parallel Scientific Applications Performance Monitoring of Parallel Scientific Applications Abstract. David Skinner National Energy Research Scientific Computing Center Lawrence Berkeley National Laboratory This paper introduces an infrastructure

More information

BSC vision on Big Data and extreme scale computing

BSC vision on Big Data and extreme scale computing BSC vision on Big Data and extreme scale computing Jesus Labarta, Eduard Ayguade,, Fabrizio Gagliardi, Rosa M. Badia, Toni Cortes, Jordi Torres, Adrian Cristal, Osman Unsal, David Carrera, Yolanda Becerra,

More information

Trends in High-Performance Computing for Power Grid Applications

Trends in High-Performance Computing for Power Grid Applications Trends in High-Performance Computing for Power Grid Applications Franz Franchetti ECE, Carnegie Mellon University www.spiral.net Co-Founder, SpiralGen www.spiralgen.com This talk presents my personal views

More information

Reference Architecture, Requirements, Gaps, Roles

Reference Architecture, Requirements, Gaps, Roles Reference Architecture, Requirements, Gaps, Roles The contents of this document are an excerpt from the brainstorming document M0014. The purpose is to show how a detailed Big Data Reference Architecture

More information

High Performance Computing. Course Notes 2007-2008. HPC Fundamentals

High Performance Computing. Course Notes 2007-2008. HPC Fundamentals High Performance Computing Course Notes 2007-2008 2008 HPC Fundamentals Introduction What is High Performance Computing (HPC)? Difficult to define - it s a moving target. Later 1980s, a supercomputer performs

More information

Hadoop on the Gordon Data Intensive Cluster

Hadoop on the Gordon Data Intensive Cluster Hadoop on the Gordon Data Intensive Cluster Amit Majumdar, Scientific Computing Applications Mahidhar Tatineni, HPC User Services San Diego Supercomputer Center University of California San Diego Dec 18,

More information

High Performance Computing OpenStack Options. September 22, 2015

High Performance Computing OpenStack Options. September 22, 2015 High Performance Computing OpenStack PRESENTATION TITLE GOES HERE Options September 22, 2015 Today s Presenters Glyn Bowden, SNIA Cloud Storage Initiative Board HP Helion Professional Services Alex McDonald,

More information

Kriterien für ein PetaFlop System

Kriterien für ein PetaFlop System Kriterien für ein PetaFlop System Rainer Keller, HLRS :: :: :: Context: Organizational HLRS is one of the three national supercomputing centers in Germany. The national supercomputing centers are working

More information

Cray: Enabling Real-Time Discovery in Big Data

Cray: Enabling Real-Time Discovery in Big Data Cray: Enabling Real-Time Discovery in Big Data Discovery is the process of gaining valuable insights into the world around us by recognizing previously unknown relationships between occurrences, objects

More information

Outline. High Performance Computing (HPC) Big Data meets HPC. Case Studies: Some facts about Big Data Technologies HPC and Big Data converging

Outline. High Performance Computing (HPC) Big Data meets HPC. Case Studies: Some facts about Big Data Technologies HPC and Big Data converging Outline High Performance Computing (HPC) Towards exascale computing: a brief history Challenges in the exascale era Big Data meets HPC Some facts about Big Data Technologies HPC and Big Data converging

More information

Distributed Operating Systems Introduction

Distributed Operating Systems Introduction Distributed Operating Systems Introduction Ewa Niewiadomska-Szynkiewicz and Adam Kozakiewicz ens@ia.pw.edu.pl, akozakie@ia.pw.edu.pl Institute of Control and Computation Engineering Warsaw University of

More information

for my computation? Stefano Cozzini Which infrastructure Which infrastructure Democrito and SISSA/eLAB - Trieste

for my computation? Stefano Cozzini Which infrastructure Which infrastructure Democrito and SISSA/eLAB - Trieste Which infrastructure Which infrastructure for my computation? Stefano Cozzini Democrito and SISSA/eLAB - Trieste Agenda Introduction:! E-infrastructure and computing infrastructures! What is available

More information

BMW11: Dealing with the Massive Data Generated by Many-Core Systems. Dr Don Grice. 2011 IBM Corporation

BMW11: Dealing with the Massive Data Generated by Many-Core Systems. Dr Don Grice. 2011 IBM Corporation BMW11: Dealing with the Massive Data Generated by Many-Core Systems Dr Don Grice IBM Systems and Technology Group Title: Dealing with the Massive Data Generated by Many Core Systems. Abstract: Multi-core

More information

Jean-Pierre Panziera Teratec 2011

Jean-Pierre Panziera Teratec 2011 Technologies for the future HPC systems Jean-Pierre Panziera Teratec 2011 3 petaflop systems : TERA 100, CURIE & IFERC Tera100 Curie IFERC 1.25 PetaFlops 256 TB ory 30 PB disk storage 140 000+ Xeon cores

More information

System Models for Distributed and Cloud Computing

System Models for Distributed and Cloud Computing System Models for Distributed and Cloud Computing Dr. Sanjay P. Ahuja, Ph.D. 2010-14 FIS Distinguished Professor of Computer Science School of Computing, UNF Classification of Distributed Computing Systems

More information

The Data Grid: Towards an Architecture for Distributed Management and Analysis of Large Scientific Datasets

The Data Grid: Towards an Architecture for Distributed Management and Analysis of Large Scientific Datasets The Data Grid: Towards an Architecture for Distributed Management and Analysis of Large Scientific Datasets!! Large data collections appear in many scientific domains like climate studies.!! Users and

More information

2009 Oracle Corporation 1

2009 Oracle Corporation 1 The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material,

More information

In-Memory Databases Algorithms and Data Structures on Modern Hardware. Martin Faust David Schwalb Jens Krüger Jürgen Müller

In-Memory Databases Algorithms and Data Structures on Modern Hardware. Martin Faust David Schwalb Jens Krüger Jürgen Müller In-Memory Databases Algorithms and Data Structures on Modern Hardware Martin Faust David Schwalb Jens Krüger Jürgen Müller The Free Lunch Is Over 2 Number of transistors per CPU increases Clock frequency

More information

Data-intensive HPC: opportunities and challenges. Patrick Valduriez

Data-intensive HPC: opportunities and challenges. Patrick Valduriez Data-intensive HPC: opportunities and challenges Patrick Valduriez Big Data Landscape Multi-$billion market! Big data = Hadoop = MapReduce? No one-size-fits-all solution: SQL, NoSQL, MapReduce, No standard,

More information

Flash Memory Arrays Enabling the Virtualized Data Center. July 2010

Flash Memory Arrays Enabling the Virtualized Data Center. July 2010 Flash Memory Arrays Enabling the Virtualized Data Center July 2010 2 Flash Memory Arrays Enabling the Virtualized Data Center This White Paper describes a new product category, the flash Memory Array,

More information

Distributed communication-aware load balancing with TreeMatch in Charm++

Distributed communication-aware load balancing with TreeMatch in Charm++ Distributed communication-aware load balancing with TreeMatch in Charm++ The 9th Scheduling for Large Scale Systems Workshop, Lyon, France Emmanuel Jeannot Guillaume Mercier Francois Tessier In collaboration

More information

MCA Standards For Closely Distributed Multicore

MCA Standards For Closely Distributed Multicore MCA Standards For Closely Distributed Multicore Sven Brehmer Multicore Association, cofounder, board member, and MCAPI WG Chair CEO of PolyCore Software 2 Embedded Systems Spans the computing industry

More information

News and trends in Data Warehouse Automation, Big Data and BI. Johan Hendrickx & Dirk Vermeiren

News and trends in Data Warehouse Automation, Big Data and BI. Johan Hendrickx & Dirk Vermeiren News and trends in Data Warehouse Automation, Big Data and BI Johan Hendrickx & Dirk Vermeiren Extreme Agility from Source to Analysis DWH Appliances & DWH Automation Typical Architecture 3 What Business

More information

Evoluzione dell Infrastruttura di Calcolo e Data Analytics per la ricerca

Evoluzione dell Infrastruttura di Calcolo e Data Analytics per la ricerca Evoluzione dell Infrastruttura di Calcolo e Data Analytics per la ricerca Carlo Cavazzoni CINECA Supercomputing Application & Innovation www.cineca.it 21 Aprile 2015 FERMI Name: Fermi Architecture: BlueGene/Q

More information

FPGA-based Multithreading for In-Memory Hash Joins

FPGA-based Multithreading for In-Memory Hash Joins FPGA-based Multithreading for In-Memory Hash Joins Robert J. Halstead, Ildar Absalyamov, Walid A. Najjar, Vassilis J. Tsotras University of California, Riverside Outline Background What are FPGAs Multithreaded

More information

Big Data. George O. Strawn NITRD

Big Data. George O. Strawn NITRD Big Data George O. Strawn NITRD Caveat auditor The opinions expressed in this talk are those of the speaker, not the U.S. government Outline What is Big Data? NITRD's Big Data Research Initiative Big Data

More information

Impact of Big Data growth On Transparent Computing

Impact of Big Data growth On Transparent Computing Impact of Big Data growth On Transparent Computing Michael A. Greene Intel Vice President, Software and Services Group, General Manager, System Technologies and Optimization 1 Transparent Computing (TC)

More information

HPC Programming Framework Research Team

HPC Programming Framework Research Team HPC Programming Framework Research Team 1. Team Members Naoya Maruyama (Team Leader) Motohiko Matsuda (Research Scientist) Soichiro Suzuki (Technical Staff) Mohamed Wahib (Postdoctoral Researcher) Shinichiro

More information

BENCHMARKING CLOUD DATABASES CASE STUDY on HBASE, HADOOP and CASSANDRA USING YCSB

BENCHMARKING CLOUD DATABASES CASE STUDY on HBASE, HADOOP and CASSANDRA USING YCSB BENCHMARKING CLOUD DATABASES CASE STUDY on HBASE, HADOOP and CASSANDRA USING YCSB Planet Size Data!? Gartner s 10 key IT trends for 2012 unstructured data will grow some 80% over the course of the next

More information

Mr. Apichon Witayangkurn apichon@iis.u-tokyo.ac.jp Department of Civil Engineering The University of Tokyo

Mr. Apichon Witayangkurn apichon@iis.u-tokyo.ac.jp Department of Civil Engineering The University of Tokyo Sensor Network Messaging Service Hive/Hadoop Mr. Apichon Witayangkurn apichon@iis.u-tokyo.ac.jp Department of Civil Engineering The University of Tokyo Contents 1 Introduction 2 What & Why Sensor Network

More information

Data Centric Interactive Visualization of Very Large Data

Data Centric Interactive Visualization of Very Large Data Data Centric Interactive Visualization of Very Large Data Bruce D Amora, Senior Technical Staff Gordon Fossum, Advisory Engineer IBM T.J. Watson Research/Data Centric Systems #OpenPOWERSummit Data Centric

More information

A Scalable Messaging System for Accelerating Discovery from Large Scale Scientific Simulations

A Scalable Messaging System for Accelerating Discovery from Large Scale Scientific Simulations A Scalable Messaging System for Accelerating Discovery from Large Scale Scientific Simulations Tong Jin, Fan Zhang, Manish Parashar The NSF Center for Cloud and Autonomic Computing Rutgers University,

More information

So#ware Tools and Techniques for HPC, Clouds, and Server- Class SoCs Ron Brightwell

So#ware Tools and Techniques for HPC, Clouds, and Server- Class SoCs Ron Brightwell So#ware Tools and Techniques for HPC, Clouds, and Server- Class SoCs Ron Brightwell R&D Manager, Scalable System So#ware Department Sandia National Laboratories is a multi-program laboratory managed and

More information

Scientific Computing's Productivity Gridlock and How Software Engineering Can Help

Scientific Computing's Productivity Gridlock and How Software Engineering Can Help Scientific Computing's Productivity Gridlock and How Software Engineering Can Help Stuart Faulk, Ph.D. Computer and Information Science University of Oregon Stuart Faulk CSESSP 2015 Outline Challenges

More information

On the Role of Indexing for Big Data in Scientific Domains

On the Role of Indexing for Big Data in Scientific Domains On the Role of Indexing for Big Data in Scientific Domains Arie Shoshani Lawrence Berkeley National Lab BIGDATA and EXTREME-SCALE COMPUTING April 3-May, 23 Outline q Examples of indexing needs in scientific

More information

Principles and characteristics of distributed systems and environments

Principles and characteristics of distributed systems and environments Principles and characteristics of distributed systems and environments Definition of a distributed system Distributed system is a collection of independent computers that appears to its users as a single

More information

ICT4 - Customised and low power computing

ICT4 - Customised and low power computing ICT4 - Customised and low power computing Sandro D'Elia European Commission Directorate-general CONNECT Unit "Complex Systems & Advanced Computing" sandro.delia@ec.europa.eu Excellent Science: HPC Strategy

More information

WHITE PAPER Improving Storage Efficiencies with Data Deduplication and Compression

WHITE PAPER Improving Storage Efficiencies with Data Deduplication and Compression WHITE PAPER Improving Storage Efficiencies with Data Deduplication and Compression Sponsored by: Oracle Steven Scully May 2010 Benjamin Woo IDC OPINION Global Headquarters: 5 Speen Street Framingham, MA

More information

Data Centric Computing Revisited

Data Centric Computing Revisited Piyush Chaudhary Technical Computing Solutions Data Centric Computing Revisited SPXXL/SCICOMP Summer 2013 Bottom line: It is a time of Powerful Information Data volume is on the rise Dimensions of data

More information

Accelerating Hadoop MapReduce Using an In-Memory Data Grid

Accelerating Hadoop MapReduce Using an In-Memory Data Grid Accelerating Hadoop MapReduce Using an In-Memory Data Grid By David L. Brinker and William L. Bain, ScaleOut Software, Inc. 2013 ScaleOut Software, Inc. 12/27/2012 H adoop has been widely embraced for

More information

IBM Platform Computing Cloud Service Ready to use Platform LSF & Symphony clusters in the SoftLayer cloud

IBM Platform Computing Cloud Service Ready to use Platform LSF & Symphony clusters in the SoftLayer cloud IBM Platform Computing Cloud Service Ready to use Platform LSF & Symphony clusters in the SoftLayer cloud February 25, 2014 1 Agenda v Mapping clients needs to cloud technologies v Addressing your pain

More information

Appro Supercomputer Solutions Best Practices Appro 2012 Deployment Successes. Anthony Kenisky, VP of North America Sales

Appro Supercomputer Solutions Best Practices Appro 2012 Deployment Successes. Anthony Kenisky, VP of North America Sales Appro Supercomputer Solutions Best Practices Appro 2012 Deployment Successes Anthony Kenisky, VP of North America Sales About Appro Over 20 Years of Experience 1991 2000 OEM Server Manufacturer 2001-2007

More information

Big Data Mining Services and Knowledge Discovery Applications on Clouds

Big Data Mining Services and Knowledge Discovery Applications on Clouds Big Data Mining Services and Knowledge Discovery Applications on Clouds Domenico Talia DIMES, Università della Calabria & DtoK Lab Italy talia@dimes.unical.it Data Availability or Data Deluge? Some decades

More information

Data Requirements from NERSC Requirements Reviews

Data Requirements from NERSC Requirements Reviews Data Requirements from NERSC Requirements Reviews Richard Gerber and Katherine Yelick Lawrence Berkeley National Laboratory Summary Department of Energy Scientists represented by the NERSC user community

More information

Mission Need Statement for the Next Generation High Performance Production Computing System Project (NERSC-8)

Mission Need Statement for the Next Generation High Performance Production Computing System Project (NERSC-8) Mission Need Statement for the Next Generation High Performance Production Computing System Project () (Non-major acquisition project) Office of Advanced Scientific Computing Research Office of Science

More information

EOFS Workshop Paris Sept, 2011. Lustre at exascale. Eric Barton. CTO Whamcloud, Inc. eeb@whamcloud.com. 2011 Whamcloud, Inc.

EOFS Workshop Paris Sept, 2011. Lustre at exascale. Eric Barton. CTO Whamcloud, Inc. eeb@whamcloud.com. 2011 Whamcloud, Inc. EOFS Workshop Paris Sept, 2011 Lustre at exascale Eric Barton CTO Whamcloud, Inc. eeb@whamcloud.com Agenda Forces at work in exascale I/O Technology drivers I/O requirements Software engineering issues

More information

HPC & Big Data THE TIME HAS COME FOR A SCALABLE FRAMEWORK

HPC & Big Data THE TIME HAS COME FOR A SCALABLE FRAMEWORK HPC & Big Data THE TIME HAS COME FOR A SCALABLE FRAMEWORK Barry Davis, General Manager, High Performance Fabrics Operation Data Center Group, Intel Corporation Legal Disclaimer Today s presentations contain

More information

In-Memory Computing for Iterative CPU-intensive Calculations in Financial Industry In-Memory Computing Summit 2015

In-Memory Computing for Iterative CPU-intensive Calculations in Financial Industry In-Memory Computing Summit 2015 In-Memory Computing for Iterative CPU-intensive Calculations in Financial Industry In-Memory Computing Summit 2015 June 29-30, 2015 Contacts Alexandre Boudnik Senior Solution Architect, EPAM Systems Alexandre_Boudnik@epam.com

More information

Scientific Computing Programming with Parallel Objects

Scientific Computing Programming with Parallel Objects Scientific Computing Programming with Parallel Objects Esteban Meneses, PhD School of Computing, Costa Rica Institute of Technology Parallel Architectures Galore Personal Computing Embedded Computing Moore

More information

InfiniBand Strengthens Leadership as the High-Speed Interconnect Of Choice

InfiniBand Strengthens Leadership as the High-Speed Interconnect Of Choice InfiniBand Strengthens Leadership as the High-Speed Interconnect Of Choice Provides the Best Return-on-Investment by Delivering the Highest System Efficiency and Utilization TOP500 Supercomputers June

More information

IBM WebSphere Distributed Caching Products

IBM WebSphere Distributed Caching Products extreme Scale, DataPower XC10 IBM Distributed Caching Products IBM extreme Scale v 7.1 and DataPower XC10 Appliance Highlights A powerful, scalable, elastic inmemory grid for your business-critical applications

More information

Scalable Data Analysis in R. Lee E. Edlefsen Chief Scientist UserR! 2011

Scalable Data Analysis in R. Lee E. Edlefsen Chief Scientist UserR! 2011 Scalable Data Analysis in R Lee E. Edlefsen Chief Scientist UserR! 2011 1 Introduction Our ability to collect and store data has rapidly been outpacing our ability to analyze it We need scalable data analysis

More information

Bringing Big Data Modelling into the Hands of Domain Experts

Bringing Big Data Modelling into the Hands of Domain Experts Bringing Big Data Modelling into the Hands of Domain Experts David Willingham Senior Application Engineer MathWorks david.willingham@mathworks.com.au 2015 The MathWorks, Inc. 1 Data is the sword of the

More information

Achieving Real-Time Business Solutions Using Graph Database Technology and High Performance Networks

Achieving Real-Time Business Solutions Using Graph Database Technology and High Performance Networks WHITE PAPER July 2014 Achieving Real-Time Business Solutions Using Graph Database Technology and High Performance Networks Contents Executive Summary...2 Background...3 InfiniteGraph...3 High Performance

More information

Alternative Deployment Models for Cloud Computing in HPC Applications. Society of HPC Professionals November 9, 2011 Steve Hebert, Nimbix

Alternative Deployment Models for Cloud Computing in HPC Applications. Society of HPC Professionals November 9, 2011 Steve Hebert, Nimbix Alternative Deployment Models for Cloud Computing in HPC Applications Society of HPC Professionals November 9, 2011 Steve Hebert, Nimbix The case for Cloud in HPC Build it in house Assemble in the cloud?

More information

BIG DATA IN THE CLOUD : CHALLENGES AND OPPORTUNITIES MARY- JANE SULE & PROF. MAOZHEN LI BRUNEL UNIVERSITY, LONDON

BIG DATA IN THE CLOUD : CHALLENGES AND OPPORTUNITIES MARY- JANE SULE & PROF. MAOZHEN LI BRUNEL UNIVERSITY, LONDON BIG DATA IN THE CLOUD : CHALLENGES AND OPPORTUNITIES MARY- JANE SULE & PROF. MAOZHEN LI BRUNEL UNIVERSITY, LONDON Overview * Introduction * Multiple faces of Big Data * Challenges of Big Data * Cloud Computing

More information

HPC technology and future architecture

HPC technology and future architecture HPC technology and future architecture Visual Analysis for Extremely Large-Scale Scientific Computing KGT2 Internal Meeting INRIA France Benoit Lange benoit.lange@inria.fr Toàn Nguyên toan.nguyen@inria.fr

More information

DISTRIBUTED SYSTEMS AND CLOUD COMPUTING. A Comparative Study

DISTRIBUTED SYSTEMS AND CLOUD COMPUTING. A Comparative Study DISTRIBUTED SYSTEMS AND CLOUD COMPUTING A Comparative Study Geographically distributed resources, such as storage devices, data sources, and computing power, are interconnected as a single, unified resource

More information

Using In-Memory Computing to Simplify Big Data Analytics

Using In-Memory Computing to Simplify Big Data Analytics SCALEOUT SOFTWARE Using In-Memory Computing to Simplify Big Data Analytics by Dr. William Bain, ScaleOut Software, Inc. 2012 ScaleOut Software, Inc. 12/27/2012 T he big data revolution is upon us, fed

More information

The Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT): A Vision for Large-Scale Climate Data

The Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT): A Vision for Large-Scale Climate Data The Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT): A Vision for Large-Scale Climate Data Lawrence Livermore National Laboratory? Hank Childs (LBNL) and Charles Doutriaux (LLNL) September

More information

HPC data becomes Big Data. Peter Braam peter.braam@braamresearch.com

HPC data becomes Big Data. Peter Braam peter.braam@braamresearch.com HPC data becomes Big Data Peter Braam peter.braam@braamresearch.com me 1983-2000 Academia Maths & Computer Science Entrepreneur with startups (5x) 4 startups sold Lustre emerged Held executive jobs with

More information

Distribution transparency. Degree of transparency. Openness of distributed systems

Distribution transparency. Degree of transparency. Openness of distributed systems Distributed Systems Principles and Paradigms Maarten van Steen VU Amsterdam, Dept. Computer Science steen@cs.vu.nl Chapter 01: Version: August 27, 2012 1 / 28 Distributed System: Definition A distributed

More information

Big Data and Cloud Computing for GHRSST

Big Data and Cloud Computing for GHRSST Big Data and Cloud Computing for GHRSST Jean-Francois Piollé (jfpiolle@ifremer.fr) Frédéric Paul, Olivier Archer CERSAT / Institut Français de Recherche pour l Exploitation de la Mer Facing data deluge

More information

Large-Data Software Defined Visualization on CPUs

Large-Data Software Defined Visualization on CPUs Large-Data Software Defined Visualization on CPUs Greg P. Johnson, Bruce Cherniak 2015 Rice Oil & Gas HPC Workshop Trend: Increasing Data Size Measuring / modeling increasingly complex phenomena Rendering

More information

Qlik Sense scalability

Qlik Sense scalability Qlik Sense scalability Visual analytics platform Qlik Sense is a visual analytics platform powered by an associative, in-memory data indexing engine. Based on users selections, calculations are computed

More information

GigaSpaces Real-Time Analytics for Big Data

GigaSpaces Real-Time Analytics for Big Data GigaSpaces Real-Time Analytics for Big Data GigaSpaces makes it easy to build and deploy large-scale real-time analytics systems Rapidly increasing use of large-scale and location-aware social media and

More information