MAKING THE BUSINESS CASE
Lustre File Systems Are Poised to Penetrate Commercial Markets

A TechTarget White Paper
Long the de facto parallel file system for science and research, Lustre is poised to penetrate the center of the enterprise for big data and high-performance computing. But buyers take note: Lustre implementations vary widely.

As commercial enterprises outgrow network-attached storage (NAS) for performance, IT leaders are considering and deploying scalable parallel file systems. Big data workloads requiring high bandwidth and capacity may be an ideal fit for the capabilities of parallel file systems such as Lustre. Lustre is well suited to applications and workloads requiring large-block sequential I/O or massively concurrent reads and writes to a single file system.

About two-thirds of the world's fastest supercomputers and compute clusters rely on Lustre. Until recently, its strengths and technical merits received little attention among commercial storage buyers, historically served by NAS and an assortment of clustered file systems deployed in front of enterprise storage area networks (SANs). But that's changing. Parallel file systems are maturing, and big data workloads for a broad range of use cases are pushing beyond the limits of conventional NAS and SAN, forcing IT leaders and research teams to investigate Lustre and other parallel file systems for commercial and technical computing applications.

Still, to be accepted by skeptical commercial customers, Lustre requires the backing of leading compute and storage vendors. Support for Lustre in parallel applications took a big step forward with the creation of OpenSFS, a consortium working to advance Lustre and innovate through collaboration. The level of Lustre technical expertise, guidance and support provided by storage vendors varies greatly, to say the least: the majority of established storage vendors are in the business of shipping boxes and gear, not configuring and innovating on Lustre.
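The large-block sequential I/O strength described above comes from striping: a parallel file system splits a file into fixed-size stripes and distributes them across many object storage targets (OSTs), so a single large read or write is served by many storage servers at once. The sketch below is a conceptual model of that idea, not Lustre's actual implementation; the stripe size, OST count and per-OST bandwidth are made-up example figures.

```python
# Conceptual model of parallel file striping (not Lustre's actual
# on-disk layout): a file is divided into fixed-size stripes placed
# round-robin across object storage targets (OSTs), so large
# sequential I/O is spread over many targets in parallel.

def stripe_layout(file_size, stripe_size, ost_count):
    """Map each stripe of a file to the OST that would store it."""
    layout = []
    offset, ost = 0, 0
    while offset < file_size:
        length = min(stripe_size, file_size - offset)
        layout.append({"offset": offset, "length": length, "ost": ost})
        offset += length
        ost = (ost + 1) % ost_count  # round-robin placement
    return layout

def aggregate_bandwidth(ost_count, per_ost_mb_s):
    """Ideal aggregate throughput when all OSTs stream concurrently."""
    return ost_count * per_ost_mb_s

# A 1 GiB file striped in 4 MiB units across 8 hypothetical OSTs:
layout = stripe_layout(1 << 30, 4 << 20, 8)
print(len(layout))                  # 256 stripes
print(aggregate_bandwidth(8, 500))  # 4000 MB/s ideal aggregate
```

Real deployments fall short of the ideal aggregate because of network, metadata and contention overheads, but the model shows why adding OSTs scales sequential bandwidth nearly linearly.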
Traditional storage vendors will deliver products and, in some cases, install systems, but they lack expertise in deploying Lustre and typically offer single-vendor proprietary solutions tied to a block storage platform or vendor-specific system. Others, however, provide complete, open Lustre solutions, from the storage disks across the network to the client.

Considerations in Building the Business Case for Lustre

Lustre is open source, and innovation is community-driven across numerous companies and organizations, such as OpenSFS. That means innovation, architecture and capability are not tied to a single vendor. Considering the growing popularity of open source software, open systems and specific application frameworks like MapReduce on Hadoop for big data applications, Lustre is poised to complement much of what traditionally was done on file systems like the Hadoop Distributed File System.

For Lustre to gain traction in big data and widen its aperture of commercial applicability, successful technology vendors with established track records in high-performance computing (HPC) and big data need to stand behind Lustre and support it. A key goal should be to support open, interoperable multivendor solutions that offer storage choice, perform optimally and attach to common Linux clusters.

Some vendors may claim to support Lustre but will not validate all the components in a given solution. The vendor should support the entire configuration, from a Lustre client across the network and back to the disks storing the data. Consider the depth of a vendor's expertise in supporting end-to-end multivendor solutions before deciding on Lustre. Does the vendor support open systems, without locking you in to a single-vendor storage platform?

Lustre should also be considered if your organization has a big data initiative or workloads requiring large-block sequential I/O. High availability and performance require expertise, and that depends greatly on the vendor supplying Lustre. If you require solutions that can be uniquely adapted or optimized for performance or capacity, consider how important it is to have the flexibility of choice when choosing the storage platform. If you require petascale systems for I/O- or bandwidth-intensive applications, where capacity and performance can be scaled in balanced ratios, keep in mind how important integration and floor-space reduction are to your business. When scalability is important (where the system grows and changes over time), choose your Lustre vendor wisely.
The ability to scale capacity from tens of terabytes to multiple petabytes is a big selling point among many providers, but what does that really mean? Reducing complexity and scaling the system consistently and predictably are key tenets of good Lustre file system design. The system should perform optimally, remain available, and take full advantage of the underlying storage platform you choose to deploy.

Intimate knowledge of high-bandwidth applications is another important requirement, which underscores the importance of selecting a partner with expertise in scaling applications. Specifically, your vendor needs to take a holistic, systemic approach to scalability, rather than a myopic, storage-only focus. An understanding of application requirements, performance tuning and system design to achieve optimal results, along with systems and software knowledge (particularly in the open source community ecosystem), are important benefits of working with a proven, experienced Lustre solutions provider. Not every supplier will have the full understanding of the end-to-end system, supported by a fully optimized I/O delivery system, to ensure systems perform as stated.
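The point above about scaling capacity and performance in balanced ratios can be made concrete: in a scale-out design built from identical building blocks, each unit contributes a fixed slice of both bandwidth and capacity, and sizing the system means taking the larger of the two requirements. A rough sizing sketch, with hypothetical per-unit figures rather than any vendor's real numbers:

```python
import math

# Hypothetical sizing sketch for a scale-out parallel file system
# built from identical building blocks. Each unit adds a fixed
# amount of bandwidth (GB/s) and capacity (TB), so performance and
# capacity grow in a balanced ratio as units are added.

def units_needed(target_gb_s, target_tb, unit_gb_s, unit_tb):
    """Building blocks required to satisfy both bandwidth and capacity."""
    by_bandwidth = math.ceil(target_gb_s / unit_gb_s)
    by_capacity = math.ceil(target_tb / unit_tb)
    # The binding constraint (whichever demands more units) wins.
    return max(by_bandwidth, by_capacity)

# Example: target 100 GB/s and 2 PB, from units of 5 GB/s and 120 TB each.
n = units_needed(100, 2000, 5, 120)
print(n)  # 20 units: bandwidth-bound, since 100/5 = 20 > ceil(2000/120) = 17
```

The sketch also illustrates the buyer's question in the text: if the ratio of bandwidth to capacity per unit is fixed, a bandwidth-bound target forces you to buy capacity you may not need, which is why flexibility in configuration matters.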
Density is often touted as a benefit among many storage platforms. Yet with Lustre, density needs to include all componentry, from cards and cables to ports, servers and controllers. Density does not always equate to easier. Take care to choose solutions from vendors offering embedded, integrated systems (rather than what marketing calls "appliances") that actually reduce the number of managed components. This should be measured by the number of cables, ports and other components, as well as by capacity and performance per usable square foot. Some vendors measure raw capacity, while others quote usable capacity. How much space does the entire system occupy for large performance and capacity configurations?

Finally, resiliency, robustness, stability and data integrity need to be considered as part of a complete Lustre solution. Big data and HPC infrastructure leaders should take care to understand how solutions are tested, qualified and supported. Are systems robust, resilient and highly available? Is performance measured using real-world test cases and application workloads, or through smoke-and-mirrors marketing benchmarks and claims? At the end of the day, your business counts on the entire system performing for specific applications. Often, this is best measured by understanding the applications and workloads, and by matching the capabilities of the system to the application.

Cray's Approach to Lustre

Few companies can claim as long a commitment to developing and deploying holistic, systemwide Lustre-based solutions as Cray, which builds on its established reputation for scaling systems and its leadership position in supercomputing. Cray deploys Lustre-based storage and data management solutions for big data and HPC applications demanding performance, and for users demanding an optimal experience. Cray offers complete Lustre solutions that are supported from the disk to the client deployed on the compute cluster.
For customers that prefer preintegrated systems that come fully Lustre-ready, with all components and software preinstalled and preconfigured, the Cray Sonexion family features modular, scale-out technology for simplified deployment and balanced scalability of performance and capacity. Sonexion scales in a near-linear fashion, supporting massive capacity and performance in a single Lustre file system: large-block throughput ranges from 5GB per second to more than 1TB per second in a single file system.

For customers requiring choice and flexibility combined with Cray's Lustre expertise, the company offers component-based Lustre solutions. These provide a high degree of configurability to deliver performance and capacity in optimal configurations and a range of sizes. Cray's Lustre solutions are also optimized for DataDirect Networks (DDN) and NetApp block storage systems, which Cray both sells and supports. Systems range from small to large: from gigabytes to over a terabyte per second of bandwidth, and from terabytes to petabytes of capacity.

Cray's leadership position in HPC continues to broaden to big data and is taking Lustre to commercial enterprises that demand trust, experience and a single point of support for open, multivendor solutions. "Through its leadership and involvement with OpenSFS, Cray advocates for the development of features that drive efficient performance at scale," notes Intersect360 Research's Addison Snell in his April 2013 white paper "Cray Brings Leadership and Customer Focus to Lustre Scalability." He adds that Cray's leadership role in OpenSFS has allowed it to work closely with many leading Lustre hardware and software providers, and that this technology-inclusive view puts Cray's board-level membership in a position to help steer Lustre in a direction that benefits HPC customers without lock-in to a particular storage solution.

Finally, Cray's long history in supercomputing-class applications with tremendous I/O requirements means it understands how Lustre is deployed in real-world applications. Cray's staff of systems engineers and application consultants is well versed in the requirements of customers that count on Lustre to deliver both the capacity and performance levels to make a difference in data-intensive workloads, now including big data and HPC/high-availability computing.

Summary

Any organization looking for scalable storage solutions in support of big data or HPC should be evaluating Lustre and talking with Cray. With vendors like Cray behind Lustre, open and scalable file systems are making their way into commercial applications. Choose your Lustre vendor wisely. Look for Lustre implementations that offer flexible storage options and avoid vendor lock-in by supporting multiple storage platforms and Linux compute clusters of any type.
Those solutions also need to be backed by experts in application scalability and built on the right compute, network and storage components. Cray's suite of Lustre solutions offers that choice and flexibility, backed by Cray's expertise and support. Commercial organizations looking to deploy complete, end-to-end Lustre solutions for big data and HPC should consider solutions from Cray.
Desktop Virtualization Solutions Simplified Appliance Seizing opportunities in today s mobile and social world Today s business models and customer engagements are increasingly real-time, dynamic and contextual.
More informationCopyright 2015 EMC Corporation. All rights reserved.
Copyright 2015 EMC Corporation. All rights reserved. 1 Radically Accelerate Laying The Foundation Of The SDDC & Redefine Simplicity With EMC VSPEX BLUE Based On VMware EVO:RAIL Rob Glanzman Director of
More informationOverview executive SUMMArY
EMC Isilon TCO Benefits for Large-Scale Home Directories Overview EMC Isilon scale-out network-attached storage (NAS) has rapidly gained popularity over the past several years, successfully moving from
More informationBlueArc s Architecture for NFS v4.1 and pnfs. Delivering Performance Through Standards
W H I T E P A P E R BlueArc s Architecture for NFS v4.1 and pnfs Delivering Performance Through Standards W H I T E P A P E R Table of Contents Introduction...1 File I/O: Addressing the (New) HPC Bottleneck...1
More informationInside Track Research Note. In association with. Hyper-Scale Data Management. An open source-based approach to Software Defined Storage
Research Note In association with Hyper-Scale Data Management An open source-based approach to Software Defined Storage January 2015 In a nutshell About this The insights presented in this document are
More informationWell packaged sets of preinstalled, integrated, and optimized software on select hardware in the form of engineered systems and appliances
INSIGHT Oracle's All- Out Assault on the Big Data Market: Offering Hadoop, R, Cubes, and Scalable IMDB in Familiar Packages Carl W. Olofson IDC OPINION Global Headquarters: 5 Speen Street Framingham, MA
More informationPowerful Management of Financial Big Data
Powerful Management of Financial Big Data TickSmith s solutions are the first to apply the processing power, speed, and capacity of cutting-edge Big Data technology to financial data. We combine open source
More informationRed Hat Enterprise Linux solutions from HP and Oracle
Red Hat Enterprise Linux solutions from HP and Oracle Driven by innovation to improve interoperability and scalability, HP, Red Hat, and Oracle deliver a broad and deep range of Linux offerings to enhance
More informationInfiniBand -- Industry Standard Data Center Fabric is Ready for Prime Time
White Paper InfiniBand -- Industry Standard Data Center Fabric is Ready for Prime Time December 2005 Server and storage clusters benefit today from industry-standard InfiniBand s price, performance, stability,
More informationStorage Systems Performance Testing
Storage Systems Performance Testing Client Overview Our client is one of the world s leading providers of mid-range and high-end storage systems, servers, software and services. Our client applications
More informationIntelligent Data Center Solutions
Intelligent Data Center Solutions Panduit s Unified Physical Infrastructure (UPI): a Guiding Vision A unified approach to physical and logical systems architecture is imperative for solutions to fully
More informationBernie Velivis President, Performax Inc
Performax provides software load testing and performance engineering services to help our clients build, market, and deploy highly scalable applications. Bernie Velivis President, Performax Inc Load ing
More informationCHOOSING THE RIGHT STORAGE PLATFORM FOR SPLUNK ENTERPRISE
WHITEPAPER CHOOSING THE RIGHT STORAGE PLATFORM FOR SPLUNK ENTERPRISE INTRODUCTION Savvy enterprises are investing in operational analytics to help manage increasing business and technological complexity.
More informationVirtualizing Exchange
Virtualizing Exchange Simplifying and Optimizing Management of Microsoft Exchange Server Using Virtualization Technologies By Anil Desai Microsoft MVP September, 2008 An Alternative to Hosted Exchange
More informationEssentials Guide CONSIDERATIONS FOR SELECTING ALL-FLASH STORAGE ARRAYS
Essentials Guide CONSIDERATIONS FOR SELECTING ALL-FLASH STORAGE ARRAYS M ost storage vendors now offer all-flash storage arrays, and many modern organizations recognize the need for these highperformance
More informationBIG DATA-AS-A-SERVICE
White Paper BIG DATA-AS-A-SERVICE What Big Data is about What service providers can do with Big Data What EMC can do to help EMC Solutions Group Abstract This white paper looks at what service providers
More informationP R O D U C T P R O F I L E. Gridstore NASg: Network Attached Storage Grid
Gridstore NASg: Network Attached Storage Grid Scale-Out NAS for the SMB Market June 2010 Smart start-up Gridstore has announced their Networked Attached Storage grid product NASg. NASg offers powerful
More informationThe Hardware Dilemma. Stephanie Best, SGI Director Big Data Marketing Ray Morcos, SGI Big Data Engineering
The Hardware Dilemma Stephanie Best, SGI Director Big Data Marketing Ray Morcos, SGI Big Data Engineering April 9, 2013 The Blurring of the Lines Business Applications and High Performance Computing Are
More informationAppro Supercomputer Solutions Best Practices Appro 2012 Deployment Successes. Anthony Kenisky, VP of North America Sales
Appro Supercomputer Solutions Best Practices Appro 2012 Deployment Successes Anthony Kenisky, VP of North America Sales About Appro Over 20 Years of Experience 1991 2000 OEM Server Manufacturer 2001-2007
More information