shaping tomorrow with you
ETERNUS Business-centric Storage: Enhancements of ETERNUS DX / SF
Global Product Marketing Storage
ETERNUS Business-centric Storage
Agenda:
1. Overview of the top 3 innovations
2. Overview of additional new functionalities
3. Technical details of ETERNUS DX / SF innovations
Overview of the Top 3 Innovations
Key Innovations:
1. ETERNUS Storage Cluster
2. ETERNUS Fast Recovery
3. ETERNUS Automated Quality of Service Management
Key Business Values:
1. Increased business continuity in case of system failures
2. Reduced risk of data loss when using high-capacity disks
3. Automatic performance management by business priority of data
Challenge: bigger systems, bigger disks, bigger risks
Petabyte-scale storage systems: planned or unplanned downtime has a large-scale negative impact on business.
Hard disks with up to 6 TB: rebuilding a failed disk within a RAID group of a productive system can take several days.
The probability of critical RAID situations, and ultimately data loss, is constantly growing.
How ETERNUS innovations solve these challenges
ETERNUS Storage Cluster: a standby storage system transparently takes over the identity of a failed system, enabling business continuity without introducing a storage virtualization layer.
ETERNUS Fast Recovery: a Fast Rebuild Pool for failed disks; rebuild write operations are parallelized across several disks, reducing rebuild time from days to hours.
The challenge: competition for storage performance
Applications compete for storage resources, and lower-priority tasks can impede the data performance of high-priority applications. Manual system tuning to allocate storage performance by business priorities is complex.
There is a need for easy, automated quality of service management: it is key for storage consolidation without performance issues for business-critical applications, and for prioritizing storage resources according to business needs.
Enhanced Automated Quality of Service Management
The administrator defines priorities or sets target response times in milliseconds, and ETERNUS DX / SF does the rest.
Unique: storage is controlled by the business priorities of the data. Technically, the system adjusts internal bandwidth to achieve the required response time.
New: should this not be sufficient to achieve the required target response time, priority data is moved to faster disks or to SSD using the Automated Storage Tiering (AST) functions; Automated Quality of Service Management controls the AST functionality.
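The two-stage behavior described above (first tune internal bandwidth, then escalate to tier relocation) can be sketched as a simple control loop. This is a hypothetical illustration only; `Volume`, `auto_qos_step`, and the tier names are invented for this sketch and are not the actual ETERNUS SF API.

```python
from dataclasses import dataclass

@dataclass
class Volume:
    name: str
    target_ms: float        # admin-defined target response time
    measured_ms: float      # current average response time
    bandwidth_share: float  # fraction of internal bandwidth granted (0.0 - 1.0)
    tier: str               # current media tier

TIERS = ["nearline", "online", "ssd"]  # slowest to fastest (assumed ordering)

def auto_qos_step(vol: Volume) -> str:
    """One evaluation cycle: tune bandwidth first, then escalate to tiering (AST)."""
    if vol.measured_ms <= vol.target_ms:
        return "ok"
    if vol.bandwidth_share < 1.0:
        # Step 1: grant the volume a larger share of internal bandwidth.
        vol.bandwidth_share = min(1.0, vol.bandwidth_share + 0.1)
        return "bandwidth_increased"
    if vol.tier != TIERS[-1]:
        # Step 2: bandwidth is exhausted, so relocate data to a faster tier.
        vol.tier = TIERS[TIERS.index(vol.tier) + 1]
        return "promoted_to_" + vol.tier
    return "target_unreachable"
```

In this sketch, a volume missing its target first receives more bandwidth; only when its share is already at 100% does the loop hand the problem over to tier promotion, mirroring the "should this not be sufficient" escalation on the slide.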
Additional enhancements
- Support of remote replication by ETERNUS DX100 S3
- 6 Gb SAS direct attach for ETERNUS DX100 / DX200
- Cold data handling: enhancement of Eco-mode with complete shutdown of unused disks (disk spin AND disk controller)
- Enhanced unified functionalities: quota management of NAS capacity for users; users can recover previous versions of files; multiple file systems
- Field upgrade of ETERNUS DX systems to bigger models without data migration (DX100 S3, DX200 S3, DX500 S3, DX600 S3)
- Thin provisioning pool capacity doubled to 256 TB for Flexible Tier pools (DX100 / DX200)
Technical details about the top 3 innovations
Business Continuity with Storage Cluster
- Synchronous mirroring between active and standby system
- Covers all kinds of array outages (failure, maintenance, disaster)
- Transparent to servers and applications
- Triggered automatically and/or manually
- Non-stop operation
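Synchronous mirroring means a host write is acknowledged only after both the active and the standby array have committed it, which is what keeps the standby ready to take over with zero data loss. A minimal sketch of that write path follows; `Array` and `synchronous_write` are invented names, not the ETERNUS REC implementation.

```python
class Array:
    """Toy model of one storage array's committed blocks."""
    def __init__(self, name: str):
        self.name = name
        self.blocks: dict[int, bytes] = {}

    def commit(self, lba: int, data: bytes) -> bool:
        self.blocks[lba] = data
        return True

def synchronous_write(primary: Array, secondary: Array, lba: int, data: bytes) -> bool:
    """Acknowledge the host only after BOTH arrays have committed the write."""
    ok_primary = primary.commit(lba, data)
    ok_secondary = secondary.commit(lba, data)  # mirrored over the array-to-array link
    return ok_primary and ok_secondary
```

The design consequence sketched here is that after any acknowledged write, primary and secondary hold identical data, so a failover never exposes the server to stale blocks.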
Storage Cluster functions - Auto Failover
Auto failover in case of array failure or RAID failure:
- Failover when the path between the Storage Cluster Controller and the primary array and the array-to-array path between both storages fail at the same time.
- Failover when a RAID failure occurs on the primary array (RAID blocked by a disk failure).
- Auto failover addresses split-brain situations involving the Storage Cluster Controller and the ETERNUS DX S3 storage.
(Diagram: a business server sends I/O via the SAN while the Storage Cluster Controller monitors both arrays; on an array failure or RAID failure, I/O fails over automatically from the primary ETERNUS DX S3 holding the business data to the secondary ETERNUS DX S3 holding the mirrored data.)
Storage Cluster Failover concept
Synchronous mirroring between arrays (REC).
Normal operation: the CA ports on the primary and secondary storage are paired (identical WWN). The CA port on the primary storage is active, while the CA port on the secondary storage is hidden (Link Down).
Failover: the status of the CA port on the primary system changes to Link Down, and the CA port on the secondary array changes to Link Up, taking over the volume information and the CA WWN. Access switches on I/O retry from the server, transparently to the server side.
(Diagram: business server I/O via the SAN to paired CA ports on the primary and secondary ETERNUS DX S3, connected by a synchronous remote copy path.)
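The Link Down / Link Up takeover described above can be modeled as a small state change: the secondary CA port inherits the WWN and volume mapping of the primary port, so the server's retried I/O lands on the same target identity. This is a hypothetical sketch; `CAPort` and `failover` are invented names.

```python
from dataclasses import dataclass, field

@dataclass
class CAPort:
    wwn: str
    link_up: bool
    volumes: dict = field(default_factory=dict)  # LUN -> volume info

def failover(primary: CAPort, secondary: CAPort) -> None:
    """Primary goes Link Down; secondary takes over WWN and volumes, goes Link Up."""
    primary.link_up = False
    secondary.wwn = primary.wwn            # identical WWN pair: server sees the same target
    secondary.volumes = dict(primary.volumes)
    secondary.link_up = True
```

Because the WWN the server addresses never changes, the failover surfaces only as an I/O retry on the server side, which matches the "transparent to the server" claim on the slide.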
Storage Cluster functions - Manual Failover
Manual failover helps in case of:
- Planned power shutdowns
- Disruptive upgrades (firmware, array, etc.)
Fast Recovery speeds up rebuild processes
- High-reliability feature
- Rebuild takes place in reserved areas of the RAID group disks
- Simultaneous writes to many targets
- Dramatically shortens rebuild times in exposed RAID groups
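Why spreading rebuild writes over many reserved areas shortens recovery can be illustrated with back-of-the-envelope arithmetic: if the rebuild is write-bound, the time divides roughly by the number of parallel write targets. The numbers below are assumed for illustration, not measured ETERNUS figures, and they ignore production I/O contention, which is what stretches real single-spare rebuilds to days.

```python
def rebuild_hours(disk_tb: float, write_mb_s: float, parallel_targets: int) -> float:
    """Estimated rebuild time when writes are spread over `parallel_targets` disks."""
    total_mb = disk_tb * 1_000_000            # TB -> MB (decimal units)
    effective_mb_s = write_mb_s * parallel_targets
    return total_mb / effective_mb_s / 3600   # seconds -> hours

# Classic rebuild: all reconstructed data funnels into one hot spare.
classic = rebuild_hours(6.0, 100.0, parallel_targets=1)   # ~16.7 hours in the ideal case
# Fast-Recovery-style rebuild: writes parallelized across reserved areas of many disks.
fast = rebuild_hours(6.0, 100.0, parallel_targets=12)     # ~1.4 hours
```

Even under these idealized assumptions, the single-target rebuild of a 6 TB disk takes the better part of a day; with a dozen parallel write targets the same reconstruction fits into hours.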
Easy Auto QoS
Further minimizes administrative effort: easy settings per volume, with three priority levels (High, Middle, Low).
Auto QoS and AST integration
Combines QoS and automated tiering: automated relocation to faster media, efficient and automated to achieve the best performance.
Auto QoS - the complete picture. New and unique!
New: Remote replication available for DX100 S3
Remote Equivalent Copy (REC) is now available for the ETERNUS DX100 S3. Data on a DX100 S3 system can be replicated to any ETERNUS DX system with REC (DX90 / DX90 S2 / DX400 series / DX400 S2 series / DX8700 / DX8700 S2 / DX8000 / DX4000 / DX6000).
Business benefits:
- Affordable DR concepts for SMB customers
- Midsize and large enterprises can replicate data from small ETERNUS DX100 systems in branch offices or subsidiaries to larger DX systems in the central data center, whether identical or new systems
- No need to invest in identical hardware at all locations
Unified Storage end-user related features: Quota management
- Limits NAS capacity per file system for single users or user groups
- Event logging for exceeding the warning threshold and the capacity limit
- Flexible adjustment according to business needs at any time
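The quota behavior described above, a warning threshold plus a hard capacity limit with both events logged, can be sketched as follows. The function and parameter names are hypothetical, not the ETERNUS SF NAS API.

```python
def check_quota(user: str, used_gb: float, warn_gb: float,
                limit_gb: float, log: list) -> bool:
    """Return True if the write may proceed; log events when thresholds are crossed."""
    if used_gb >= limit_gb:
        log.append(f"LIMIT: {user} reached capacity limit ({used_gb} >= {limit_gb} GB)")
        return False  # hard limit: further writes are rejected
    if used_gb >= warn_gb:
        log.append(f"WARN: {user} exceeded warning threshold ({used_gb} >= {warn_gb} GB)")
    return True       # soft threshold: write proceeds, but the event is recorded
```

The sketch shows the two distinct behaviors the slide lists: crossing the warning threshold only generates a log event, while the capacity limit actually stops further writes.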
Unified Storage end-user related features: File recovery
Recovery of previous file versions protects against file manipulation errors:
- Multi-generation snapshots at predefined schedules
- End-user operability; administrator intervention not needed
- Supports all major operating systems
Unified Storage - Multiple File Systems
With ETERNUS SF 16.1, multiple shared file systems can be created; one NAS volume is created for each file system. This significantly expands the maximum NAS capacity.

ETERNUS Model | Max # of File Systems / NAS Volumes | Max NAS Total Capacity
DX100 S3     | 1                                   | 128 TB
DX200 S3     | 2                                   | 256 TB
DX500 S3     | 4                                   | 384 TB
DX600 S3     | 8                                   | 768 TB
Better Cold Data Handling
Cold data is data stored on disk but almost never read again: legal data, backups, third copies of data, archives.
ETERNUS DX Eco-mode (MAID, a proven technology) has been further improved: drives can now be completely powered off.
More value with ETERNUS SF Feature Pack
Feature Pack (Starter Pack and Tiering Pack) - advantages for service levels and costs:
- ETERNUS SF SC (Storage Cruiser): reduced management costs, operate more storage with existing admins, reduce failures, save energy costs (big savings)
- ETERNUS SF ACM (Advanced Copy Manager): protect data against disk, system and site failures (ACM local copy and ACM remote copy)
- ACM for Exchange Server (new): built-in wizard-based setup of scheduled backup and restore operations
- ACM for MS SQL Server (new): built-in wizard-based setup of scheduled backup and restore operations
- ETERNUS SF SC Optimization Option: better utilization of SSDs, less investment, higher performance at lower cost
Recommended add-ons and their benefits:
- ETERNUS SF SC Auto QoS Option: aligns performance allocation to business priorities, enabling higher system utilization and less investment in hardware
- ETERNUS SF Storage Cluster: synchronous mirroring between active and standby system covers all kinds of array outages (failure, maintenance, disaster)
- ETERNUS ESM: protects application data, increases business continuity of databases, virtual environments and business-critical apps
ETERNUS DX S3: impressive SPC-1 benchmarks
- Fastest response times ever measured*: DX600 at 0.61 milliseconds and DX200 at 0.63 milliseconds, each at 100% load
- Highest IOPS for a midrange system*: 320,206.35 SPC-1 IOPS
- Highest IOPS for an entry system*: 200,500.95 SPC-1 IOPS
- Beats most midrange and many high-end systems, such as IBM XIV, IBM V7000, HDS HUS, HP 3PAR
- Impressive price/performance: $0.77 / SPC-1 IOPS; only specialized all-flash arrays can match the ETERNUS DX S3
* As of July 25, 2014; see http://www.storageperformance.org/results/benchmark_results_spc1_active/#fujitsu_spc1
ETERNUS DX200F all-flash array: DX S3 performance architecture
- Full-speed backend: 12 Gb/s SAS3 disk interface and 12 Gb/s SAS3 solid state drives (SSDs)
- Choice of capacity up to 38.4 TB: 5-24 MLC SSDs with 800 or 1,600 GB drives
- Choice of connectivity: 16 Gb Fibre Channel, 10 Gb iSCSI, mixed configurations possible
Call to action: update your customers now!
Key Innovations:
1. ETERNUS Storage Cluster
2. ETERNUS Fast Recovery
3. ETERNUS Automated Quality of Service Management
Key Business Values:
1. Increased business continuity in case of system failures
2. Reduced risk of data loss when using high-capacity disks
3. Automatic performance management by business priority of data
ETERNUS Business-centric Storage