Luxembourg, June 3, 2014
Said BOUKHIZOU, Technical Manager
m +33 680 647 866
sboukhizou@datacore.com
SOFTWARE-DEFINED STORAGE IN ACTION
What's new in SANsymphony-V 10
Storage Market in Midst of Disruption
[Diagram: server/storage evolution from 1985 (direct-attached) through 2000 (NAS/SAN) to today (Flash, Converged Systems, Flash Arrays, Hybrid Arrays, Cloud)]
CHALLENGES
- Too many incompatible devices
- New software for every device
- Silos of storage management
Enterprise Storage Market
[Diagram: 30 years ago (server with internal storage), 5 years ago (servers with external storage), now (servers, external storage, and cloud storage)]
Drivers of change:
- Server flash
- Capacity growth
- Commoditization
- Refresh cycle
- Cloud economics
- Software-defined
The DataCore Vision
"DataCore is committed to creating an enduring and dynamic software-driven storage architecture, liberating storage from static hardware-based limitations."
George Teixeira, DataCore CEO, 1998
Why Software-defined Storage?
The right software must be able to do a few things:
1. Enable different storage devices to communicate with one another
2. Separate advances in software from advances in hardware
3. Pool all storage capacity and provide centralized management
4. Make hardware maintenance, data migrations, and hardware refreshes easy
One Software Platform for any Storage Hardware
Accelerate | Centralize & Automate | Pool & Protect
Introducing SANsymphony-V 10
Cross-device Storage Services: Define Your Storage
- 10th-generation product
- Runs on standard x86 servers
- Most comprehensive hardware-agnostic storage stack in the industry
- 25,000+ deployments worldwide
Storage services: Auto-tiering, Async Replication, Virtual SAN, Storage Pooling, Storage Load Balancing, Centralized Management, Analysis & Reporting, Sync Mirroring, Adaptive Caching, Thin Provisioning, Data Migration, Snapshots, Continuous Data Protection, NAS/SAN (Unified Storage)
Use Cases for SANsymphony-V 10
1. Virtualize your existing storage hardware
2. Create virtual SANs with server-attached storage
3. Integrate Flash/SSDs with existing storage
Solution Overview: Virtualize your existing storage hardware
Virtualize External Storage Hardware
FUNDAMENTALS OF DATACORE STORAGE VIRTUALIZATION
- Runs on standard x86 servers
- One set of common storage services for all storage devices
- All storage capacity in a single pool, eliminating wasted capacity
- Dissimilar storage systems communicate seamlessly, reducing complexity and preventing downtime
- Replicates data, leaving no single point of failure
- Seamless scalability with no need to commit to a single hardware manufacturer
Solution Overview: Create virtual SANs with server-attached storage
Transform DAS, Flash & SSDs into a Virtual SAN
- Runs as a virtual machine on any application server
- Pools local storage resources for all to share
Full Feature Functionality with Virtual SAN
- Dramatically increase performance
- Share Flash cards between application servers
- Auto-tier between DRAM, flash and disk
- Share capacity across a cluster of servers
- Speed up apps with DRAM caching
Massive Scale for Tier 1 Apps and VDI
- Scale out to 32 nodes
- Scale up to 32 PB
- Accelerate up to 50 million IOPS
Solution Overview: Integrate Flash/SSDs with existing storage
Common Challenges with Flash Storage
INTEGRATION AND MANAGEMENT
- How do I integrate this new technology with my existing environment?
- What new processes do I need to manage this investment?
AVAILABILITY OF SOFTWARE FEATURES
- How do I get the functionality I need, like high availability and failover?
- What tools will I have to handle data migrations, tiering and thin provisioning to ensure that I'm making the most of my investment?
SHARING FLASH STORAGE BETWEEN APPLICATIONS
- Do I really need to buy dedicated Flash hardware for every application?
The Premier Software Stack for Flash
One Software Platform for any Storage Hardware: Flash (SSDs and arrays), Flash/HDD hybrid
Cross-device storage services: Auto-tiering, Async Replication, Virtual SAN, Storage Pooling, Storage Load Balancing, Centralized Management, Analysis & Reporting, Sync Mirroring, Adaptive Caching, Thin Provisioning, Data Migration, Snapshots, Continuous Data Protection, NAS/SAN (Unified Storage)
Easy Integration and Sharing of Flash Storage
Deployment options: Flash cards in the Virtual SAN, Flash cards in DataCore nodes, Flash arrays in the storage pool
- Share Flash between servers and applications
- Minimize downtime and risk of integration
- Complete set of storage services
- Block-level auto-tiering
- Easily add Flash anytime
- Realistic path to all-Flash environments
Putting it all together
Federate Virtual SAN with Physical SAN
- Expand beyond the limits of application servers
- Fast local primary storage & secondary central pool
- Same set of services across both topologies
- Leverage central storage resources and services (replication, backups, etc.)
- Managed from the same console
Centrally Managed Environment
[Diagram: Branch Offices (Virtual SAN) and a Disaster Recovery Site (Virtual SAN) connected to Major Data Centers (Central SANs) and Cloud Storage]
SANsymphony-V 10: What's New
Major Areas of Enhancement
- Virtual SAN: pool DAS on hosts
- Scalability: up to 32 nodes
- Smart Deployment Wizard
- Self-tuning
- Performance visualization
- High-performance networking
Smart Deployment Wizard
Deployment tool to simplify SANsymphony-V installation and setup.
Different templates to choose from:
- Single HA pair
- Unified Storage
- Virtual SAN node
Self-Tuning Enhancements
Disk Pool Auto-Tiering: Preemptive Tier Space Management
- A customizable percentage of capacity is kept available for new allocations and promotions
- The percentage applies to any tier within a pool except the lowest
- SAUs that heat up may get promoted faster if free space is available in the tier above
- New allocations go to the highest tier according to the Storage Profile first
- SAUs allocated in the reserved space may prompt migrations to keep space clear
[Diagram: new allocations flowing into Tier 1, Tier 2, Tier 3]
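The reserve logic on this slide can be sketched as follows. This is a minimal, hypothetical model, not DataCore's implementation: the `Tier` class, the 10% `RESERVE_PCT` default, and the function names are illustrative assumptions.

```python
# Hypothetical sketch of preemptive tier space management.
RESERVE_PCT = 0.10  # customizable percentage kept free per tier (assumed default)

class Tier:
    def __init__(self, name, capacity_saus):
        self.name = name
        self.capacity = capacity_saus   # capacity measured in SAUs
        self.used = 0

    def free_saus(self):
        return self.capacity - self.used

    def has_promotion_headroom(self):
        # promotions are accepted only while the reserve stays clear
        return self.free_saus() > self.capacity * RESERVE_PCT

def allocate_sau(tiers):
    """New allocations go to the highest tier (per the Storage Profile) first."""
    for tier in tiers:                  # tiers ordered fastest to slowest
        if tier.free_saus() > 0:
            tier.used += 1
            return tier.name
    raise RuntimeError("pool full")

def try_promote(tiers, from_index):
    """A hot SAU moves one tier up only if the tier above has reserve headroom."""
    if from_index == 0:
        return from_index               # already in the top tier
    above = tiers[from_index - 1]
    if above.has_promotion_headroom():
        above.used += 1
        tiers[from_index].used -= 1
        return from_index - 1
    return from_index
```

Note how a nearly full upper tier blocks promotions (keeping the reserve for new allocations), while a tier with headroom accepts the hot SAU immediately.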
Auto-Tier Reserve Space
Disk Pool Auto-Tiering: Write-aware Auto-Tiering
- Takes write IOs into account when building the heat map
- Can be turned on/off at the Virtual Disk level
- Configurable via the Storage Profile
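The effect of the toggle can be illustrated with a tiny heat-map sketch. The heat metric (simple read+write sum) and the dictionary layout are assumptions for illustration only; the actual weighting is not described on the slide.

```python
def sau_heat(read_ios, write_ios, write_aware=True):
    """Illustrative heat metric: with write-aware tiering on, write IOs
    count toward the temperature; with it off, only reads do."""
    return read_ios + (write_ios if write_aware else 0)

def heat_map(saus, write_aware=True):
    """Order SAUs by hotness, descending, as in the new heat map display."""
    return sorted(saus,
                  key=lambda s: sau_heat(s["reads"], s["writes"], write_aware),
                  reverse=True)
```

A write-heavy SAU that ranks coldest with the option off can rank hottest with it on, which is exactly why the setting is exposed per Virtual Disk.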
Disk Pool: Heat Map Display
- Display change: SAUs are now ordered by hotness, descending from left to right
- IO-per-second and IO latency counters
Disk Pool: Intelligent Rebalancing
- Improved rebalancing logic takes Virtual Disks into account when leveling SAUs across spindles
- Avoids piling up SAUs belonging to the same Virtual Disk on a few physical spindles
- Ensures uniform distribution of SAUs per Virtual Disk
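One simple way to get this per-Virtual-Disk leveling is a round-robin deal per Virtual Disk, sketched below. This is an assumed illustration of the idea, not DataCore's actual rebalancing algorithm.

```python
from collections import defaultdict
from itertools import cycle

def rebalance(saus, disks):
    """Illustrative leveling pass: each Virtual Disk's SAUs are dealt
    round-robin across the physical disks, so no single spindle piles
    up many SAUs of the same Virtual Disk.

    saus: list of (sau_id, vdisk_name); disks: list of disk names."""
    placement = defaultdict(list)
    cursors = {}                        # one round-robin cursor per vdisk
    for sau_id, vdisk in saus:
        if vdisk not in cursors:
            cursors[vdisk] = cycle(disks)
        placement[next(cursors[vdisk])].append((sau_id, vdisk))
    return placement
```

With 6 SAUs of one Virtual Disk and 3 disks, every disk ends up holding exactly 2 of them, which is the uniform per-VDisk distribution the slide describes.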
Disk Pool: Targeted Recovery
- Eliminates recovery of the entire pool when a pool disk fails
- A new wizard-driven Purge action determines which Virtual Disks are affected, what data (SAUs) they are missing, and how to proceed
- Mirrored Virtual Disks are restored by recovering the missing SAUs from the good side
- Single and Dual Virtual Disks are brought back online, but with unallocated holes in them (no mirror to recover from)
- Initiating a pool recovery is a manual action due to its potentially destructive nature to single/dual Virtual Disks as well as Snapshots and Rollbacks
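The analysis step of the Purge action can be sketched as below. The data structures (`sau_map` mapping SAU id to its Virtual Disk and disk) are hypothetical; the point is that only SAUs on the failed disk are examined, so the rest of the pool needs no recovery.

```python
def purge_plan(failed_disk, sau_map, mirrored_vdisks):
    """Illustrative Purge analysis.

    sau_map: {sau_id: (vdisk_name, disk_name)}
    mirrored_vdisks: set of Virtual Disk names that have a mirror side."""
    recover, holes = [], []
    for sau, (vdisk, disk) in sau_map.items():
        if disk != failed_disk:
            continue                    # untouched SAU: no full-pool recovery
        if vdisk in mirrored_vdisks:
            recover.append(sau)         # copy back from the surviving mirror side
        else:
            holes.append(sau)           # vdisk comes online with unallocated holes
    return recover, holes
```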
Targeted Recovery in Action
Performance Visualization Enhancements
Performance Visualization
- Additional System Health displays to visualize storage allocation and channel utilization
- IO/s and IO latency displayed per disk in the pool heat map
- New physical-disk maximum-latency counter
- Performance counters for host-port pending/outstanding commands
- System-wide performance view
System Health: Bandwidth Display
More Insight from Heat Maps
[Screenshot: heat map with per-disk counters, e.g. 178 IO/s at 14 ms latency]
IO Poller Process Enhancements
Multi-core CPU thread scheduling optimization:
- Dynamic scaling of poller thread instances
- Always uses at least 2 threads (unless the system has 4 or fewer CPUs)
- The system spins up more threads if the front-end workload demands it and more CPU cores are available
- New performance counters: ProductivePolls, UnproductivePolls & TotalThreads
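The scaling rule above can be modelled with a small sketch. The 80%/20% productivity thresholds and the one-thread-at-a-time step are assumptions made for illustration; only the floor behaviour (2 threads, or 1 on systems with 4 or fewer CPUs) comes from the slide.

```python
def poller_thread_target(cpu_count, productive, unproductive, current):
    """Illustrative scaling rule: keep a floor of 2 threads on systems with
    more than 4 CPUs, grow while polls are mostly productive (front-end
    workload demands it) and spare cores remain, shrink when mostly idle."""
    floor = 1 if cpu_count <= 4 else 2
    threads = max(current, floor)
    total = productive + unproductive
    ratio = productive / total if total else 0.0
    if ratio > 0.8 and threads < cpu_count:
        return threads + 1              # busy pollers: add an instance
    if ratio < 0.2 and threads > floor:
        return threads - 1              # mostly unproductive polls: scale back
    return threads
```

The ProductivePolls/UnproductivePolls counters mentioned on the slide would feed exactly this kind of decision, with TotalThreads exposing the result.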
Networking Enhancements
Faster Gigabit Ethernet Choices
For iSCSI and remote replication:
- 10/40 GbE Emulex NICs
- 10/40/56 GbE Mellanox NICs
- 40/56 GbE Mellanox switches
NIC teaming (combine 2 or more NICs as one):
- Aggregate bandwidth
- Maintain the link when a NIC is down
Virtual SAN Optimizations
- Loopback virtual disks now appear in a new DataCore Disks folder under the server
- Makes it easier to serve a virtual disk to a SANsymphony-V node
- Smart Deployment Wizard extended to assist with multiple setups
Large Groups
- Up to 32 nodes per Server Group
- Combine traditional DataCore Servers and Virtual SAN nodes within the same group
SANsymphony-V 10: What's Next
SSV Storage Features: Storage Domains & Access Control
- Group and subset definition
- User-controlled configuration changes to defined subsets of the system
Storage Policy:
- Storage location (what pool? where in the pool?)
- Quality of Service (bandwidth, IOPS)
- Quota management (how much pool space?)
- Utilization tracking: metrics for reporting and chargeback
SSV Storage Features (cont.)
- A Storage Domain is a subset of resources or a group of objects (e.g. Domains A, B and C sharing common resources)
- Individual users are authorized per Storage Domain
- Users can change the resources/objects inside a Storage Domain
The Storage Policy determines:
- Storage location: which disk pool(s) a domain can use
- Storage class: what types of storage (tiers) a domain is allowed to use
- Quotas: how much capacity a domain can consume
- Quality of Service: how much bandwidth a domain can consume
Storage Domains track utilization, enabling chargeback.
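The policy checks above can be sketched as an admission test. Class and field names here are hypothetical, chosen only to mirror the four policy dimensions listed on the slide.

```python
class StoragePolicy:
    """Illustrative per-domain policy; field names are assumptions."""
    def __init__(self, pools, tiers, quota_gb, max_mbps):
        self.pools = pools          # which disk pool(s) the domain can use
        self.tiers = tiers          # which storage classes (tiers) it may use
        self.quota_gb = quota_gb    # how much capacity it can consume
        self.max_mbps = max_mbps    # how much bandwidth it can consume

def admit_allocation(policy, pool, tier, used_gb, request_gb):
    """Admission check a Storage Domain would apply before carving space:
    the pool and tier must be allowed, and the quota must not be exceeded."""
    return (pool in policy.pools
            and tier in policy.tiers
            and used_gb + request_gb <= policy.quota_gb)
```

The same pattern extends naturally to the bandwidth limit, enforced at IO time rather than allocation time, and the tracked `used_gb` figure is what chargeback reporting would draw on.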
SSV Storage Features (cont.)
Continuous Data Protection (CDP):
- Long-term CDP: retention time limited only by storage capacity
- Optimized resource utilization
Rollback improvements:
- Rollbacks persist without impact to the parent Virtual Disk
- Application-consistent rollback points: mark known-good points in the history log
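Conceptually, CDP keeps a timestamped journal of writes from which any past state can be reconstructed. The sketch below is an assumed, simplified model (an in-memory dict-based log, not the product's on-disk format) showing why rollbacks need not touch the parent Virtual Disk and how known-good points fit in.

```python
class CdpHistoryLog:
    """Illustrative CDP journal: every write is logged with a timestamp,
    and application-consistent, known-good points can be marked by name."""
    def __init__(self):
        self.entries = []           # (timestamp, block, data), append-only
        self.markers = {}           # marker name -> timestamp

    def record(self, ts, block, data):
        self.entries.append((ts, block, data))

    def mark_known_good(self, name, ts):
        self.markers[name] = ts     # e.g. after quiescing the application

    def rollback_view(self, ts):
        """Block state at time ts, replayed from the log without touching
        the parent Virtual Disk, so rollbacks can persist side by side."""
        state = {}
        for t, block, data in self.entries:
            if t <= ts:
                state[block] = data
        return state
```

Retention in this model is bounded only by how many journal entries fit in storage, matching the "limited only by storage capacity" point above.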
SSV Storage Features (cont.)
Data Mobility:
- Move Virtual Disks across servers
- Single operation to relocate a Virtual Disk on the fly
- Move a Virtual Disk to another pool if performance requirements change
- Move a Virtual Disk to another node to overcome IO bottlenecks
Maintenance enhancements:
- Transition ownership of Virtual Disks between nodes
- Maximize availability in maintenance scenarios
SSV Storage Features (cont.)
Data Mobility for Maintenance:
- Disk Pools and Virtual Disks can be transitioned to another server within the server group
- Continued high availability during maintenance tasks (software upgrades, hardware repair)
- No dedicated standby node: all nodes are active at all times
- After disk pool ownership is transitioned, the alternative node continues serving IOs
Software Integration & Certifications
Supplemental activities not tied to specific releases:
- ESX 5.5 certification
- SRM 2013 certification
- Commvault Simpana backup
- Smart Deployment Wizard
- Performance Trending and Analysis App
QUESTIONS?
Contact: info@datacore.com
www.datacore.com
DataCore Confidential Information