IBM Storage Virtualization - Cloud enabling technology
Danijel Paulin (danijel.paulin@hr.ibm.com), Systems Architect, SEE, IBM Croatia
11th TF-Storage Meeting, 26-27 September 2012, Dubrovnik, Croatia
Agenda
- Introduction: virtualization function and benefits
- IBM Storage Virtualization
- Virtualization Appliance: SAN Volume Controller
- Virtual Storage Platform Management
- Integrated Infrastructure System - Cloud Ready
- Summary
Smarter Computing - a new approach to designing IT infrastructures
Smarter Computing is realized through an IT infrastructure that is designed for data, tuned to the task, and managed in the cloud.
- Greater storage efficiency & flexibility; higher utilization
- Workload systems tuning
- Virtualization: increased flexibility, foundation for cloud, better economics
Building a cloud starts with virtualizing your IT environment.
The journey to the cloud begins with virtualization!
- Virtualize: server, storage & network devices to increase utilization
- Provision & secure: automate provisioning of resources
- Monitor & manage: provide visibility of the performance of virtual machines
- Orchestrate workflow: manage the process for approval of usage
- Meter & rate: track usage of resources
IBM Virtualization Offerings
- Server virtualization: System p, System i, System z LPARs, VMware ESX, IBM Smart Business Desktop Cloud. Virtually consolidate workloads on servers.
- File and file system virtualization: Scale Out NAS (SoNAS), DFSMS, IBM General Parallel File System, N series. Virtually consolidate files in one namespace across servers.
- Storage virtualization: SAN Volume Controller (the storage hypervisor), ProtecTIER. Industry-leading storage virtualization solutions.
- Server and storage infrastructure management: data protection with Tivoli Storage Manager and TSM FastBack; advanced management of virtual environments with TPC, IBM Director VMControl, TADDM, ITM, TPM. Consolidated management of virtual and physical storage resources.
- IBM storage cloud solutions: Smart Business Storage Cloud (SoNAS), IBM SmartCloud Managed Backup. Virtualization and automation of storage capacity, data protection, and other storage services.
Virtualization functions and benefits
- Sharing (one set of resources presented as multiple virtual resources). Examples: LPARs, VMs, virtual disks, VLANs. Benefits: resource utilization, workload management, agility, energy efficiency.
- Aggregation (resources pooled into larger virtual resources). Examples: virtual disks, system pools. Benefits: management simplification, investment protection, scalability.
- Emulation (resource type X presented as virtual resource type Y). Examples: architecture emulators, iSCSI, FCoE, virtual tape. Benefits: compatibility, software investment protection, interoperability, flexibility.
- Insulation (add, replace, or change resources without changing the virtual resources). Examples: compatibility modes, CUoD, appliances. Benefits: agility, investment protection, complexity and change hiding.
What is Storage Virtualization?
- Technology that makes one set of resources look and feel like another set of resources
- A logical representation of physical resources
- Hides some of the complexity
- Adds or integrates new function with existing services
- Can be nested or applied to multiple layers of a system
What distinguishes a Storage Cloud from traditional IT?
1. Storage resources are virtualized: pooled together from multiple arrays, vendors, and datacenters, and accessed anywhere (as opposed to physical array-boundary limitations).
2. Storage services are standardized: selected from a storage service catalog (as opposed to customized configuration).
3. Storage provisioning is self-service: administrators use automation to allocate capacity from the catalog (as opposed to manual component-level provisioning).
4. Storage usage is paid per use: end users are aware of the impact of their consumption and service levels (as opposed to paid from a central IT budget).
IBM Storage Virtualization
Today's SAN: SAN-attached disks look like local disks to the OS & application.
SAN with Virtualization: virtual disks start as images of migrated non-virtual disks; later, striping, thin provisioning, etc. can be applied.
Become truly flexible! Virtual disks remain constant during physical infrastructure changes.
Enable tiered storage! Moving virtual disks between storage tiers requires no downtime.
Avoid planned downtime! Virtualization-layer upgrade or replacement with no downtime.
In-band Storage Virtualization - Benefits
- Isolation: 1. flat interoperability matrix; 2. non-disruptive migrations; 3. no-cost multipathing
- Pooling: 1. higher (pool) utilization; 2. cross-pool striping: IOPS; 3. thin provisioning: free GB
- Performance (cache + SSD): 1. performance increase; 2. hot-spot elimination; 3. adds SSD to old gear
- Mirroring: 1. license economies; 2. cross-vendor mirroring; 3. favorable TCO
Migration into Storage Virtualization (and back!): virtual disks enter in transparent image mode before being converted to fully striped. This works backwards too (no vendor lock-in).
Redundant SAN! The virtualization layer is connected across two independent fabrics (SAN A and SAN B).
Virtualization Appliance: SAN Volume Controller
Storage Hypervisor
The storage hypervisor pairs a virtual storage infrastructure (SAN Volume Controller) with virtual storage platform management (Tivoli Storage Productivity Center), managed alongside the virtual server infrastructure (VMControl, IBM Systems Director).
Virtual storage platform - SAN Volume Controller:
- Common device driver: iSCSI or FC host attach
- Common capabilities: I/O caching and cross-site cache coherency; thin provisioning; Easy Tier automated tiering to solid-state drives; snapshot (FlashCopy); mirroring (synchronous and asynchronous)
- Data mobility: transparent data migration among arrays and across tiers; snapshot and mirroring across arrays and tiers
Virtual storage platform management - Tivoli Storage Productivity Center:
- Manageability: integrated SAN-wide management with Tivoli Storage Productivity Center; integrated IBM server and storage management (Systems Director Storage Control)
- Replication: application-integrated FlashCopy; DR automation
- High availability: Stretch Cluster HA
Virtualization Appliance: SAN Volume Controller
- Stand-alone product, clustered 2-8 nodes
- Write cache mirrored within node pairs (I/O groups)
- Multi-use Fibre Channel ports, in & out
- Linux boot, 100% IBM software stack
- TCA: 1. hardware; 2. per-TB license (tiered); 3. per-TB mirroring license
6th Generation - continuous development
- Firmware is backwards compatible (64-bit firmware not available for 32-bit hardware); nodes can be replaced while online
- Current model: SAN Volume Controller CG8, firmware v6.4
Models:
- SVC 4F2: 4GB cache, 2Gb SAN (initial release; Rel. 3 / 2006)
- SVC 8F2: 8GB cache, 2Gb SAN (RoHS compliant)
- SVC 8F4: 8GB cache, 4Gb SAN; 155,000 SPC-1 IOPS
- SVC 8G4: adds dual-core processor; 272,500 SPC-1 IOPS
- SVC CF8: 24GB cache, quad-core; 380,483 SPC-1 IOPS (6 nodes)
- SVC CG8: adds 10 GbE; approx. 640,000 SPC-1-like IOPS
SVC Model & Code Release History
- 1999: Almaden Research group publishes ComPaSS clustering
- 2000: SVC ("Lodestone") development begins using ComPaSS
- 2003: SVC 1.1, 4F2 hardware, 4 nodes
- 2004: SVC 1.2, 8-node support
- 2004: SVC 2.1, 8F2 hardware
- 2005: SVC 3.1, 8F4 hardware
- 2006: SVC 4.1, Global Mirror, MTFC
- 2007: SVC 4.2, 8G4 hardware, FlashCopy enhancements
- 2008: SVC 4.3, thin provisioning, VDisk mirroring, 8A4 hardware
- 2009: SVC 5.1, CF8 hardware, SSD support, 4-site
- 2010: SVC 6.1, V7000 hardware, RAID, Easy Tier
- 2011: SVC 6.2/6.3, V7000U, 10G iSCSI, extended split cluster
- 2012: SVC 6.4, IBM Real-time Compression, FCoE, volume mobility, ...
SVC 2145-CG8 Virtualization Appliance
- Based on the IBM System x3550 M3 server (1U); Intel Xeon 5600 (Westmere) 2.53 GHz quad-core processor
- 24GB of cache; up to 192GB of cache per SVC cluster
- Four 8Gbps FC ports (supporting short-wave & long-wave SFPs); up to 32 FC ports per SVC cluster, for external storage and/or server attachment and/or Remote Copy/Mirroring
- Two 1Gbps iSCSI ports; up to 16 GbE ports per SVC cluster
- Optional 1 to 4 solid-state drives; up to 32 SSDs per SVC cluster
- Optional two 10Gbps iSCSI/FCoE ports
- New engines may be intermixed in pairs with other engines in SVC clusters; mixing engine types in a cluster results in the volume throughput characteristics of the engine type in that I/O group
- The cluster non-disruptive upgrade capability may be used to replace older engines with new CG8 engines
IBM SAN Volume Controller Architecture (diagram)
Hosts see virtual disks (here in striped mode) through a consistent driver stack. SVC nodes, each backed by a UPS (not depicted), are paired into I/O groups forming the SAN Volume Controller cluster. The cluster takes array LUNs as managed disks and groups them into storage pools.
IBM SAN Volume Controller Topology (diagram: SVC cluster)
Virtual-Disk Types
- Image mode: pass-through; virtual disk = physical LUN
- Sequential mode: virtual disk mapped sequentially to a portion of a managed disk
- Striped mode: virtual disk striped across multiple managed disks; the preferred mode (see the sketch below)
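To make striped mode concrete, here is a minimal conceptual sketch (not SVC source code) of how a byte offset on a striped virtual disk could map onto extents spread round-robin across the managed disks of a pool. The extent size and all names are illustrative assumptions.

```python
EXTENT_SIZE = 256 * 1024 * 1024  # 256 MiB, one of several selectable sizes

def striped_lookup(vdisk_offset: int, mdisks: list) -> tuple:
    """Translate a virtual-disk byte offset to (mdisk, byte offset)."""
    extent_index = vdisk_offset // EXTENT_SIZE        # which virtual extent
    offset_in_extent = vdisk_offset % EXTENT_SIZE
    mdisk = mdisks[extent_index % len(mdisks)]        # round-robin striping
    mdisk_extent = extent_index // len(mdisks)        # extent slot on that MDisk
    return mdisk, mdisk_extent * EXTENT_SIZE + offset_in_extent

# Example: a read at 1 GiB on a pool of three managed disks
print(striped_lookup(1 * 1024**3, ["mdisk0", "mdisk1", "mdisk2"]))
```

Image mode is the degenerate case of this mapping with a single managed disk and a one-to-one layout, which is what makes non-disruptive migration into and out of the virtualization layer possible.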
IBM SAN Volume Controller I/O Stack
SVC software has a modular design with a 100% in-house code path:
- Each function is implemented as an independent component; components are bypassed if not in use for a given volume
- Standard interfaces between components make it easy to add or remove components
- Components exploit a rich set of libraries and frameworks
- Minimal Linux base OS to bootstrap and hand control to user space; custom memory management & thread scheduling; optimal I/O code path (~60us)
- Clustered "support" processes such as the GUI, slpd, cimom, and Easy Tier
I/O stack components: SCSI front end, Remote Copy, Cache, FlashCopy, Mirroring, Space-Efficient, Virtualization, Easy Tier, RAID, and SCSI back end, down to internal drives or external SCSI storage.
IBM SAN Volume Controller Management Options
- SVC GUI: completely redesigned, browser-based, extremely easy to learn and use, fast
- SVC CLI: ssh scripting, complete command set (see the sketch below)
- Tivoli Storage Productivity Center: TPC, TPC-R
- SMI-S 1.3 embedded CIMOM; VDS/VSS providers; vCenter plugin; Storage Control
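As a minimal sketch of scripting the CLI over ssh: the cluster address and user below are assumptions, and `svcinfo lsvdisk -delim :` is an SVC listing command whose exact syntax should be verified against your firmware level.

```python
import subprocess

def svc_cli(command: str, cluster: str = "admin@svc-cluster.example.com") -> list:
    """Run one SVC CLI command over ssh and return its output lines."""
    result = subprocess.run(
        ["ssh", cluster, command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.splitlines()

# List volumes with colon-delimited output for easy parsing
for line in svc_cli("svcinfo lsvdisk -delim :"):
    print(line.split(":")[:3])   # id, name, I/O group (header row included)
```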
SAN Volume Controller Features
SAN Volume Controller Features - summary
- Cache partitioning; embedded SMI-S agent; easy-to-use GUI; built-in real-time performance monitoring
- E-mail, SNMP trap & syslog error/event logging; authentication service for single sign-on & LDAP
- Virtualize data without data loss; expand or shrink volumes online; online volume migration
- Thin-provisioned volumes; reclaim zero-write space; thick-to-thin, thin-to-thick & thin-to-thin migration
- Volume mirroring (two synchronized copies of a volume)
- Easy Tier: automatic relocation of hot and cold extents between SSDs and HDDs for optimized performance and throughput
- FlashCopy point-in-time copy (optional): up to 256 targets per source; a target may itself be a source; full (clone, with background copy) or partial (no background copy); space-efficient; incremental; cascaded; consistency groups; reverse
- Microsoft Virtual Disk Service & Volume Shadow Copy Service hardware provider
- Remote Copy (optional): synchronous & asynchronous remote replication (Metro Mirror / Global Mirror relationships) with consistency groups, e.g. to a consolidated DR site
- VMware: SVC Storage Replication Adaptor for Site Recovery Manager; VAAI support & vCenter Server management plug-in
Volume Mirroring - back-end high availability & migration
- SVC stores two copies of a volume, keeps both copies in sync, reads from the primary copy, and writes to both copies
- If the disk supporting one copy fails, SVC provides continuous data access using the other copy; copies are automatically resynchronized after repair
- Intended to protect critical data against failure of a disk system or disk array; a local high-availability function, not a disaster recovery function
- Copies can be split; either copy can continue as the production copy
- Either or both copies may be thin-provisioned; can be used to convert a fully allocated volume to a thin-provisioned one (thick-to-thin migration) or vice versa (thin-to-thick migration)
- Mirrored volumes use twice the physical capacity of un-mirrored volumes; the base virtualization licensed capacity must include the required physical capacity
- The user can configure the timeout for each mirrored volume. Priority on redundancy: wait until the write completes on both copies or finally times out; this has a performance impact, but active copies are always synchronized (see the sketch below)
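A conceptual sketch (not SVC code) of the "priority on redundancy" behaviour described above: every write goes to both copies, and the write completes only when both have acknowledged or the per-volume timeout expires. The copy objects, the timeout value, and the stale-marking comment are illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor, wait

pool = ThreadPoolExecutor(max_workers=2)

def mirrored_write(copy0, copy1, lba: int, data: bytes, timeout_s: float = 30.0) -> bool:
    """Write to both copies; wait for both acknowledgements or a timeout."""
    futures = [pool.submit(c.write, lba, data) for c in (copy0, copy1)]
    done, pending = wait(futures, timeout=timeout_s)
    if not done:
        raise IOError("write acknowledged by neither copy")
    # A copy still pending after the timeout would be marked stale and
    # resynchronized in the background once repaired.
    return len(pending) == 0   # True if both copies acknowledged in time
```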
IBM Easy Tier
What is Easy Tier? A function that dynamically redistributes active data across multiple tiers of storage based on workload characteristics.
- Automatic storage hierarchy: hybrid storage pool with 2 tiers = solid-state drives & hard disk drives
- The I/O Monitor keeps access history for each virtualization extent (16MiB to 2GiB per extent) every 5 minutes
- The Data Placement Adviser analyses the history every 24 hours
- The Data Migration Planner invokes data migration: promote hot extents or demote inactive extents, the goal being to reduce response time (see the sketch below)
- Users have automatic and semi-automatic extent-based placement and migration management
Why it matters:
- Solid-state storage has orders-of-magnitude better throughput and response time for random reads
- Full volume allocation to SSD benefits only a small number of volumes, portions of volumes, and use cases
- Dynamically moving the hottest extents to the highest-performance storage lets a small number of SSDs benefit the entire infrastructure
- Works with thin-provisioned volumes; optimized performance and throughput
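A conceptual sketch of the monitor-advise-migrate loop described above: count I/Os per extent in sampling windows, then periodically promote the hottest extents to the SSD tier and demote the ones that cooled off. The class, thresholds, and method names are illustrative assumptions, not IBM's actual algorithm.

```python
from collections import Counter

class EasyTierSketch:
    def __init__(self, ssd_extent_capacity: int):
        self.heat = Counter()                 # I/O Monitor: per-extent history
        self.ssd_capacity = ssd_extent_capacity

    def record_io(self, extent_id: int, ios: int = 1):
        self.heat[extent_id] += ios           # sampled every 5 minutes in SVC

    def plan_migrations(self, on_ssd: set) -> tuple:
        # Data Placement Adviser: rank extents by heat (SVC: every 24 hours)
        hottest = {e for e, _ in self.heat.most_common(self.ssd_capacity)}
        promote = hottest - on_ssd            # hot extents still on HDD
        demote = on_ssd - hottest             # cooled-off extents on SSD
        return promote, demote                # handed to the Migration Planner
```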
Thin-provisioning
- Traditional (fully allocated) virtual disks use physical disk capacity for the entire capacity of a virtual disk, even if it is not used
- With thin provisioning, SVC allocates and uses physical disk capacity only when data is written; without it, pre-allocated space is reserved whether the application uses it or not
- Available at no additional charge with the base virtualization license
- Supports all hosts supported with traditional volumes and all advanced features (Easy Tier, FlashCopy, etc.)
- Reclaiming unused disk space: when using volume mirroring to copy from a fully allocated volume to a thin-provisioned volume, SVC does not copy blocks that are all zeroes
- When processing a write request, SVC detects whether all zeroes are being written and does not allocate disk space for such requests in thin-provisioned volumes; this helps avoid space-utilization concerns when formatting volumes
- Dynamic growth: applications can grow dynamically but only consume the space they are actually using
- Done at grain level (32/64/128/256KiB): if a grain contains all zeroes, don't write it (see the sketch below)
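A conceptual sketch of the grain-level zero detection described above: physical space is allocated only when a written grain contains nonzero data. The 256 KiB grain size is one of the selectable values; the dictionary is an illustrative stand-in for SVC's allocation metadata.

```python
GRAIN_SIZE = 256 * 1024   # selectable: 32/64/128/256 KiB

class ThinVolumeSketch:
    def __init__(self):
        self.allocated = {}   # grain number -> backing data

    def write(self, grain_no: int, data: bytes):
        if data.count(0) == len(data) and grain_no not in self.allocated:
            return            # all zeroes to an unallocated grain: no space used
        self.allocated[grain_no] = data   # allocate on first real write

    def used_capacity(self) -> int:
        return len(self.allocated) * GRAIN_SIZE
```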
Copy Services
Business Continuity with SVC
Traditional SAN:
- Replication APIs differ by vendor
- Replication destination must be the same as the source
- Different multipath drivers for each array
- Lower-cost disks offer primitive, or no, replication services
SAN Volume Controller:
- Common replication API, SAN-wide, that does not change as storage hardware changes
- Common multipath driver for all arrays
- Replication targets can be on lower-cost disks, reducing the overall cost of exploiting replication services
(Diagram: FlashCopy and Metro/Global Mirror between SVC clusters fronting IBM DS5000, EMC CLARiiON, HDS AMS, IBM Storwize V7000, and HP EVA; versus vendor-specific TimeFinder/SRDF.)
Copy Services with SVC
- Volume Mirroring: mirroring outside the box; 2 close sites (<10km). Warning: there is no consistency group.
- FlashCopy: point-in-time copy outside the box; 2 close sites (<10km). Warning: this is not real-time replication.
- Metro Mirror: synchronous mirror; write I/O response time is doubled plus distance latency; no data loss; 2 close sites (<300km). Warning: production performance impact if inter-site links are unavailable, during microcode upgrades, etc.
- Global Mirror: consistent asynchronous mirror; limited impact on write I/O response time; some data loss on disaster; all write I/Os are sent to the remote site in the same order they were received on the source volumes (see the sketch below); only 1 source and 1 target volume; 2 remote sites (>300km).
Source and target can have different characteristics and be from different vendors (e.g. managed and legacy storage); source and target can be in the same cluster.
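To make the Global Mirror ordering property concrete, here is a small conceptual sketch (not SVC code) of applying replicated writes strictly in source sequence order at the remote site, even when they arrive out of order; the queue and field names are assumptions.

```python
import heapq

class OrderedReplaySketch:
    """Apply replicated writes in source order, tolerating reordered arrival."""
    def __init__(self, target):
        self.target = target
        self.next_seq = 0
        self.pending = []     # min-heap of (seq, lba, data)

    def receive(self, seq: int, lba: int, data: bytes):
        heapq.heappush(self.pending, (seq, lba, data))
        # Apply every write whose predecessors have all arrived, so the
        # target is always a consistent (if slightly stale) image.
        while self.pending and self.pending[0][0] == self.next_seq:
            _, lba, data = heapq.heappop(self.pending)
            self.target.write(lba, data)
            self.next_seq += 1
```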
Multicluster Mirroring: "any-to-any" replication among up to 4 SVC cluster instances (Datacenter 1 through Datacenter 4).
SVC split cluster solution
SVC split cluster - symmetric disk mirroring
- High availability + protection for virtual machines: one storage system, two locations
- SVC node A at one site, SVC node B at the other; LUN1 mirrored to LUN1'
- Max. 100km recommended, max. 300km supported
- Appliance functionality, not software-based, no license
SVC split cluster & VDM connectivity - below 10km using passive DWDM
- You should always have 2 SAN fabrics (A & B) and 2 switches per fabric (one on each site)
- This diagram shows connectivity to a single fabric only; in reality connectivity is to a redundant SAN fabric, so everything should be doubled
- You should always connect each SVC node in a cluster to the same SAN switches; best is to connect each SVC node to SAN fabric A switches 1 & 2 as well as SAN fabric B switches 1 & 2
- You can consider (supported, but not recommended) connecting all SVC nodes to switch 1 in SAN fabric A and to switch 2 in SAN fabric B
- To avoid fabric re-initialisation in case of link hiccups on the ISL, consider creating a virtual SAN fabric on each site and using inter-VSAN routing
(Diagram: production rooms A, B and C; an I/O group split across sites; pools 1-3; primary quorum in room C, candidate quorums in rooms A and B; LW or SW links, ISL between sites.)
SVC split cluster & VDM connectivity - up to 300km using active DWDM (enhanced!)
- A Brocade virtual fabric or a Cisco VSAN can be used to isolate public and private SANs; dedicated ISLs/trunks on the private SANs carry the SVC inter-node traffic
- You should always have 2 SAN fabrics (A & B) with at least: 2 switches per fabric (1 per site) when using Cisco VSANs or Brocade virtual fabrics to isolate private and public SANs; 4 switches per fabric (2 per site) when private and public SANs are on physically dedicated switches
- This diagram shows connectivity to fabric A only; in reality connectivity is to a redundant SAN fabric, so everything should be doubled, with connections to the B switches as well
(Diagram: production rooms A, B and C; pools 1-3; primary quorum in room C, candidate quorums in rooms A and B; LW or SW links.)
HA / Disaster Recovery with SVC Split Cluster
2-site split cluster (SVC stretched cluster):
- A stretched virtual volume spans two data centers, each with a failover server cluster; up to 300km between sites (3x EMC VPLEX)
- Improves availability, load-balances, and delivers real-time remote data access by distributing applications and their data across multiple sites
- Seamless server/storage failover when used in conjunction with server or hypervisor clustering (such as VMware or PowerVM)
4-site disaster recovery:
- For combined high availability and disaster recovery needs, synchronously or asynchronously mirror data over long distances (Metro or Global Mirror) between two high-availability stretched clusters
SVC Split Cluster Considerations
The same code is used for all inter-node communication: clustering, write-cache mirroring, Global Mirror & Metro Mirror.
Advantages:
- No manual intervention required; automatic and fast handling of storage failures
- Volumes mirrored in both locations; transparent to servers and host-based clusters
- Perfect fit in a virtualized environment (such as VMware VMotion, AIX Live Partition Mobility)
Disadvantages:
- A mix between an HA and a DR solution, but not a true DR solution
- Non-trivial implementation; involve IBM Services
Storwize V7000: mini SVC with disks
V7000 = the iPod of midrange storage
- Based on a "mini" SVC
- Delegated complexity: "auto-optimizing" Easy Tier, SSD-enabled
- Thin provisioning
- Non-IBM expansion; auto-migration
Compatibility
SVC 6.4 Supported Environments
Hosts (up to 1024): IBM z/VSE, Novell NetWare, VMware vSphere 4.1/5 (VAAI), Microsoft Windows Hyper-V, IBM Power7 (AIX, IBM i 6.1 via VIOS), Sun Solaris, HP-UX 11i, Tru64, OpenVMS, Linux (Intel/Power/zLinux: RHEL, SUSE 11), SGI IRIX, Apple Mac OS, Citrix Xen Server, IBM BladeCenter, IBM TS7650G
Functions: point-in-time copy (full volume, copy-on-write; 256 targets, incremental, cascaded, reverse, space-efficient, FlashCopy Manager); continuous copy (Metro/Global Mirror, Multiple Cluster Mirror); Easy Tier (SSD); space-efficient virtual disks; virtual disk mirroring; native iSCSI (1 or 10 Gigabit); 8Gbps SAN fabric
Supported storage: IBM DS (DS3400, DS3500, DS4000, DS5020, DS3950, DS6000, DS8000, DS8800); IBM XIV; IBM SAN Volume Controller; IBM Storwize V7000; IBM N series; DCS9550, DCS9900; TMS RamSan-620; Compellent Series 20; Hitachi (Virtual Storage Platform (VSP), Lightning, Thunder, TagmaStore AMS 2100/2300/2500, WMS, USP, USP-V); HP (3PAR, StorageWorks P9500, MA, EMA, MSA 2000, XP, EVA 6400/8400); EMC (VNX, VMAX, CLARiiON CX4-960, Symmetrix); Sun StorageTek; NetApp FAS; NEC iStorage; Bull Storeway; Fujitsu Eternus (DX60, DX80, DX90, DX410, DX8100, DX8300, DX9700, 8000 models 2000 & 1200, 4000 models 600 & 400, 3000); Pillar Axiom
Virtual Storage Platform Management
Tivoli Storage Productivity Center - TPC
What you need to manage (TPC can help):
- Servers: ESX servers; apps, DBs, file systems; volume managers; host bus adaptors; virtual HBAs; multipath drivers
- Storage networks: switches & directors; virtual devices
- Storage: multi-vendor storage; storage array provisioning; virtualization / volume mapping; block + NAS, VMFS; tape libraries
TPC 5.1 (start here):
- Single management console for heterogeneous storage; health monitoring; capacity management; provisioning; fabric management; FlashCopy support
- Storage system performance management; SAN fabric performance management; trend analysis
- DR & business continuity; applications & storage hypervisor (ESX, VIO); HyperSwap management
IBM SmartCloud Virtual Storage Center - all this and more:
- Advanced SAN planning and provisioning based on best practices; proactive configuration change management
- Performance optimization; tiering optimization; complete SAN fabric performance management
- Storage virtualization; application-aware FlashCopy management
- Replication: FlashCopy, Metro Mirror, Metro Global Mirror
TPC 5.1 Highlights
- Fully integrated, web-based GUI (based on the Storwize/XIV success)
- TCR/Cognos-based reporting & analytics
- Enhanced management for virtual environments
- Integrated installer; simplified packaging
Enhanced management for virtual environments (virtual machines clustered across hosts)
- Helps avoid double-counting storage capacity in TPC reporting on VMware
- Associates storage not only with individual VMs and hypervisors but also with the clusters
- VMotion awareness
Enhanced management for virtual environments: web-based GUI showing hypervisor-related storage (screenshot).
Integrated Infrastructure System - Cloud Ready
IBM PureSystems
Infrastructure & cloud - Integrated Infrastructure System:
- Factory integration of compute, storage, networking, and management
- Broad support for x86 and POWER environments
- Cloud-ready infrastructure
Application & cloud - Integrated Application Platform:
- Factory integration of infrastructure + middleware (DB2, WebSphere)
- Application-ready (Power or x86 with workload deployment capability)
- Cloud-ready application platform
PureFlex System is integrated by design: tightly integrated compute, storage, networking, virtualization, software, management, and security (expert integrated systems). Flexible and open choice in a fully integrated system.
IBM PureSystems - what's inside? An evolution in design, a revolution in experience.
IBM Flex System components:
- Chassis: 14 half-wide bays for nodes
- Compute nodes: Power 2S/4S*, x86 2S/4S
- Storage node: V7000; expansion inside or outside the chassis
- Management appliance
- Networking: 10/40GbE, FCoE, IB, 8/16Gb FC
- Expansion: PCIe, storage
IBM PureFlex System: pre-configured, pre-integrated infrastructure systems with compute, storage, networking, physical and virtual management, and entry cloud management, with integrated expertise.
IBM PureApplication System: pre-configured, pre-integrated platform systems with middleware, designed for transactional web applications and enabled for cloud, with integrated expertise.
Summary
Why consider Storage Virtualization?
1. The missing storage "hypervisor" for virtualized servers
2. Physical migration effort is too high
3. Compatibility chaos (multipathing, HBA firmware, ...)
4. Need for transparent campus failover, like Unix LVM
5. Need for automatic hotspot elimination ("Easy Tier")
6. Unhappy with storage performance
With SVC:
- Simplified administration, including copy services: one and the same process everywhere
- Online re-planning: flexibility is greatly enhanced; "cloud ready"
- Storage effectiveness (ongoing optimization) can be maintained over time
- Move applications up one tier as required, or down one tier when stale
- Move from performance design "in hardware" to QoS policy management
Internet Resources
- Information Center: http://publib.boulder.ibm.com/infocenter/svc/ic/index.jsp
- SVC Support Matrix: http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
- SVC / Storwize V7000 Documentation: http://publib.boulder.ibm.com/infocenter/svc/ic/index.jsp
Thank you!