DB2 PureScale Proof of Concept
PureScale and Power7 technology
Thierry Desbourdes
thierry.desbourdes@fr.ibm.com
DB2 PureScale
- Active/active cluster
- Automatic workload balancing
- On-demand provisioning
- A cluster of active DB2 nodes on Power servers
- Lock and buffer cache management inherited from z/OS
- Integrated cluster manager
- Data shared over an InfiniBand network and DB2 Cluster Services
DB2 PureScale Architecture
- "Clients connect anywhere, see single database": clients can connect to any member, with automatic load balancing
- The DB2 engines (members) run on n nodes and cooperate to deliver coherent access to the shared database
- Integrated cluster services: detection of the loss or addition of a member, with recovery automation developed in partnership with STG and Tivoli
- Shared storage access through the General Parallel File System (GPFS); each member writes its own logs to the shared disks
- Cluster interconnect: a low-latency, high-speed, RDMA-capable interconnect (InfiniBand)
- PowerHA PureScale technology (primary and secondary CS): global locking and buffer management, with synchronous duplexing ensuring high availability
- Data sharing architecture: shared access to the single database
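The automatic load balancing above can be sketched as follows. This is a hypothetical illustration, not DB2's actual client driver: the idea is that the cluster periodically sends each client a weighted member list, and new work is routed to the member advertising the most spare capacity. All member names and weights are invented for the example.

```python
# Illustrative server-list workload balancing: each member advertises a
# weight (spare capacity); a weight of 0 means the member is unavailable.
def pick_member(server_list):
    """Pick the member with the highest advertised weight."""
    # server_list: list of (member_name, weight) tuples, weight >= 0
    available = [m for m in server_list if m[1] > 0]
    if not available:
        raise RuntimeError("no available members")
    return max(available, key=lambda m: m[1])[0]

# member2 advertises the most spare capacity, member3 is down:
members = [("member0", 30), ("member1", 10), ("member2", 55), ("member3", 0)]
print(pick_member(members))  # -> member2
```

Because the weight list is refreshed by the cluster, a failed or newly added member changes routing automatically, without client reconfiguration.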
IIC Proof of Concept: hardware for a PureScale mock-up
- InfiniBand switches: IBM 7874-024 (24 ports), 7874-040 (48 ports), 7874-120 (128 ports), 7874-240 (288 ports)
- Power7 server: P770
- Storage array: DS5XXX
- SAN switch
PowerVM: a Refresher
IBM PowerVM Virtualization Features

Processor:
- Shared or dedicated LPARs
- Capped or uncapped LPARs
- Multiple shared processor pools
- Dynamic LPAR operations (add/remove)
- Shared dedicated LPARs

I/O:
- Shared and/or dedicated I/O
- Virtual Ethernet, virtual SCSI
- Dynamic LPAR operations (add/remove)
- Integrated Virtual Ethernet
- Virtual FC (N_Port ID Virtualization)
- Virtual tape support

Memory:
- Dedicated memory
- Active Memory Sharing
- Dynamic LPAR operations (add/remove)
- Active Memory Expansion (AIX 6.1)

Other:
- Integrated Virtualization Manager
- Live LPAR mobility
- Workload partitions (AIX 6.1)
- Workload partition mobility (AIX 6.1)
- Lx86 for Linux applications

(Diagram: dedicated processor LPARs alongside a shared processor pool with sub-pools A and B, hosting OS LPARs, a WPAR LPAR, and a Virtual I/O Server LPAR on top of the PowerVM Hypervisor)
N_Port ID Virtualization (Virtual FC)
N_Port ID Virtualization Simplifies Disk Management
- Multiple virtual World Wide Port Names per FC port (PCIe 8 Gb adapter)
- LPARs have direct visibility on the SAN (zoning/masking)
- I/O virtualization configuration effort is reduced

(Diagram: with the virtual SCSI model, AIX clients see generic SCSI disks served by the VIOS FC adapters; with NPIV, AIX clients see the DS8000 and HDS storage directly across the SAN)
N_Port ID Virtualization
- Virtualizes FC adapters: virtual WWPNs are attributes of the client virtual FC adapters, not of the physical adapters
- 64 WWPNs per FC port (128 per dual-port HBA)

Customer value:
- Can use existing storage management tools and techniques
- Allows common SAN managers, copy services, backup/restore, zoning, tape libraries, etc.
- Transparent use of storage functions such as SCSI-2 reserve/release and SCSI-3 persistent reserve
- Load balancing
- Allows mobility without manual management intervention

(Diagram: two VIOS partitions serving virtual FC to client LPARs running multipath software, through the Hypervisor, over physical 8 Gb FC ports into an NPIV-enabled SAN with tape)
N_Port ID Virtualization
(Diagram: client LPARs 1 and 2, each running multipath software over four virtual FC adapters fcs0..fcs3, mapped through vfchost0..vfchost3 on two VIOS partitions to physical ports fcs0/fcs1, reaching PV LUNs A and B)
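One detail worth making concrete: with NPIV, each client virtual FC adapter is assigned a *pair* of virtual WWPNs (the second is activated on the target system during a partition move). The sketch below generates such pairs; the `c0:50:76` prefix and the sequential numbering are purely illustrative, not IBM's actual allocation scheme.

```python
# Hypothetical virtual-WWPN pair generator for NPIV client adapters.
def wwpn_pairs(base, count):
    """Yield `count` (active, inactive) WWPN pairs from a 64-bit base."""
    for i in range(count):
        a, b = base + 2 * i, base + 2 * i + 1
        yield tuple(":".join(f"{n:016x}"[j:j + 2] for j in range(0, 16, 2))
                    for n in (a, b))

# With 64 WWPNs per FC port, this pairing scheme gives 32 client
# virtual FC adapters per physical port.
pairs = list(wwpn_pairs(0xC05076ABCD000000, 3))
print(pairs[0])  # ('c0:50:76:ab:cd:00:00:00', 'c0:50:76:ab:cd:00:00:01')
```

Because the WWPNs belong to the virtual adapter rather than the physical HBA, SAN zoning and LUN masking done against them survive a move of the LPAR to another server.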
PowerVM Editions with v2.2
- Express Edition (evaluations, pilots, PoCs; single-server projects): maximum of 2 VMs per server plus the VIOS
- Standard Edition (production deployments; server consolidation): 10 VMs per core (up to 1000); clustered shared storage pools*
- Enterprise Edition (multi-server deployments; cloud infrastructure): 10 VMs per core (up to 1000); clustered shared storage pools*

Features compared across editions: maximum VMs, Virtual I/O Server, PowerVM Lx86, shared processor pools, shared storage pools, thin provisioning, linked clones, Live Partition Mobility, Active Memory Sharing.
(* new functionality in the v2.2 release)
IBM Confidential
Virtual I/O Server (Classic)
(Diagram: IBM Systems Director provides centralized platform management: core mgmt, storage mgmt, inventory, config, health. Each system's hypervisor (PHYP) has its own per-system storage pool carved from SAN storage: IBM, EMC, Hitachi, SVC, and other)
Virtual I/O Server v2.2: Extending the Storage Virtualization Layer Beyond a Single System
(Diagram: IBM Systems Director management grows to include provision, clone, snap, and migrate; the hypervisors (PHYP) of multiple systems now share a single storage pool of SAN and NAS)
Mobility Solutions on Power Systems

PowerVM Live Partition Mobility:
- Movement of the OS and applications to a different server with no loss of service
- Move a running partition from one system to another with almost no impact to end users
- Requires POWER6 or POWER7 and PowerVM Enterprise Edition; all I/O must go through the VIO Server
- AIX V5.3, AIX 6, AIX 7; virtualized SAN and network infrastructure

AIX Live Application Mobility:
- Move a running WPAR (workload partition) from one AIX system to another with almost no impact to end users
- Requires AIX 6.1 and Workload Partitions Manager; NFS or SAN storage

(Diagram: workload partitions for application server, web, email, billing, and QA moving between AIX #1 and AIX #2)
Base Capabilities
- vSCSI (standard vSCSI target, including persistent reserve)
- Storage aggregation / pooling
- Thin provisioning (including notification framework)
- Thick provisioning
- Snapshot / rollback
- Consistency groups
- Linked clones (space-efficient clones)
- Storage tiering
- Multiple storage pools
- Structured / distributed namespace
- CLI from any node in the cluster
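Thin provisioning with a notification framework, as listed above, can be sketched in a few lines. This is a minimal illustrative model, not the VIOS implementation: backing blocks are taken from the pool only on first write, unwritten blocks read back as zeros, and a warning fires when pool utilisation crosses a threshold. All class and parameter names are invented.

```python
# Minimal thin-provisioning model: lazy allocation plus a usage warning.
class Pool:
    def __init__(self, capacity, threshold=0.8):
        self.capacity, self.used, self.threshold = capacity, 0, threshold

    def allocate(self):
        if self.used >= self.capacity:
            raise RuntimeError("pool exhausted")
        self.used += 1
        if self.used / self.capacity >= self.threshold:
            print(f"warning: pool {self.used}/{self.capacity} blocks used")

class ThinDisk:
    def __init__(self, virtual_blocks, pool):
        self.size, self.pool = virtual_blocks, pool
        self.blocks = {}                        # lba -> data, allocated lazily

    def write(self, lba, data):
        if lba not in self.blocks:
            self.pool.allocate()                # first touch: take a pool block
        self.blocks[lba] = data

    def read(self, lba):
        return self.blocks.get(lba, b"\0" * 512)  # unwritten blocks read as zeros

pool = Pool(capacity=10)
disk = ThinDisk(virtual_blocks=1000, pool=pool)  # 1000-block disk, 10-block pool
disk.write(42, b"x" * 512)
print(pool.used)  # -> 1: only the touched block consumes backing storage
```

The point of the notification framework is visible in `allocate`: because the virtual disk is far larger than the pool, the administrator must be warned before real capacity runs out.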
Advanced Capabilities
- Import existing storage to NextGen
- Automated provisioning (storage, AMS, hibernation)
- Live Storage Mobility
- Application-consistent snapshot framework
- Consolidated backup / restore framework
- Virtual optical
- Pool mirroring
- Storage isolation infrastructure for multi-tenancy
- Server / storage integration (accelerate/offload data operations to the SAN)
- NAS support (NAS filer on the back end)
- vSCSI device data encryption, compression, de-duplication
- Centralized management console (GUI)
NextGen Phase 1 (GA December 2010)
- vSCSI enhanced for persistent reserve
- Storage aggregation / pooling (shared storage pool)
- Thin provisioning
- CLI management
- Single node; dual (redundant) configuration option with LVM mirroring
- Max physical disks: 128
- Max virtual disks in the storage pool: 200
- Max client LPARs per VIOS: 20
2011 NextGen Release
- 10-node cluster
- 1024 virtual disks
- 40 client LPARs per VIOS (400 max clients; 200 clients with redundant VIOSs)
- 128 physical disks in the pool
- Snapshot / rollback (device and consistency group)
- Linked clones
- Live Storage Mobility
- Thick-provisioned devices
- Image management and cluster management (IBM Systems Director)
- Legacy capabilities (client LPAR mobility, LPM data mover, AMS PSP)
- Non-disruptive cluster upgrade
- 3rd-party multipathing software
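The linked clones listed above are space-efficient because a clone stores only the blocks written after the clone point and falls back to its parent image for everything else. The sketch below models that copy-on-write chain; it is illustrative only, and the names are invented, not the VIOS data structures.

```python
# Copy-on-write linked-clone model: a clone keeps only its delta blocks.
class Image:
    def __init__(self, parent=None):
        self.parent, self.delta = parent, {}   # delta: lba -> data

    def write(self, lba, data):
        self.delta[lba] = data                 # copy-on-write: parent untouched

    def read(self, lba):
        if lba in self.delta:
            return self.delta[lba]
        return self.parent.read(lba) if self.parent else b"\0"

golden = Image()                   # a "golden" base image
golden.write(0, b"boot")
clone = Image(parent=golden)       # instant, zero-copy clone of the base
clone.write(1, b"app")             # only this block is stored in the clone
print(clone.read(0), clone.read(1))
```

This is why many clones of one golden image cost little storage: each clone's footprint is its delta, which also makes snapshot and rollback cheap (drop or swap the delta).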
Contacts
- IIC: Vann LAM, vann.lam@fr.ibm.com
- SWG: Patrick DIMPRE, patrick_dimpre@fr.ibm.com
- STG: Thierry DESBOURDES, thierry.desbourdes@fr.ibm.com