PowerLinux introduction Course materials may not be reproduced in whole or in part without the prior written permission of IBM. 9.0
Unit objectives After completing this unit, you should be able to: Recognize IBM PowerLinux product offerings Evaluate PowerLinux business solutions Summarize PowerVM structure Describe I/O concepts and the required and desired settings Recognize the following terms: Partition, logical partition (LPAR), resource, hypervisor, managed system, Hardware Management Console (HMC), and virtual console
Topic 1 objectives: PowerLinux introduction After completing this topic, you should be able to: Recognize IBM PowerLinux product offerings Classify POWER7 features that benefit Linux Evaluate PowerLinux business solutions
Linux together with IBM Power Linux From humble beginnings to enterprise class offerings Open source heritage Multiple distribution choices IBM Power Built on a rich heritage Continued enhancements through generations of power processors Wide range of server options Express to high performance models
Linux on Power distributions Numerous Linux distributions have provided Power server versions. Enterprise-level distributions include Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES). http://publib.boulder.ibm.com/infocenter/lnxinfo/v3r0m0/index.jsp?topic=%2fliaam%2fliaamdistros.htm
Processor technology roadmap: A look back
POWER4 (180 nm, 2001): dual core, chip multiprocessing, distributed switch, shared L2, LPAR (32 partitions)
POWER5 (130 nm, 2004): dual core, enhanced scaling, SMT, distributed switch+, core parallelism+, FP performance+, memory bandwidth+, virtualization
POWER6 (65 nm, 2007): dual core, high frequencies, AltiVec, instruction retry, dynamic energy management, protection keys, virtualization+, memory subsystem+
POWER7 (45 nm, 2010): multi-core, on-chip eDRAM, power-optimized cores, SMT++, memory subsystem++, reliability+, VSM and VSK, protection keys++
POWER8 (future): conceptual phase
POWER7 system feature highlights Industry leading hardware performance and RAS Virtualization through PowerVM Always on Power Hypervisor Large quantity of virtual servers Memory and processor virtualization and pools Virtualized networking and storage Dynamic reconfiguration of virtual servers Relocation of virtual servers Multiple management strategies Centralized management through HMC, FSM, and IBM Systems Director Local management with IVM Extended and cloud-enabled using VMControl Energy management through Active Energy Manager
Linux across the Power Systems portfolio (sampling) Linux is available across the entire Power Systems portfolio. Red Hat and SUSE versions are consistent with x86-64, and support is available simultaneously with other platforms. Models include: Power 795, Power 780, Power 770, Power 760, Power 750, Power 740, Power 720, Power 710 / 730, and IBM Flex System p260, p460
Introducing PowerLinux servers Linux-only one-, two-, and four-socket servers: PowerLinux 7R1, 7R2, and 7R4 (new POWER7+), plus the Flex System p24L. Integrated Facility for Linux (IFL): a virtual stack Linux engine using Capacity on Demand on enterprise servers (Statement of Direction). (Diagram: PowerLinux 7R1 / 7R2 shown alongside the Power 710 / 730; PowerLinux 7R4 alongside the Power 740, 750, and 760; IFL on the enterprise Power 770, 780, and 795; the portfolio also includes the Power 720 and IBM Flex System p260, p460, p24L.)
PowerLinux servers: Side by side (PowerLinux 7R1 / 7R2 / 7R4)
Planar / form factor: 1-socket, 2U / 2-socket, 2U / 2- or 4-socket, 5U
Processor offerings (SCM): 4-core @ 3.6 GHz, 6-core @ 4.2 GHz, or 8-core @ 4.2 GHz / 16-core @ 3.6 GHz or 16-core @ 4.2 GHz / 16-core or 32-core @ 3.5 GHz or 4.0 GHz
DDR3 memory features: 4/8/16/32 GB DIMMs, 32 GB to 256 GB / 4/8/16/32 GB DIMMs, 32 GB to 512 GB / 8/16/32 GB DIMMs, 32 GB to 1024 GB
Max disk drives (system unit + I/O drawer) / storage: 270 / 243 TB (L1T) / 378 / 340 TB (L2T) / 1320 / 1,171 TB
Max PCIe 12X I/O drawers: N/A / 2 (L2T) / 4
Max PCI slots (system unit + 12X I/O drawers): 5 PCIe / 5 + 20 PCIe (L2T) / 6 + 40 PCIe
GX++ slots: one / two / two
Integrated Ethernet: required quad-port 10/100/1000 in PCIe x4 slot (7R1, 7R2); 4 @ 1 Gbps or 2 @ 10 Gbps (7R4)
I/O drawer: N/A / up to two 12X-attach I/O drawers / up to four 12X-attach I/O drawers
Max logical partitions (20 per core): 160 / 320 / 640
Redundant power/cooling: option/standard / standard/standard / standard/standard
Integrated split backplane: no / no / yes
EnergyScale: yes (all models)
Warranty: 3 years / 3 years / 1-year standard warranty + 2 additional years of 9x5 extended warranty service
IBM Reliability, Availability, Serviceability (RAS) What do Power Systems bring to the Linux community? (RAS feature: Power Systems / x86)
Application/partition RAS: Live Partition Mobility (vMotion): Yes / Yes; Live Application Mobility: Yes (AIX only) / No
System RAS: First Failure Data Capture (OS independent): Yes / No; memory keys (including OS exploitation): Yes (AIX only) / No
Processor RAS: Processor Instruction Retry: Yes / No; Alternate Processor Recovery: Yes / No; cache line delete: Yes / No; Dynamic Processor Deallocation: Yes / No; Dynamic Processor Sparing: Yes / No
Memory RAS: Chipkill: Yes / Yes; survive double memory failures: Yes / No; selective memory mirroring: Yes (Power 780/770 only) / No; redundant memory: Yes / Yes
I/O RAS: Enhanced Error Handling: Yes / No; I/O adapter isolation (PCI bus and TCEs): Yes / No
What is the market for PowerLinux: Sampling Big data InfoSphere BigInsights, InfoSphere Streams Data services DB2, Informix, InfoSphere Business application middleware WebSphere MQSeries, WebSphere Message Broker, WebSphere Enterprise Service Bus, DB2 Connect, FTP, NFS, DNS, Firewall, Proxy Development and test WAS Liberty Profile, Rational ClearCase/Quality Manager/Team Concert, IBM XL C/C++, XL Fortran, ESSL (optimized math subroutine libraries for POWER7+) Mobile Worklight, WAS Liberty Profile, IBM Mobile Portal Accelerator Social WebSphere Portal, IBM Web Content Manager Enterprise content management IBM Web Content Manager, WebSphere Portal High availability, security Tivoli System Automation for Multiplatforms, IBM Security Identity Manager
Big data analytics Stored data continues to grow, and with storage costs dropping there is an opportunity to retain more data online rather than relying on traditional archival processes. 12+ TB of tweet data every day; 30+ billion RFID tags today (1.3 billion in 2005); 5+ billion camera phones worldwide; hundreds of millions of GPS-enabled devices sold annually; 25+ TB of log data every day; 76 million smart meters in 2009, 200+ million by 2014; 2+ billion people on the Web by end of 2011
Big data: Watson and the transformational era of cognitive computing Hardware 90 x IBM Power 750 servers 2880 POWER7 3.55 GHz cores 500 GBps on-chip bandwidth 15 Terabytes of memory 500 GB of data (in memory) 10 Gb Ethernet interconnect Software SLES11 SP1 Apache Hadoop IBM DeepQA UIMA Domain knowledge Dr. David Ferrucci and the IBM Research unstructured text analytics team
Service and productivity tools Tools available to assist in management of Linux on Power Systems System Resource Controller Service tools Special libraries Performance monitoring Dynamic resource management Error log analysis Energy management And more
IBM Installation Toolkit for PowerLinux Strategic tool for delivering IBM software solutions for PowerLinux servers. Easy and comprehensive web interface for installing the RHEL and SLES distributions. Ability to automatically install a set of IBM value-added software such as performance, virtualization, energy management, and reliability, availability, and serviceability (RAS) tools. Additional software includes IBM Electronic Service Agent for PowerLinux, the pSeries Energy Management Daemon for POWER7, IBM Advance Toolchain for PowerLinux, IBM Java SDK, several IBM RAS tools, and more. Can be used for manual DVD installations as well as network installations using BOOTP/DHCP. Provides options to migrate additional data, install additional software, and create logical partitions (LPARs) on demand, based on the x86 source machine
IBM Software Development Kit (SDK) for PowerLinux Free, Eclipse-based Integrated Development Environment (IDE) The IBM SDK for PowerLinux provides you with: An all-in-one solution for developing software on PowerLinux servers Integration of important Linux and IBM tools into a single GUI environment, such as oprofile, valgrind, autotools for Linux and Feedback Directed Program Restructuring (FDPR) for IBM http://www-304.ibm.com/webapp/set2/sas/f/lopdiags/sdklop.html
OpenPOWER consortium Establishes POWER technology in the new era of cloud computing. A new open-collaboration business model that will accelerate the pace of industry innovation for cloud data center technology. Open-source hardware, software, firmware, and tools enable a new, open choice for custom development of advanced data center technology for cloud delivery. Members' resources and technology, paired with the POWER architecture, foster the creation of a new, broad ecosystem of hardware and software developers to create more powerful, scalable, and energy-efficient cloud data centers
Topic 2 objectives: PowerVM After completing this topic, you should be able to: Summarize PowerVM structure Describe I/O concepts and the required and desired settings Recognize the following terms: Partition, logical partition (LPAR), resource, hypervisor, managed system, Hardware Management Console (HMC), and virtual console Define minimum, maximum, desired settings for memory and processors
POWER virtualization Consolidate AIX, IBM i, and Linux workloads on one system Shared system resources Advanced memory management functions Live Partition Mobility with virtual servers of any size up to entire system Drive systems to over 90% utilization for maximum ROI
Virtual servers A virtual server (VS) is the allocation of system resources to create logically separate systems within the same physical footprint. Also referred to as a logical partition (LPAR). Each VS is an independent operating environment. Resources are processors, memory, and I/O slots. A virtual server exists when the isolation is implemented with firmware; it is not based on physical system building blocks, which provides configuration flexibility. (Diagram: VS 1, VS 2, and VS 3 on the same hardware.)
POWER Hypervisor The POWER Hypervisor is firmware that provides: virtual memory management (controls page table and I/O access; manages real memory addresses versus offset memory addresses), virtual console support, and security and isolation between partitions. VSs are allowed access only to resources that are allocated to them (enforced by the POWER Hypervisor). (Diagram: VS 1, VS 2, and VS 3 running on the POWER Hypervisor, above the hardware: processors, memory, I/O.)
Virtual server resources Each VS has its own: Resources: CPU, memory, I/O slots Open Firmware Console Operating system And other things expected in a standalone operating system environment Problem logs Data (libraries, objects, file systems) Performance characteristics Network identity Date and time
Dividing system resources Minimum VS configuration: 0.1 processing units if using the shared processor pool (0.05 on POWER7+), or one processor if dedicated; 128 MB of memory; access to necessary I/O devices (an adapter for the boot disk, network adapters). Smallest granularity for allocating additional resources: 0.01 processing units if shared, one processor if dedicated; one logical memory block (LMB) of memory (LMB sizes range from 16 to 256 MB); one I/O slot. Maximum number of VSs depends on system model and available resources. Examples: the maximum for our largest servers is 1024 VSs; the maximum for a system with four physical processors is 40 VSs (80 for POWER7+)
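As a rough illustration of the minimums and granularity listed above, here is a small Python sketch. The function name, structure, and return style are invented for this course material; this is not HMC code, only a model of the stated rules.

```python
# Illustrative check of the minimum shared-processor VS configuration
# rules from the slide above. Not HMC code; names are invented.

def validate_shared_vs(proc_units: float, memory_mb: int, lmb_mb: int = 256,
                       power7_plus: bool = False) -> list:
    """Return a list of configuration problems (empty list means valid)."""
    problems = []
    min_units = 0.05 if power7_plus else 0.1   # shared-pool minimum
    if proc_units < min_units:
        problems.append(f"processing units below minimum of {min_units}")
    # Additional processing units are allocated in 0.01 increments.
    if abs(proc_units - round(proc_units, 2)) > 1e-9:
        problems.append("processing units must be a multiple of 0.01")
    if memory_mb < 128:
        problems.append("memory below 128 MB minimum")
    # Memory is allocated in whole logical memory blocks (16-256 MB).
    if memory_mb % lmb_mb != 0:
        problems.append(f"memory is not a multiple of the {lmb_mb} MB LMB size")
    return problems
```

For example, `validate_shared_vs(0.1, 256)` passes all checks, while a VS requesting 0.05 units on a plain POWER7 system with 100 MB of memory fails three of them.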
Processor resources Processor allocation: Dedicated or shared processors Increments can be in 0.01 processing units (for shared) For each VS, configure: Minimum: VS will not start if this number is not available. VS can be decreased to this number if using dynamic LPAR. Desired: VS will use up to this number upon activation if available. Maximum: VS can be increased to this number if using dynamic LPAR.
Memory resources Memory allocation: in multiples of one LMB (logical memory blocks sized from 16 MB to 256 MB). For each partition, configure: Minimum: VS will not start if this amount is not available; the partition can be decreased to this amount if using dynamic LPAR. Desired: VS will use up to this amount upon activation if available. Maximum: VS can be increased to this amount if using dynamic LPAR; also used for sizing the page table for the VS
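The minimum / desired / maximum semantics described above (which apply the same way to processing units and to memory) can be sketched as a toy model. This is illustrative only; the function names and shapes are not from any IBM tool.

```python
# Toy model of partition profile minimum / desired / maximum semantics.
# Amounts could be processing units or MB of memory. Not HMC code.

def activation_amount(minimum, desired, available):
    """Amount granted at activation, or None if the VS cannot start."""
    if available < minimum:
        return None                  # VS will not start
    return min(desired, available)   # use up to the desired amount

def dlpar_change_allowed(minimum, maximum, new_amount):
    """Dynamic LPAR changes must stay within the min..max bounds."""
    return minimum <= new_amount <= maximum
```

So a partition with a 512 MB minimum and 2048 MB desired gets 2048 MB when the system has it, gets whatever is available down to 512 MB otherwise, and fails to activate below that; dynamic LPAR can later move it anywhere between minimum and maximum.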
I/O resources A VS requires a minimum of: One boot device in allocated I/O slot One network adapter in allocated I/O slot Other I/O adapters for storage and network access I/O slots can be required or desired. Allocation to VS is by individual slots. Multiple devices connected to a single SCSI adapter are allocated together.
Key virtual I/O resources Virtual SCSI: backing storage might be a logical volume, a physical volume, a file, a tape drive, or an optical media drive. Virtual Ethernet: network connectivity provided by a virtual Ethernet adapter; some Power servers provide additional virtualization options for IP traffic. Virtual Fibre Channel: N_Port ID Virtualization (NPIV); WWPN assignment to the virtual server's adapter
Partition and system profiles Partition profile: describes the resource configuration for a partition to use when it is activated (started). A partition may have more than one profile, but only one is in use at a time. System profile: a list of VSs and a profile for each. Can be used to activate a set of VSs at power-on of the server or after the server is already running, or simply to validate that there are no resource contentions for a set of VSs. Usage is optional
What is Virtual I/O Server? A special virtual server hosting physical resources (adapters) and virtual adapters. Installed and used as an appliance. Physical devices are virtualized for virtual I/O client virtual servers; a client virtual server can use both virtual and physical resources. Enables sharing of physical Ethernet adapters: this allows external access to the virtual Ethernet network; the shared Ethernet adapter (SEA) provides a bridge to the client virtual server's network. Enables sharing of physical storage adapters and devices: physical disks, logical volumes, or files (backing devices) can be shared; each is mapped to a vscsi server adapter and appears as a vscsi disk in the client. Fibre Channel adapters can be shared using NPIV, mapped as vfcp devices
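As a conceptual sketch only, the vSCSI idea above (a backing device is mapped to a vscsi server adapter and appears as a vscsi disk in the client) can be modeled like this. This is not VIOS code, and the device names used are invented.

```python
# Toy model of VIOS vSCSI mapping: backing device -> vscsi server
# adapter (vhost) -> vscsi disk seen by the client virtual server.
# Device names are illustrative, not from a real system.

class ViosModel:
    def __init__(self):
        self.mappings = {}   # vhost adapter -> list of backing devices

    def map_backing_device(self, backing_device: str, vhost: str) -> None:
        """Map a physical disk, logical volume, or file to a vhost adapter."""
        self.mappings.setdefault(vhost, []).append(backing_device)

    def client_disks(self, vhost: str) -> list:
        """Disks that a client on this vhost adapter would see."""
        return [f"vscsi-disk<-{dev}" for dev in self.mappings.get(vhost, [])]
```

The point of the model is only that the client never sees the backing device itself; it sees a generic vscsi disk regardless of whether the VIOS is backing it with a whole disk, a logical volume, or a file.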
Virtual I/O Server example The VIOS provides virtual device paths. (Diagram: the VIOS owns the physical SCSI, Ethernet, and Fibre Channel adapters and bridges them, through the POWER Hypervisor, to the vscsi, veth, and vfcp adapters of the client virtual servers.) * Virtual SCSI uses SCSI emulation. The adapter can be SAS, IDE, FC, iSCSI, and so on.
Management interfaces Hardware Management Console (HMC) Stand-alone x86 platform used for administration of multiple managed systems Appliance design Flex System Manager (FSM) Flex compute node used for Flex system administration Integrated Virtualization Manager (IVM) Component of VIOS, used for individual managed system operations Systems Director Management Console (SDMC) Appliance that combined features of HMC and IBM Systems Director No longer marketed
Hardware Management Console The Hardware Management Console (HMC) controls managed systems, such as servers, and their logical partitions. It is attached to the flexible service processor (FSP) of the managed server via a private or public network, and to each logical partition (LPAR) via a public Ethernet network
Accessing HMC applications Use the navigation bar to go to major applications. In this example, there is one managed system; manage the servers and partitions from here. Other applications: create and access system plans to automate partition configuration, configure and manage the HMC, manage serviceable events, and manage HMC and server software and firmware updates. (Screenshot: navigation bar and status bar.)
Accessing HMC views and tasks Select the server name to view its LPARs. (Screenshot: active tasks, task bar, Tasks menu, selected menu, applications, and the Expand Taskpad control.)
LPAR table view The default LPAR table view includes status, configured CPU and memory resources, active profile, and current reference code. Select a partition to access the tasks available for that partition
Opening LPAR console The HMC provides the ability to access a partition s console: Only one permitted for each LPAR at a time To access from the GUI, select a running partition and use the Console Window > Open Terminal Window task. If the window does not open because there is one already open, you can use Close Terminal Connection first.
LPAR console window A Java console provided by the HMC GUI (the window shows the LPAR name / server name). Copy and paste from the Edit menu; change font and text size with the Font menu
Accessing virtual terminals from HMC Virtual terminals (consoles) for partitions can be accessed from the HMC command line with the mkvterm or vtmenu commands.
mkvterm command example:
hmc:~> mkvterm -m managedsystem --id lparid
AIX Version 7
Copyright IBM Corporation, 1982, 2010.
Console login:
Enter ~. to end the session (you can log out first):
~.
Terminate session? [y/n]
Use the rmvterm command to remove a virtual terminal:
hmc:~> rmvterm -m managedsystem --id lparid
hmc:~> rmvterm -m managedsystem -p lparname
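As a hypothetical convenience wrapper (not an IBM-provided tool), the mkvterm/rmvterm invocations above could be assembled for remote execution over ssh. Only the flags shown on the slide (-m, --id, -p) are used; the hscroot user name is an assumption about the HMC account being used.

```python
# Hypothetical helper that builds the ssh command line for the HMC
# virtual-terminal commands shown above. Illustration only.

def vterm_command(hmc_host: str, managed_system: str, lpar_id=None,
                  lpar_name=None, remove: bool = False) -> list:
    """Build the argv list for an mkvterm (or rmvterm) call via ssh."""
    cmd = "rmvterm" if remove else "mkvterm"
    argv = ["ssh", f"hscroot@{hmc_host}", cmd, "-m", managed_system]
    if lpar_id is not None:
        argv += ["--id", str(lpar_id)]      # select the LPAR by id
    elif lpar_name is not None:
        argv += ["-p", lpar_name]           # or by partition name
    else:
        raise ValueError("need an LPAR id or name")
    return argv
```

The list could then be handed to `subprocess.run` with a terminal attached; building the argv separately keeps the flag handling testable without an HMC.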
Creating a logical partition: Review Define requirements for the partition: how much processor, memory, and I/O will you need? Create needed backing devices: will your partition be using storage from a vscsi device? What about vfcp? Create needed network paths: how will your partition access the network? A Shared Ethernet Adapter (SEA)? A Logical Host Ethernet Adapter (LHEA)? Use the HMC GUI or CLI to perform the task: will you be using the GUI or CLI to create your partition?
Checkpoint 1. True or False: The LPAR virtual console can be accessed from the HMC GUI or from the HMC command line. 2. True or False: Multiple console windows can be open for the same LPAR at the same time. 3. True or False: Different LPARs on the same managed system can be running different operating systems. 4. True or False: If an I/O slot is configured as desired in a partition profile and it is not available when the partition activates, the partition will activate successfully without that I/O slot.
Checkpoint solutions 1. True or False: The LPAR virtual console can be accessed from the HMC GUI or from the HMC command line. The answer is true. 2. True or False: Multiple console windows can be open for the same LPAR at the same time. The answer is false. 3. True or False: Different LPARs on the same managed system can be running different operating systems. The answer is true. 4. True or False: If an I/O slot is configured as desired in a partition profile and it is not available when the partition activates, the partition will activate successfully without that I/O slot. The answer is true.
Exercise introduction In this exercise, you will Create a logical partition Create a partition profile Configure resources for partition profiles View system and partition information using HMC commands Exercise!
Unit summary Having completed this unit, you should be able to: Recognize IBM PowerLinux product offerings Evaluate PowerLinux business solutions Summarize PowerVM structure Describe I/O concepts and the required and desired settings Recognize the following terms: Partition, logical partition (LPAR), resource, hypervisor, managed system, Hardware Management Console (HMC), and virtual console