IBM zSeries 800 and z/OS Reference Guide

February 2002

zSeries Overview

The IBM eServer zSeries is the first enterprise-class platform optimized to integrate business applications and to meet the critical transaction demands of the e-business world far into the twenty-first century. The IBM zSeries provides these capabilities through a totally new system design based on z/Architecture, announced in October 2000 with the IBM eServer z900. The IBM eServer z800 family of servers uses the functional characteristics of the z900 in a package that delivers excellent price/performance for those requiring zSeries functionality with a capacity entry point below that offered by the z900. The z800 offers eight General Purpose models, from a sub-uniprocessor to a 4-way, which can operate independently or as part of a Parallel Sysplex cluster. There is a Coupling Facility model and also a dedicated Linux model on which one to four engines can be enabled for the deployment of Linux solutions.

The z800 takes advantage of the robust zSeries I/O subsystem. High-speed interconnects for TCP/IP communication, known as HiperSockets, let TCP/IP traffic travel between partitions at memory speed rather than network speed. A high-performance Gigabit Ethernet feature is one of the first in the industry capable of achieving line speed: 1 Gb/sec. The availability of native FICON devices, FICON CTC and soon Fibre Channel Protocol* (FCP) can increase I/O performance, simplify and consolidate channel configurations, and help reduce the cost of ownership.

New Tools for Managing e-business

The IBM eServer product line is backed by a comprehensive suite of offerings and resources that provide value at every stage of IT implementation. These tools can help customers test possible solutions, obtain financing, plan and implement applications and middleware, manage capacity and availability, improve performance, and obtain technical support across the entire infrastructure. The result is an easier way to handle the complexities and rapid growth of e-business. In addition, IBM Global Services experts and IBM Business Partners can help with business and IT consulting, business transformation, total systems management services, as well as customized e-business solutions.

The z800 benefits from the Intelligent Resource Director (IRD) function, which directs resources to priority work. The IRD function combines the strengths of key technologies: z/OS, Workload Manager, Logical Partitioning and Parallel Sysplex clustering.

Unique to the z800 is z/OS.e, a specially priced offering of z/OS providing select function at an exceptional price. z/OS.e is intended to help customers exploit the fast-growing world of next-generation e-business by making the deployment of new applications on the z800 very attractively priced. z/OS.e uses the same code as z/OS V1R3, customized with new system parameters, and invokes an operating environment that is comparable to z/OS in service, management, reporting, and reliability. Also, z/OS.e invokes the same z800 hardware functionality as you would get from z/OS. As a result, unless otherwise specified, zSeries hardware functionality described herein is applicable to both z/OS and z/OS.e running on a z800 server. While z/OS.e is unique to the z800, all existing zSeries operating systems, such as z/OS, z/VM and Linux, are supported and will be discussed later.

[Figures: Front of z800 I/O Cage; Back of z800 I/O Cage]

*This statement represents IBM's current intent and objectives and is subject to change or withdrawal without notice.

Ease of Use and Self-Management

To help organizations deal effectively with complexity, IBM has announced Project eLiza, a blueprint for autonomic computing which will enable self-managing, self-optimizing, self-protecting and self-healing functions for systems. The goal is to use technology to manage technology, creating an intelligent, self-managing IT infrastructure that minimizes complexity and gives customers the ability to manage environments that are hundreds of times more complex and more broadly distributed than exist today. This enables increased utilization of technology without the spiraling pressure on critical skills, software and service/support costs.

Project eLiza represents a major shift in the way the industry approaches reliability, availability and serviceability (RAS). It harnesses the strengths of IBM and its partners to deliver open, standards-based servers and operating systems that are self-configuring, self-protecting, self-healing and self-optimizing. The object of Project eLiza technology is to help ensure that critical operations continue without interruption and with minimal need for operator intervention. The goal of Project eLiza is to help customers dramatically reduce the cost and complexity of their e-business infrastructures, and overcome the challenges of systems management. zSeries plays a major role in Project eLiza, since the self-management capabilities available for the zSeries will function as a model for other IBM eServer platforms, such as IBM eServer xSeries, IBM eServer iSeries and IBM eServer pSeries. zSeries servers and z/OS provide the ability to configure, connect, extend, operate and optimize the computing resources to efficiently meet the always-on demands of e-business.
One of the key functions of z/OS is the Intelligent Resource Director (IRD), an exclusive IBM technology that makes the z800/z900 the only servers capable of automatically reallocating processing power to a given application on the fly, based on the workload demands being experienced by the system at that exact moment. This advanced technology, often described as "the living, breathing server," allows the z800/z900 with z/OS to provide nearly unlimited capacity and nondisruptive scalability to z/OS and non-z/OS partitions such as Linux, according to priorities determined by the customer.

z/Architecture

The zSeries is based on z/Architecture, which is designed to eliminate bottlenecks associated with the lack of addressable memory and automatically directs resources to priority work through the Intelligent Resource Director (IRD). z/Architecture is a 64-bit superset of ESA/390. This architecture has been implemented on the zSeries to allow full 64-bit real and virtual storage support. A maximum of 32 GB of real storage is available on z800 servers. zSeries can define any LPAR as having 31-bit or 64-bit addressability.

z/Architecture has:
- 64-bit general registers.
- New 64-bit integer instructions. Most ESA/390 architecture instructions with 32-bit operands have new 64-bit and 32-to-64-bit analogs.
- 64-bit addressing for both operands and instructions, for both real addressing and virtual addressing.
- 64-bit address generation. z/Architecture provides 64-bit virtual addressing in an address space, and 64-bit real addressing.
- 64-bit control registers. z/Architecture control registers can specify regions and segments, or can force virtual addresses to be treated as real addresses.
- A prefix area expanded from 4 KB to 8 KB.
- New instructions providing quadword storage consistency.
- A 64-bit I/O architecture that allows CCW indirect data addressing to designate data addresses above 2 GB for both format-0 and format-1 CCWs.
- IEEE Floating Point architecture with twelve new instructions for 64-bit integer conversion.
- A 64-bit SIE architecture that allows a z/Architecture server to support both ESA/390 (31-bit) and z/Architecture (64-bit) guests. Zone Relocation is expanded to 64-bit for LPAR and VM/ESA.
- Use of 64-bit operands and general registers for all Cryptographic Coprocessor and Peripheral Component Interconnect Cryptographic Coprocessor (PCICC) instructions.
The implementation of 64-bit z/Architecture can eliminate bottlenecks associated with lack of addressable memory by making the addressing capability virtually unlimited (16 exabytes, up from the current capability of 2 GB).

z/Architecture Operating System Support

z/Architecture is a trimodal architecture capable of executing in 24-bit, 31-bit, or 64-bit addressing modes. Operating systems and middleware products have been modified to exploit the new capabilities of the architecture. Immediate benefit can be realized by the elimination of the overhead of Central Storage to Expanded Storage page movement, and by the relief provided for those constrained by the 2 GB real storage limit of ESA/390. Application programs will run unmodified on the zSeries.

Expanded Storage (ES) is still supported for operating systems running in ESA/390 mode (31-bit). For z/Architecture mode (64-bit), ES is supported by z/VM; it is not supported by z/OS in z/Architecture mode. Although z/OS and z/OS.e do not support Expanded Storage when running under the new architecture, all of the Hiperspace and VIO APIs, as well as the Move Page (MVPG) instruction, continue to operate in a compatible manner. There is no need to change products that use Hiperspaces.

Some of the exploiters of z/Architecture for z/OS and OS/390 Release 10 include:
- DB2 Universal Database Server for OS/390
- IMS
- Hierarchical File System (HFS)
- Virtual Storage Access Method (VSAM)
- Remote Dual Copy (XRC)
- Tape and DASD access methods

Operating System Support on z800

Operating System                           ESA/390 (31-bit)   z/Architecture (64-bit)
z/OS.e Version 1 Release 3, 4 and 5*       No                 Yes
z/OS Version 1 Release 1, 2, 3, 4 and 5*   No                 Yes
OS/390 Version 2 Release 10                Yes                Yes
OS/390 Version 2 Release 8 and 9           Yes                No
Linux for zSeries                          No                 Yes
Linux for S/390                            Yes                No
z/VM Version 4 Release 1 and 2             Yes                Yes
z/VM Version 3 Release 1                   Yes                Yes
VM/ESA Version 2 Release 4                 Yes                No
VSE/ESA Version 2 Release 4, 5, 6 and 7    Yes                No
TPF Version 4 Release 1 (ESA mode only)    Yes                No

*z/OS V1R5 is expected to be available 1H2003.
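The jump from 2 GB to 16 exabytes quoted above follows directly from the widths of the two addressing modes. A quick arithmetic sketch, with Python used purely as a calculator:

```python
# 31-bit ESA/390 addressing vs 64-bit z/Architecture addressing:
# the maximum address space is simply 2 raised to the address width.

def address_space_bytes(addr_bits):
    """Maximum addressable bytes for a given address width."""
    return 2 ** addr_bits

esa390 = address_space_bytes(31)   # ESA/390 real-storage ceiling
zarch = address_space_bytes(64)    # z/Architecture ceiling

print(esa390 == 2 * 1024**3)       # True: 2 GB
print(zarch == 16 * 1024**6)       # True: 16 exabytes
print(zarch // esa390)             # 2**33 times more addressable storage
```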

Intelligent Resource Director

Exclusive to IBM's z/Architecture is the Intelligent Resource Director (IRD), a function that optimizes processor and channel resource utilization across Logical Partitions (LPARs) based on workload priorities. IRD combines the strengths of z800/z900 LPARs, Parallel Sysplex clustering, and the z/OS Workload Manager. Intelligent Resource Director uses the concept of an LPAR cluster: the subset of z/OS systems in a Parallel Sysplex cluster that are running as LPARs on the same z800/z900 server. In a Parallel Sysplex environment, Workload Manager directs work to the appropriate resources based on business policy. With IRD, resources are directed to the priority work. Together, Parallel Sysplex technology and IRD provide flexibility and responsiveness to e-business workloads unrivaled in the industry. IRD has three major functions: LPAR CPU Management, Dynamic Channel Path Management, and Channel Subsystem Priority Queuing.

LPAR CPU Management

LPAR CPU Management allows WLM, working in goal mode, to manage the processor weighting and logical processors across an LPAR cluster. With z/OS Version 1 Release 2, WLM can even direct CPU resources outside a z/OS LPAR cluster, to an LPAR running either z/VM or Linux. CPU resources are automatically moved toward LPARs with the greatest need by adjusting the partition's weight. WLM also manages the available processors by adjusting the number of logical CPs in each LPAR. This optimizes the processor speed and multiprogramming level for each workload, reduces MP overhead, and gives z/OS more control over how CP resources are distributed to meet your business goals. z/OS V1R2 enhances the LPAR CPU management capabilities and allows the dynamic assignment of CPU resources to non-z/OS partitions such as Linux.

Dynamic Channel Path Management (DCM)

In the past, and on other architectures, I/O paths are defined with a fixed relationship between processors and devices.
With z/OS and the z800/z900, paths may be dynamically assigned to control units to reflect the I/O load. For example, in an environment where an installation normally requires four ESCON channels to several control units, but occasionally needs as many as six, system programmers must currently define all six channels to each control unit that may require them. With Dynamic Channel Path Management, the system programmer need only define the four ESCON channels to the control units, and indicate that DCM may add an additional two. As a control unit becomes more heavily used, DCM may assign additional ESCON channels, from a pool of managed channels identified by the system programmer, to that control unit. If the work shifts to other control units, DCM will unassign channels from less utilized control units and assign them to what are now the more heavily used ones. This helps reduce the requirement for more than 256 ESCON channels. DCM can also reduce the cost of the fibre infrastructure required for connectivity between multiple data centers.

[Figure: zSeries IRD scope, an LPAR cluster of z/OS, Linux, OS/390 and ICF partitions]

Channel Subsystem Priority Queuing

The notion of I/O Priority Queuing is not new; it has been in place in OS/390 for many years. With IRD, this capability is extended into the I/O channel subsystem. Now, when higher-priority workloads are running in an LPAR cluster, their I/Os will be given higher priority and will be sent to the attached I/O devices (normally disk, but also tape and network devices) ahead of I/O for lower-priority workloads. LPAR priorities are managed by WLM in goal mode.
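The queuing idea just described, in which higher-priority I/O overtakes lower-priority I/O that arrived earlier, can be sketched with an ordinary priority queue. This is illustrative Python only; the names are invented and this is not how the channel subsystem is implemented:

```python
# Toy sketch of I/O priority queuing: requests are dequeued by priority,
# with FIFO ordering preserved among requests of equal priority.
import heapq
import itertools

class IOQueue:
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()   # FIFO tie-break within a priority

    def submit(self, priority, request):
        # Lower number = higher priority, mirroring "important work goes first".
        heapq.heappush(self._heap, (priority, next(self._seq), request))

    def next_io(self):
        return heapq.heappop(self._heap)[2]

q = IOQueue()
q.submit(3, "batch read")        # low-priority workload arrives first...
q.submit(1, "online txn write")  # ...but high-priority work jumps ahead
q.submit(3, "batch write")

print(q.next_io())   # online txn write
print(q.next_io())   # batch read
print(q.next_io())   # batch write
```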

Channel Subsystem Priority Queuing provides two advantages. First, customers who did not share I/O connectivity via MIF (Multiple Image Facility), out of concern that a lower-priority I/O-intensive workload might preempt the I/O of higher-priority workloads, can now share the channels and reduce costs. Second, high-priority workloads may even benefit from improved performance if there was I/O contention with lower-priority workloads. Initially, Channel Subsystem Priority Queuing is implemented for ESCON and FICON Express channels. Channel Subsystem Priority Queuing complements the IBM Enterprise Storage Server capability to manage I/O priority across CECs.

With IRD, the combination of z/OS and the z800/z900 working in synergy extends the industry-leading workload management tradition of S/390 and OS/390 to ensure that the most important work on a server meets its goals, to increase the efficiency of existing hardware, and to reduce the amount of intervention in a constantly changing environment.

HiperSockets

[Figure: HiperSockets connectivity within a z800/z900 among z/VM, Linux and z/OS images]

HiperSockets, a feature unique to the zSeries, provides a TCP/IP network in the server that allows high-speed any-to-any connectivity among virtual servers (TCP/IP images) within a z800/z900 without any physical cabling. HiperSockets minimizes network latency and maximizes bandwidth between combinations of Linux, z/OS and z/VM virtual servers. These OS images can be first-level (directly under an LPAR) or second-level images (under z/VM). With up to four HiperSockets per LPAR connection, one could separate traffic onto different HiperSockets for security (separation of LAN traffic, no external wiretapping or monitoring) and for performance and management reasons (for example, separating sysplex traffic from Linux or non-sysplex LPAR traffic).
Since HiperSockets does not use an external network, it can free up system and network resources, eliminating attachment cost while improving availability and performance. HiperSockets can have significant value in server consolidation, by connecting LPARs running multiple Linux virtual servers under z/VM to z/OS images. Furthermore, HiperSockets will be utilized by TCP/IP in place of the CF for sysplex connectivity between images within the same server: z/OS uses HiperSockets for connectivity between sysplex images in the same server, and uses the CF for connectivity between images in different servers. Management and administration cost reductions over existing configurations are possible. HiperSockets acts like any other TCP/IP network interface, so TCP/IP features like IP Security (IPSec) in Virtual Private Networks (VPNs) and SSL can be used to provide heightened security for flows within the same CHPID. HiperSockets supports multiple frame sizes, configured on a per-HiperSockets-CHPID basis. This support gives the user the flexibility to optimize and tune each HiperSockets to the predominant traffic profile, for example distinguishing between high-bandwidth workloads such as FTP and lower-bandwidth interactive workloads. The HiperSockets function provides many possibilities for improved integration between workloads in different LPARs, bounded only by the combinations of operating systems and their respective applications. HiperSockets provides the fastest z800/z900 connection between e-business and ERP solutions sharing information while running on the same server.
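Because HiperSockets presents itself to software as just another TCP/IP interface, ordinary socket code runs over it unchanged. The sketch below uses the loopback interface to stand in for a HiperSockets connection; over real HiperSockets only the peer address would differ (the port and payload here are invented for illustration):

```python
# Plain TCP client/server pair; nothing HiperSockets-specific is needed in the
# application, which is the point: the internal network is transparent to code.
import socket
import threading

def serve(listener):
    conn, _ = listener.accept()
    conn.sendall(conn.recv(1024).upper())   # e.g. a DB2 tier answering a query
    conn.close()

# Loopback stands in for a HiperSockets interface in this sketch.
listener = socket.create_server(("127.0.0.1", 0))
port = listener.getsockname()[1]
threading.Thread(target=serve, args=(listener,), daemon=True).start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"select from orders")
    reply = client.recv(1024)

print(reply)   # b'SELECT FROM ORDERS'
```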

z800 Support for Linux

WebSphere HTTP and Web application servers or Apache HTTP servers can run in a Linux LPAR or z/VM guest machine, and can use HiperSockets for very fast TCP/IP traffic transfer to a DB2 database server LPAR running z/OS. System performance is optimized because this allows you to keep your Web and transaction application environments in close proximity to your data, and it eliminates any exposure to network-related outages, thus improving availability. The z/OS HiperSockets Accelerator function can improve performance and cost efficiencies when attaching a high number of TCP/IP images via HiperSockets to a front-end z/OS system for shared access to a set of OSA-Express adapters.

Linux on zSeries

Linux and zSeries make a great team. The flexibility and openness of Linux bring with them access to a very large portfolio of applications. zSeries incorporates the qualities of service that deliver an industrial-strength environment for these Linux applications. In addition, zSeries enables massive scalability within a single server. Hundreds of Linux images can run simultaneously, providing unique server consolidation capabilities and reducing both cost and complexity. Of course, no matter which Linux applications are brought to the zSeries platform, they all benefit from high-speed access to the corporate data that typically resides on zSeries. Linux is Linux, independent of the platform on which it runs. To enable Linux to run on the S/390 and zSeries platforms, IBM has developed and provided a series of patches, and IBM continues to support the Open Source community. Linux for zSeries supports the 64-bit architecture available on zSeries processors. This architecture eliminates the existing main storage limitation of 2 GB. Linux for zSeries provides full exploitation of the architecture in both real and virtual modes. Linux for zSeries is based on the Linux 2.4 kernel.
Linux for S/390 is also able to execute on zSeries and S/390 servers in 31-bit mode.

IBM Software

Connectors
- DB2 Connect, Version 7.1
- MQSeries Client Version 5.2
- CICS Transaction Gateway Version 4.0
- IMS Connect Version 7

WebSphere Family
- WebSphere Application Server Advanced Edition 4.0, including Java Development Kit and JIT
- WebSphere Commerce Suite, Version 5.1
- WebSphere Personalization Version 3.5
- WebSphere Host On-Demand

Data Management
- DB2 Universal Database Version 7.2
- DB2 Connect Unlimited Edition Version 2
- DB2 Connect Web Starter Kit Version 7.2
- DB2 Intelligent Miner Scoring Version 7.2
- DB2 Net Search Extender Version 7.2

Tivoli
- Tivoli Storage Manager Client Version 4.2
- Tivoli Enterprise Console
- Tivoli Software Distribution 4.0
- Tivoli Distributed Monitoring 4.1
- Tivoli Workload Scheduler 8.1

Linux Distribution Partners
- SuSE Linux Enterprise Server 7 for S/390 and zSeries: product information at suse.de/en/produkte/susesoft/s390/
- Turbolinux Server 6 for zSeries and S/390: product information at turbolinux.com/products/s390
- Red Hat Linux 7.2 for S/390: product information at redhat.com/software/s390

z/VM Version 4

z/VM enables large-scale horizontal growth of Linux images on zSeries. Only z/VM gives the capability of running hundreds of Linux for zSeries or Linux for S/390 images. This version of z/VM is priced on a per-engine basis (one-time charge and annual maintenance fee) and supports IBM Integrated Facility for Linux (IFL) processor features for Linux-based workloads, and standard engines for all other zSeries and S/390 workloads.

IBM S/390 Integrated Facility for Linux (IFL)

This optional feature provides a way to add processing capacity, exclusively for Linux workload, with no effect on the model designation. No traditional zSeries workload can run in this area. Consequently, these engines do not affect the IBM S/390 and zSeries software charges for workload running on the other engines in the system. IFL engines can run in conjunction with Integrated Coupling Facility engines and General Purpose engines on the General Purpose z800 models. The Dedicated Linux Model has one to four IFLs exclusively.

OSA-Express Gigabit Ethernet for Linux

Driver support is provided for the functions of the OSA-Express Gigabit Ethernet feature. This driver supports the IPv4 protocol, delivering the advantages of more rapid communication across a network.
This improvement may be between virtual Linux instances on a single machine (either in LPAR or virtual mode) communicating across a network, or a Linux for zSeries or Linux for S/390 instance communicating with another physical system across a network.

HiperSockets for Linux

HiperSockets can be used for communication between Linux images, and between Linux images and z/OS images. Linux can run under z/VM, natively, or in an LPAR, independent of whether the engine is an IFL or a standard engine.

Cryptographic Support for Linux

Linux on zSeries running on standard z800/z900 engines or IFLs is capable of exploiting the hardware cryptographic feature provided by the PCICA card for SSL acceleration. This enables customers implementing e-business applications on Linux on zSeries to utilize the enhanced security of the hardware.

Linux Support Environment
- zSeries or S/390 single image
- zSeries or S/390 LPAR
- VM/ESA or z/VM guest

Block devices
- VM minidisks
- ECKD 3380 or 3390 DASDs
- VM virtual disk in storage

Network devices
- Virtual CTC
- ESCON CTC
- OSA-Express (Gigabit Ethernet, Ethernet, Fast Ethernet, Token Ring)
- HiperSockets
- 3172
- IUCV

Character devices
- 3215 console
- Integrated console

Additional information is available at ibm.com/linux/ and ibm.com/zseries/linux/.

Parallel Sysplex Cluster Technology

Parallel Sysplex clustering was designed to bring the power of parallel processing to business-critical zSeries and S/390 applications. A Parallel Sysplex cluster consists of up to 32 z/OS and/or OS/390 images coupled to one or more Coupling Facilities (CFs or ICFs) using high-speed specialized links for communication. The Coupling Facilities, at the heart of the Parallel Sysplex cluster, enable high-speed read/write data sharing and resource sharing among all the z/OS and OS/390 images in a cluster. All images are also connected to a Sysplex Timer to ensure time synchronization.

Parallel Sysplex Resource Sharing enables multiple system resources to be managed as a single logical resource shared among all of the images. Some examples of resource sharing include Automatic Tape Sharing, GRS star, and Enhanced Catalog Sharing, all of which provide simplified systems management, increased performance and/or scalability. For more detail, please see the S/390 Value of Resource Sharing white paper, GF , on the Parallel Sysplex home page at ibm.com/servers/eservers/zseries/pso. Although there is significant value in single-footprint and multi-footprint environments with resource sharing, those customers looking for high availability must move on to a database data sharing configuration. In a Parallel Sysplex environment, combined with the Workload Manager and CICS TS or IMS TM, incoming work can be dynamically routed to the z/OS or OS/390 image most capable of handling the work. This dynamic workload balancing, along with the capability to have read/write access to data from anywhere in the Parallel Sysplex cluster, provides the scalability and availability that businesses demand today. When configured properly, a Parallel Sysplex cluster has no single point of failure and can provide customers with near-continuous application availability over planned and unplanned outages.
For detailed information on IBM's Parallel Sysplex technology, visit the Parallel Sysplex home page at ibm.com/servers/eservers/zseries/pso. The IBM 9037 Sysplex Timer provides a common time reference to all images, which assists in managing the cluster of multiple footprints as a single operational image. The common time source also enables proper sequencing and time stamping of updates to shared databases, a feature critical to recoverability of the shared data.

Coupling Facility Configuration Alternatives

IBM offers different options for configuring a functioning Coupling Facility:

Standalone Coupling Facility: The z800 Model 0CF, z900 Model 100, and 9672 Model R06 provide a physically isolated, totally independent CF environment. There are no software charges associated with standalone CF models. An ICF or CF partition sharing a server with z/OS, z/OS.e, or OS/390 images not in the sysplex acts like a logical standalone CF.

Internal Coupling Facility (ICF): Customers considering clustering technology can get started with Parallel Sysplex technology at a lower cost. An ICF feature is a processor that can only run Parallel Sysplex coupling code (CFCC) in a partition. Since CF LPARs on ICFs are restricted to running only Parallel Sysplex coupling code, there are no software charges associated with ICFs. ICFs are ideal for Intelligent Resource Director and resource sharing environments.

Coupling Facility partition on a z800/z900 or 9672 server using standard LPAR: A CF can be configured to run in either a dedicated or shared CP partition. Software charges apply. This may be a good alternative for test configurations that require very little CF processing resource, or for providing hot-standby CF backup using the Dynamic Coupling Facility Dispatching function.
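The Dynamic Coupling Facility Dispatching function just mentioned lets a hot-standby CF partition consume almost no processor while idle: the partition sleeps when it has no requests to service, and the longer it stays idle, the longer it sleeps. A toy sketch of that sleep escalation, with invented constants; this is not the actual CFCC algorithm:

```python
# Toy model of Dynamic CF Dispatch sleep escalation. All numbers are invented
# for illustration; the real algorithm's parameters are internal to CFCC.

BASE_SLEEP_MS = 1    # assumed shortest sleep interval
MAX_SLEEP_MS = 64    # assumed cap on the back-off

def next_sleep(current_sleep_ms, had_requests):
    """Reset to the base interval when work arrives; otherwise back off."""
    if had_requests:
        return BASE_SLEEP_MS
    return min(current_sleep_ms * 2, MAX_SLEEP_MS)

sleep = BASE_SLEEP_MS
history = []
for had_requests in [False, False, False, True, False]:
    sleep = next_sleep(sleep, had_requests)
    history.append(sleep)

print(history)   # [2, 4, 8, 1, 2]: sleeps grow while idle, reset when work arrives
```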

A Coupling Facility can be configured to take advantage of a combination of different Parallel Sysplex capabilities:

Dynamic CF Dispatch: Prior to the availability of the Dynamic CF Dispatch algorithm, shared CF partitions could only use the active wait algorithm. With active wait, a CF partition uses all of its allotted timeslice, whether it has any requests to service or not. The optional Dynamic CF Dispatch algorithm puts a CF partition to sleep when there are no requests to service; the longer there are no requests, the longer the partition sleeps. Although less responsive than the active wait algorithm, Dynamic CF Dispatch conserves CP or ICF resources when a CF partition has no work to process, and makes those resources available to other partitions sharing them. Dynamic CF Dispatch can be used for test CFs and also for creating a hot-standby partition to back up an active CF.

Dynamic ICF Expansion: Dynamic ICF Expansion provides extra CF capacity when there are unexpected peaks in the workload, or in case of loss of CF capacity in the cluster.

ICF Expansion into shared CPs: A CF partition running with dedicated ICFs that needs processing capacity beyond what is available with the dedicated ICFs can grow into the shared pool of application CPs being used to execute applications on the same server.

ICF Expansion into shared ICFs: A CF partition running with dedicated ICFs can grow into the shared pool of ICFs if the dedicated ICF capacity is not sufficient. The resulting partition, an L-shaped LPAR, will be composed of both shared and dedicated ICF processors, enabling more efficient utilization of ICF resources across the various CF LPARs.

System-Managed CF Structure Duplexing

System-Managed Coupling Facility (CF) Structure Duplexing provides a general-purpose, hardware-assisted, easy-to-exploit mechanism for duplexing CF structure data.
This provides a robust recovery mechanism for failures such as loss of a single structure or CF, or loss of connectivity to a single CF, through rapid failover to the other structure instance of the duplex pair. Benefits of System-Managed CF Structure Duplexing include:

Availability: Faster recovery of structures by having the data already there in the second CF. Furthermore, if a potential IBM, vendor or customer CF exploitation implementation were being prevented by the effort of providing an alternative recovery mechanism such as structure rebuild, log recovery, etc., that constraint might be removed by the much simpler exploitation requirements for system-managed duplexing.

Manageability and Usability: A consistent procedure to set up and manage structures across multiple exploiters.

Reliability: A common framework requires less effort on behalf of the exploiters, resulting in more reliable subsystem code.

Cost Benefits: Enables the use of non-standalone CFs (e.g. ICFs) for all resource sharing and data sharing environments.

Flexibility.

[Figure: an example of two systems, a z800 and a z900, in a Parallel Sysplex with CF duplexing; each has an ICF and z/OS and is connected to the sysplex timers. The diagram represents creation of a duplexed copy of the structure within a System-Managed CF Duplexing configuration.]

To understand which of the options and capabilities discussed above are suitable for your environment, please review GF , Coupling Facility Configuration Options: A Positioning Paper, at ibm.com/servers/eservers/zseries/library/whitepapers/gf html.
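The failover behavior described above (the data is already in the second CF, so recovery is a switch rather than a rebuild) can be sketched as follows. This is an illustrative Python model with invented names, not CFCC or z/OS code:

```python
# Toy model of a duplexed CF structure: every write lands in both instances,
# so losing one instance costs a failover, not a recovery.

class CFStructure:
    """One structure instance in one Coupling Facility."""
    def __init__(self):
        self.data = {}
        self.lost = False   # simulates loss of the CF or of connectivity to it

class DuplexedStructure:
    """Writes go to both instances; reads fail over if the primary is lost."""
    def __init__(self, primary, secondary):
        self.primary = primary
        self.secondary = secondary

    def write(self, key, value):
        # Duplexing: the update lands in both CFs before the write completes.
        self.primary.data[key] = value
        self.secondary.data[key] = value

    def read(self, key):
        if not self.primary.lost:
            return self.primary.data[key]
        # Rapid failover: the data is already there in the second CF.
        return self.secondary.data[key]

cf_a, cf_b = CFStructure(), CFStructure()
locks = DuplexedStructure(cf_a, cf_b)
locks.write("lock:PARTS", "SYSA")
cf_a.lost = True                 # lose connectivity to one CF
print(locks.read("lock:PARTS"))  # SYSA, served from the surviving instance
```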

Parallel Sysplex Coupling Connectivity

The Coupling Facilities communicate with z/OS and OS/390 images in the Parallel Sysplex environment over specialized high-speed links. For availability purposes, it is recommended that there be at least two links connecting each z/OS or OS/390 image to each CF in a Parallel Sysplex cluster. As processor performance increases, it is important to also use faster links so that link performance does not become constrained. The performance, availability and distance requirements of a Parallel Sysplex environment are the key factors that identify the appropriate connectivity option for a given configuration.

Parallel Sysplex coupling links on the zSeries have been enhanced with the introduction of Peer Mode. When connecting a zSeries server to a zSeries CF, the links can be configured to operate in Peer Mode. This allows for higher data transfer rates to and from the Coupling Facilities. In Peer Mode, the fiber-optic single-mode coupling link (ISC3) provides 2 Gb/s capacity, the ICB3 link 1 GB/s peak capacity, and the IC3 link 1.25 GB/s capacity. Additional Peer Mode benefits are obtained by using MIF to enable the link to be shared between z/OS (or OS/390) and CF LPARs. The link acts simultaneously as both a CF Sender and a CF Receiver link, reducing the number of links required. Larger data buffers and improved protocols also improve long-distance performance. For connectivity to 9672s, z800/z900 ISC3 links can be configured to run in Compatibility Mode with the same characteristics as ISC links on the 9672. All of the coupling link speeds below are theoretical maximums.

Connectivity Options: Theoretical Maximum Coupling Link Speed

                  z900/z800 ISC3       z900 ICB    z900/z800 ICB3
G3-G6 ISC         1 Gb/s               n/a         n/a
z900/z800 ISC3    2 Gb/s (Peer Mode)   n/a         n/a
G5-G6 ICB         n/a                  333 MB/s    n/a
z900/z800 ICB3    n/a                  n/a         1 GB/s (Peer Mode)

ISC3: InterSystem Coupling 3rd Generation channels provide the connectivity required for data sharing between the Coupling Facility and the systems directly attached to it. ISC3 channels are point-to-point connections that require a unique channel definition at each end of the channel. ISC3 channels operating in Peer Mode provide connection between z800/z900 general purpose models and z800/z900 coupling facilities. ISC3 channels operating in Compatibility Mode provide a long-distance connection between z800/z900 systems and HiPerLink (ISC) channels on 9672 models. A four-port ISC3 card structure is provided on the z800/z900 family of processors. It consists of a mother card with two daughter cards, which have two ports each. Each daughter card is capable of operating at 1 Gb/s in Compatibility Mode or 2 Gb/s in native mode, up to a distance of 10 km. From 10 to 20 km, an RPQ is available which runs at 1 Gb/s in both Peer and Compatibility Modes. The mode is selected for each port via CHPID type in the IOCDS. The ports are activated in one-port increments.

HiPerLinks: HiPerLinks, based on single-mode CF links, are available on 9672s and 9674s only. ISC3 links replace HiPerLinks on z800/z900 models.

ICB: The Integrated Cluster Bus is used to provide high-speed coupling communication between 9672 G5/G6 and/or z900 servers over short distances (~7 meters). For longer distances, ISC links must be used. Up to eight ICB links (16 possible via RPQ) are available on the general purpose z900 models, and up to 16 ICB links are available on the z900 Coupling Facility Model 100.

ICB3: The Integrated Cluster Bus 3rd Generation is used to provide high-speed coupling communication between two z800/z900 systems over short distances (~7 meters). For longer distances, ISC3 links must be used. Up to 16 ICB3 links are available on both the general purpose z900 models and the z900 Coupling Facility Model 100.
Up to five ICB3 links are available for the z800 general purpose models and up to six on the z800 0CF model. The performance of the ICB3 link has been improved by higher data rates and new buffering capabilities. 11
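The table above mixes Gb/s (gigabits) and GB/s (gigabytes), which is easy to misread. As an illustrative aside (not part of the original guide), a few lines of Python normalize the quoted theoretical maximums to a common MB/s scale, assuming decimal units and 8 bits per byte:

```python
# Normalize the quoted coupling-link peak speeds to MB/s so the
# Gb/s (gigabits) and GB/s (gigabytes) figures can be compared directly.
# Values are the theoretical maximums from the table above.
LINK_SPEEDS = {
    "G3-G6 ISC (Compatibility)": ("Gb/s", 1.0),
    "z900/z800 ISC3 (Peer)":     ("Gb/s", 2.0),
    "G5-G6 ICB":                 ("MB/s", 333.0),
    "z900/z800 ICB3 (Peer)":     ("GB/s", 1.0),
}

def to_mb_per_s(unit: str, value: float) -> float:
    """Convert a quoted link speed to MB/s (1 GB/s = 1000 MB/s, 8 bits per byte)."""
    if unit == "Gb/s":   # gigabits per second
        return value * 1000 / 8
    if unit == "GB/s":   # gigabytes per second
        return value * 1000
    return value         # already MB/s

for name, (unit, value) in LINK_SPEEDS.items():
    print(f"{name:28s} {to_mb_per_s(unit, value):7.0f} MB/s")
```

On this scale the 2 Gb/s ISC3 Peer Mode link works out to 250 MB/s, which makes clear why the 1 GB/s ICB3 link is the faster option over short distances.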

IC3. The Internal Coupling 3rd Generation (IC3) channel emulates coupling links between images within a single server. No hardware is required; however, a minimum of two CHPID numbers must be defined in the IOCDS. IC3 links provide the fastest Parallel Sysplex connectivity. Up to 32 ICs are available on z800/z900 models.

IBM provides extensive services to assist customers with migrating their environments and applications to benefit from Parallel Sysplex clustering. A basic set of IBM services is designed to help address planning and early implementation requirements. These services can reduce the time and costs of planning a Parallel Sysplex environment and moving it into production. An advanced optional package of services is also available and includes data sharing application enablement, project management and business consultation through advanced capacity planning and application stress testing. For more information on Parallel Sysplex professional services, visit IBM's Web site at ibm.com/servers/eserver/zseries/pso/services.html

Geographically Dispersed Parallel Sysplex

The Geographically Dispersed Parallel Sysplex (GDPS) complements a multisite Parallel Sysplex environment by providing a single, automated solution to dynamically manage storage subsystem mirroring, processors, and network resources, allowing a business to attain continuous availability and near transparent business continuity/disaster recovery without data loss. GDPS provides the ability to perform a controlled site switch for both planned and unplanned site outages, while maintaining full data integrity across multiple storage subsystems. GDPS requires Tivoli NetView for OS/390, System Automation for OS/390, and remote copy technologies. GDPS supports both the synchronous Peer-to-Peer Remote Copy (PPRC) and the asynchronous Extended Remote Copy (XRC) forms of remote copy.
GDPS/PPRC is a continuous availability and near transparent business continuity/disaster recovery solution that allows a customer to meet a Recovery Time Objective (RTO) of less than an hour and a Recovery Point Objective (RPO) of no data loss, and protects against metropolitan-area disasters (up to 40 km between sites). GDPS/XRC is a business continuity/disaster recovery solution that allows a customer to meet an RTO of one to two hours and an RPO of less than a minute, and protects against metropolitan as well as regional disasters, since the distance between sites is unlimited. XRC can use either common communication links and channel extender technology between sites, or dark fiber. Note: Dark fiber refers to dedicated strands of fiber-optic cable with no electronics between the ends (source and destination).

[Figure: Continuous availability recommended configuration — two z/OS images, one with an Internal Coupling Facility (ICF), connected via IC links and ESCON/FICON Express to a dedicated (external) Coupling Facility, shown as a z800 0CF. The external Coupling Facility could also be a z900 Model 100, 9672 R06, 9674 C04 or C05.]

Geographically Dispersed Parallel Sysplex support for Peer-to-Peer Virtual Tape Server (PtP VTS): The GDPS solution has been extended to include tape data in its management of data consistency and integrity across sites with the announced support of the Peer-to-Peer VTS configuration (IBM United States Hardware Announcement). The PtP VTS provides a hardware-based duplex tape solution, and GDPS automatically manages the duplexed tapes in the event of a planned site switch or a site failure. At the present time, the GDPS PtP support is only available for a GDPS/PPRC (Peer-to-Peer Remote Copy) configuration.

A new I/O VTS selection option is provided especially for use with GDPS, so that all virtual volumes are processed from a primary VTS and a copy is stored on the secondary VTS. Control capability has been added to allow GDPS to freeze copy operations, so that tape data consistency can be maintained across GDPS-managed sites during a switch between the primary and secondary VTSs. Synchronization of system data sets such as catalogs, the tape control database, and tape management databases is also provided with the PtP VTS after an emergency switchover.

Operational data, data that is used directly by applications supporting end users, is normally found on disk. For the past several years, GDPS has provided continuous availability and near transparent business continuity for disk-resident data. However, there is another category of data that supports the operational data, which is typically found on tape subsystems. Support data typically covers migrated data, point-in-time backups, archive data, etc. For sustained operation in the failover site, the support data is indispensable. Furthermore, several enterprises have mission-critical data that resides only on tape. By extending GDPS support to data resident on tape, the GDPS solution provides continuous availability and near transparent business continuity benefits for both disk- and tape-resident data. Enterprises will no longer be forced to develop and utilize processes that create duplex tapes and maintain the tape copies in alternate sites. For example, previous techniques created two copies of each DBMS image copy and archived log as part of the batch process, with manual transportation of each set of tapes to different locations.

Automatic Enablement of CBU for Geographically Dispersed Parallel Sysplex

The intent of GDPS CBU support is to enable automatic management of the reserved PUs provided by the CBU feature in the event of a processor failure and/or a site failure.
Upon detection of a site failure or planned disaster test, GDPS will dynamically add PUs to the configuration in the takeover site to restore processing power for mission-critical production workloads. GDPS is discussed in detail in two white papers which are available at ibm.com/server/eserver/zseries/pso/library.html. GDPS is a service offering of IBM Global Services. For IBM Installation Services for GDPS, refer to the IBM Web site.

Key attributes include:
- Fast, automatic recovery:
  - CF: rebuild in surviving CF
  - Central Electronic Complex (CEC), z/OS, OS/390: restart subsystems on surviving image
  - TM/DBMS: restart in place
- Surviving components absorb new work
- No service loss for planned or unplanned outages
- Near unlimited, plug-and-play growth capacity

For additional information, see the Five Nines/Five Minutes: Achieving Near Continuous Availability white paper at ibm.com/s390/pso/library.
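As an illustrative recap of the two GDPS configurations described above, the sketch below encodes their quoted recovery objectives (RTO, RPO and supported site distance) and picks the first option meeting a stated requirement. The function name and the encoding are assumptions for illustration only, not part of the GDPS offering:

```python
from typing import Optional

# Quoted recovery objectives for the two GDPS configurations described above:
# (name, max RTO in minutes, max RPO in minutes, max site distance in km;
#  None means unlimited distance).
GDPS_OPTIONS = [
    ("GDPS/PPRC", 60, 0, 40),     # synchronous: <1h RTO, no data loss, up to 40 km
    ("GDPS/XRC", 120, 1, None),   # asynchronous: 1-2h RTO, <1 min RPO, any distance
]

def pick_gdps(rto_min: float, rpo_min: float, distance_km: float) -> Optional[str]:
    """Return the first option whose quoted objectives meet the requirement."""
    for name, max_rto, max_rpo, max_dist in GDPS_OPTIONS:
        if (max_rto <= rto_min and max_rpo <= rpo_min
                and (max_dist is None or distance_km <= max_dist)):
            return name
    return None

print(pick_gdps(rto_min=60, rpo_min=0, distance_km=30))    # GDPS/PPRC
print(pick_gdps(rto_min=120, rpo_min=1, distance_km=500))  # GDPS/XRC
```

The sketch captures the basic trade-off in the text: synchronous PPRC gives zero data loss but is distance-limited, while asynchronous XRC trades a small RPO for unlimited distance.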

IBM ^ zSeries 800

IBM's zSeries is the enterprise-class e-business server optimized for the integration, transactions and data of the next generation e-business world. In implementing the z/Architecture with new technology solutions, the z800/z900 servers are designed to facilitate IT business transformation and reduce the stress of business-to-business and business-to-customer growth pressure. The zSeries represents a new generation of servers that feature enhanced performance, support for S/390 Parallel Sysplex clustering, improved hardware management controls and innovative functions to address e-business processing.

The I/O subsystem includes Dynamic Channel Path Management (DCM) and Channel CHPID Assignment. These two functions effectively increase the number of CHPIDs that can be used for I/O connectivity. DCM allows channel paths to be dynamically and automatically moved from less utilized devices to constrained devices under the supervision of the Workload Manager. Channel CHPID Assignment permits the assignment of a CHPID to any physical port, allowing the assignment of up to 256 CHPIDs to usable channel paths. Combined, these two functions allow full exploitation of the I/O bandwidth. Design and technology advances also include the FIbre CONnectivity (FICON) Express channel card, used for high-speed communication to fibre-attached devices. These are some of the significant enhancements in the zSeries that bring improved performance, availability and function to the platform. The following sections highlight the functions and features of the hardware platform.

z800 Design and Technology

The z800 is designed to be a leading server for e-business. It utilizes the z/Architecture and state-of-the-art technology to provide balanced processor, memory and I/O performance. The z800 design is a derivative of the z900; almost all z900 functions are supported on the z800.
The z800 scales to about 20% of a z900 and extends the zSeries to small and medium size customers and workloads. Its balanced design allows efficient scaling while utilizing the most advanced zSeries functions such as 64-bit enablement, the Intelligent Resource Director and HiperSockets. The heart of the z800 is the Basic Processor Unit (BPU), which contains the memory and processor subsystems. Key parts in the BPU package are an advanced Multichip Module (MCM), Memory Controller (MC) modules, memory DIMMs, Key Store DIMMs, and optional CMOS Cryptographic modules.

[Figure: Actual size (71 mm x 71 mm) of the z800 MCM used in all models, showing the PU0-PU4, SD0/SD1, SC0, MBA and CLK chips. Note: PU chips can be enabled as CPs, ICFs, IFLs, SAPs or spares by microcode.]

The z800 MCM contains 10 chips: five Processor Units (PUs), one Memory Bus Adapter (MBA), two Storage Data (SD), one Storage Control (SC) and one clock (CLK). The z800 MCM can scale up to four Central Processors. The z800 has 8 MB of on-module L2 cache compared to the z900's 16 or 32 MB of L2 cache. The fast L2 cache on the MCM, next to the Processor Units, enables a powerful 4-way multiprocessor which can dynamically share data between processors.

The PU used in the z800 is the same as in the z900. It has 64-bit capabilities and an improved compression engine compared to 9672 Generation 6 servers, and 512 KB of on-chip (L1) cache. Processor Units can be configured as Central Processors, a dedicated I/O processor (System Assist Processor), Integrated Coupling Facilities, Integrated Facility for Linux engines, or as spares which can be enabled for emergencies or permanent upgrades. The processor unit uses IBM's most advanced CMOS technology, CMOS 8SE. This technology utilizes copper interconnects, Silicon On Insulator (SOI) and low-k dielectric innovations to provide the density and speed required for high performance, advanced 64-bit function and mainframe reliability. (All models, except those which utilize all PUs, can transparently spare a PU.) The cycle time of the z800's processors is 1.6 ns, compared to the z900's 1.3 ns. The zSeries processors are significantly faster than the G5-based Multiprise 3000, which has a cycle time of 2.4 ns; this 33% shorter cycle time benefits traditional CPs, ICFs and IFLs alike.

Base Processor Unit

The z800 has a storage capacity of up to 32 GB. The entry configuration of 8 GB can be scaled in 8 GB increments to 16 GB, 24 GB and 32 GB. Dense 256 Mb Double Data Rate (DDR) memory chips are used in banks of DIMMs. Memory is protected by design and built-in Error Checking and Correcting circuitry. The MC chip is an integrated memory controller.
The z800's I/O subsystem has been sized to provide sufficient connectivity for even the most demanding applications by providing 16 slots for channels and network adapters. The z800 supports only current, high-bandwidth technology: it does not support parallel channels or the first and second generations of Open Systems Adapters. OSA-Express and FICON Express cards utilize the latest technology to provide the best performance for channel and network attachments. Up to 240 ESCON channels are available utilizing the 16-port ESCON card design. Eight PCICC and six PCICA cards can also be plugged into the I/O slots for high-performance cryptographic function.

The one MBA chip supports six 1 GB/sec Self-Timed Interconnect (STI) busses, giving the z800 a total I/O bandwidth of 6 GB/sec. Four STIs attach to the I/O cage and two can be used for Integrated Cluster Bus 3rd Generation (ICB3) high-speed Parallel Sysplex attachments. In addition to ICB3s, the z800 can utilize InterSystem Coupling Facility 3rd Generation (ISC3) and IC3 links for Parallel Sysplex attachments, making it a fully functional machine which can be used as a server, a Coupling Facility, or a server with an Integrated Coupling Facility.

[Figure: z800 processor card layout — MCM, MC0/MC1 memory controllers, optional CRYPT0/CRYPT1 modules, Key Store, four memory banks (8, 16, 24 or 32 GB), six STI ports and the main edge connector.]
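As a quick sanity check on the STI figures quoted above (six 1 GB/s busses, four to the I/O cage and two for ICB3), the following sketch simply tallies the bandwidth budget; the variable names are illustrative assumptions:

```python
# Tally the z800 STI bandwidth budget quoted in the text above.
STI_COUNT = 6          # STI busses driven by the single MBA chip
STI_SPEED_GB_S = 1.0   # each Self-Timed Interconnect runs at 1 GB/s
STI_TO_IO_CAGE = 4     # STIs attached to the 16-slot I/O cage
STI_TO_ICB3 = 2        # STIs usable for ICB3 coupling attachments

total_bandwidth_gb_s = STI_COUNT * STI_SPEED_GB_S
assert STI_TO_IO_CAGE + STI_TO_ICB3 == STI_COUNT  # every STI is accounted for
print(f"Aggregate z800 I/O bandwidth: {total_bandwidth_gb_s} GB/s")
```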

There are minor differences between a z800 and z900:

Design Differences
- The z800 has a larger entry memory configuration (8 GB vs. the z900's 5 GB).
- The z800 has larger memory increments (8 GB), minimizing the need to upgrade memory.
- The z800 has no partial memory restart function.
- The z800 has one Primary Memory Array vs. the z900's two or four.
- The z800 has no concurrent memory upgrade.
- The z800 has a cycle time of 1.6 ns vs. 1.3 ns for the z900.
- The z800 has no Internal Battery feature.
- The z800 has one power phase.
- The z800 has no native parallel channels. Parallel channel connectivity is obtained via ESCON and the OPTICA converter.
- The z800 has no OSA-2 support.
- The z800 has a smaller MCM which scales to a 4-way.
- All z800s have one SAP standard. The z900 models have two or three SAPs standard.
- There is no spare PU on the z800 4-way. All z900s have at least one spare.
- The z800 has a smaller entry point (performance) than the z900.
- There are four z800 uniprocessors.
- The z800 is air cooled (no closed-loop liquid cooling).
- Not all models can be concurrently upgraded.
- The z800 uses less power, cooling and space.
- The z800 requires 30 inches of service clearance on all sides.

Offering Differences
- The z800 offers a Dedicated Linux Model.
- Software is priced differently.
- The Cryptographic Coprocessor is optional.
- z/OS.e is available on the z800.

The z800 has been designed to provide the platform for the reliability, availability and serviceability that demanding 24x365 applications need. The z800's technology integration enhances reliability by providing fewer chances for failure, and the use of highly reliable burned-in technology assists in eliminating early-life failures. Robust design features such as constant memory error checking and correction, transparent sparing and concurrent maintenance allow robust availability. The z800 has two service elements. The z800 fits efficiently into a single frame which has a smaller footprint than the z900's base frame.
The z800 is an extension of the z900 design to allow small and medium size customers to exploit the function and features of the zSeries and its benefits to e-business applications.

[Figure: z800 5-PU logical structure — a processor card with the MCM (PU0-PU4, each with L1 cache, sharing 8 MB of L2 cache built from one cache control chip and two cache data chips), MC0/MC1 memory controllers, optional Crypto 0/1 modules, up to 32 GB of memory in left and right DIMM banks plus control store, a clock, and the MBA driving six 1 GB/s STI links to the I/O cage (up to 16 I/O adapters: 16-port ESCON, 2-port FICON Express, 2-port OSA-E GbE/FEN/ATM/HSTR, ISC3, PCICC and PCICA) as well as 1 GB/s ICB3 links to another z800 or z900, and ETR links.]

[Charts: relative performance of z900 servers (based on ITRs), from the 1-way up through the 16-way zSeries, compared with 12-way G6, 10-way G5, 10-way G4 and 10-way G3 models; and relative performance of z800 servers, from the sub-uni models up through the 4-way z800 and 5-way z900, compared with Multiprise 2000 and Multiprise 3000 models.]

Note: These figures are intended to depict the relationship between different Parallel Enterprise Servers and zSeries in terms of increasing capacity within each family. Comparing the relative performance of one model to other models should be done using LSPR.

z800 Family Models

The z800 has a total of 10 models to offer flexibility in selecting a system to meet the customer's needs. Eight of the models are general purpose systems; one model is the Dedicated Linux Model 0LF, and the remaining model is the Coupling Facility Model 0CF. There is a wide range of upgrade options available, which are described below and shown on the following pages. Capacity Upgrade on Demand and Capacity BackUp (CBU) are available.

The z800 has also been designed to offer a high-performance and efficient I/O structure to meet the demands of e-business as well as high-demand transaction processing applications. The z800 has one 16-slot I/O cage to house the different I/O cards. Up to 240 ESCON channels (16 cards) will fit into the I/O cage; or a total of 16 FICON Express channels (eight cards) and 120 ESCON channels (eight cards) can be accommodated in a fully configured system.

There is one system design for the zSeries 800: a five Processor Unit (PU) MultiChip Module (MCM) with up to 32 GB of memory (entry storage is 8 GB). All z800 models have one memory bus. The PU has a cycle time of 1.6 nanoseconds. The z800 models and a discussion of configurations follow.

z800 General Purpose Models

These eight models are general purpose systems and range from a 1-way to a 4-way symmetrical multiprocessor (SMP) server. These models can be upgraded from one model to the next. All models have one System Assist Processor (SAP) as standard and six STI links for I/O attachment. Any of the spare PUs on the MCM can be assigned as a Central Processor (CP), Integrated Coupling Facility (ICF) engine or Integrated Facility for Linux (IFL) engine. Transparent sparing allows a spare PU, if available, to replace a failing CP, SAP, ICF or IFL.

z800 Dedicated Linux Model 0LF

The Model 0LF is the Dedicated Linux Model. All engines are IFLs, and the model can run one to four independent IFLs.
This model can be upgraded by adding IFLs until the maximum of four is reached. The fifth PU is an SAP. All IFLs run at full speed.

[Figure: z800 server summary — CMOS 8S with copper interconnect, 5 PUs, up to 4 CPs, 8-32 GB of memory, 1.6 ns cycle time and six STI buses, covering the general purpose models plus the Dedicated Linux Model 0LF and the Coupling Facility Model 0CF.]

z800 Coupling Facility Model 0CF

The Model 0CF is the standalone Coupling Facility model in the z800 family. This model can have one to four ICF engines, all of which run at full speed. It is recommended that the z800 CF Model 0CF be used in production data sharing configurations to isolate the physical Coupling Facility for availability and flexibility. The Coupling Facility Control Code (CFCC) on the z800 0CF is a 64-bit implementation, using the full addressing capability of the z/Architecture. This capability provides storage relief for any software that needs to use large lock structures; DB2 data sharing is just one example.

z800's Capacity Upgrade on Demand (CUoD)

Capacity Upgrade on Demand (CUoD) allows for the nondisruptive addition of one or more Central Processors (CPs), Internal Coupling Facilities (ICFs) and/or Integrated Facility for Linux (IFL) engines. Capacity Upgrade on Demand can very quickly add processors, up to the maximum number of available inactive engines. This provides customers with value for much needed dynamic growth in an unpredictable e-business world. The Capacity Upgrade on Demand function combined with Parallel Sysplex technology enables dynamic capacity upgrade capability.

The CUoD functions are:
- The z800 continues to support the dynamic CUoD function introduced on G5/G6.
- Nondisruptive CP, ICF and IFL upgrades are available on full engine general purpose models (Models 001, 002 and 003) within minutes.
- Dynamic upgrade of all I/O cards in the z800 I/O cage.

zSeries Server Capacity BackUp (CBU)

Capacity BackUp (CBU) is offered with the zSeries processors to provide reserved emergency backup CPU capacity for situations where customers have lost capacity in another part of their establishment and want to recover by adding reserved capacity on a designated zSeries system. A CBU system normally operates with a base CPU configuration and with a preconfigured number of additional Processor Units (PUs) reserved for activation in case of an emergency. The zSeries server technology is ideally suited for providing capacity backup, since the reserved CBU processing units are on the same technology building block, the MCM, as the regular CPs. Therefore, a single processor can support two diverse configurations with the same MCM. For CBU purposes, the z800 general purpose full engine models (001, 002, 003) can scale from a 1-way to a 4-way system. The following chart describes the possible nondisruptive CBU upgrades. Concurrent sub-uni or sub-dyadic CBU upgrades are not supported.

From/To   002   003   004
001       Yes   Yes   Yes
002       N/A   Yes   Yes
003       N/A   N/A   Yes

The base CBU configuration must have sufficient memory and channels to accommodate the potential needs of the larger CBU target machine. When capacity is needed in an emergency, the primary operation performed is activating the emergency CBU configuration, with the reserved PUs added into the configuration as CPs. Upon request from the customer, IBM can remotely activate the emergency configuration. This is a fast electronic activation that eliminates the time associated with waiting for an IBM CE to arrive on site to perform the activation. A customer request through the Hardware Master Console and Remote Support Facility could drive activation time down to minutes; a request by telephone (for customers without RSF) could drive activation to less than an hour. The z800 supports concurrent CBU downgrade on full engine models. This function enables a Capacity BackUp server to be returned to its normal configuration without an outage (i.e., power-on reset).

Automatic Enablement of CBU for Geographically Dispersed Parallel Sysplex (GDPS)

The intent of GDPS CBU support is to enable automatic management of the reserved PUs provided by the CBU feature in the event of a processor failure and/or a site failure. Upon detection of a site failure or planned disaster test, GDPS is designed to dynamically add PUs to the processors in the takeover site to restore processing power for mission-critical production workloads.

I/O Connectivity

The z800 contains an I/O subsystem infrastructure which uses an I/O cage that provides 16 I/O slots, compared to the G5/G6-style cage with 22 slots. ESCON, FICON Express, OSA-Express and ISC3 cards plug into the zSeries I/O cage. All I/O cards can be hot-plugged in the I/O cage for advanced availability.
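The CBU upgrade chart above can be encoded as a small lookup table. This sketch is illustrative only, reflects just the combinations shown in the chart, and the function name is an assumption:

```python
# Nondisruptive CBU upgrade paths for the z800 full-engine models,
# as given in the chart above (concurrent sub-uni or sub-dyadic CBU
# upgrades are not supported).
CBU_UPGRADES = {
    "001": {"002", "003", "004"},
    "002": {"003", "004"},
    "003": {"004"},
}

def cbu_upgrade_ok(base: str, target: str) -> bool:
    """True if the chart lists a nondisruptive CBU upgrade from base to target."""
    return target in CBU_UPGRADES.get(base, set())

print(cbu_upgrade_ok("001", "004"))  # True: a 1-way can back up as a 4-way
print(cbu_upgrade_ok("003", "002"))  # False: the chart lists no downward path
```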

The I/O cage takes advantage of an exclusive IBM packaging technology that provides a subsystem with approximately seven times higher bandwidth than the previous G5/G6 I/O cage. Each z800 model comes with one I/O cage in the frame (the frame also contains the processor CEC cage). The I/O cage, using 16-port ESCON cards, can hold 240 ESCON channels; previous packaging required three I/O cages for the same number of channels. For FICON Express, the I/O cage can accommodate up to 16 cards, or 32 FICON Express channels per cage.

Cage Layout and Options

Supported z800 channels and I/O adapters:
- ESCON (16 port)
- ISC3
- FICON Express
- PCICC
- PCICA
- OSA-Express Token Ring
- OSA-Express Gb Ethernet
- OSA-Express Fast Ethernet
- OSA-Express 155 ATM

Note: The I/O cage has eight slots in the front and eight slots in the back.

I/O Cards and Channels

The following topics describe the I/O cards and functionality supported by the I/O cage, as well as coupling links and virtual networks on the z800.

Up to 240 ESCON Channels

A 16-port channel card which plugs into the z800 I/O cage is used for all ESCON channel orders. Up to 15 ports are used for ESCON connectivity; one port is reserved as a spare. Channels are available in four-port increments and are allocated for maximum availability across multiple ESCON adapters. The chart below shows channel increments and the number of ESCON adapters needed.

[Table: ESCON configuration — number of channels vs. cards required; values not reproduced here.]

Up to 32 FICON Express Channels

The z800 supports up to 32 FICON Express channels. FICON Express is available in long wave (L) and short wave (S) operation. The FICON Express card has two channels per card; L and S cannot be intermixed on a single card. A maximum of 16 FICON Express cards can be installed in the I/O cage.
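The ESCON ordering rules above (four-channel increments, 16-port cards with 15 usable ports, 240-channel maximum) suggest a simple lower-bound calculation for the number of cards an order implies. This is an illustrative sketch, not the actual IBM configurator, which spreads channels across more cards for availability:

```python
import math

PORTS_PER_CARD = 15   # 16-port card with one port reserved as a spare
ORDER_INCREMENT = 4   # ESCON channels are ordered in four-port increments
MAX_CHANNELS = 240    # z800 maximum (16 cards x 15 usable ports)

def escon_cards_needed(channels: int) -> int:
    """Lower bound on 16-port ESCON cards for an order of `channels`.

    Orders are rounded up to the next four-channel increment; the real
    configurator spreads channels across more cards for availability,
    so treat this as a minimum, not the shipped configuration.
    """
    if channels <= 0:
        return 0
    ordered = math.ceil(channels / ORDER_INCREMENT) * ORDER_INCREMENT
    if ordered > MAX_CHANNELS:
        raise ValueError(f"z800 supports at most {MAX_CHANNELS} ESCON channels")
    return math.ceil(ordered / PORTS_PER_CARD)

print(escon_cards_needed(240))  # 16 cards for a fully configured system
```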


More information

Four important conclusions were drawn from the survey:

Four important conclusions were drawn from the survey: The IBM System z10 is an evolutionary machine in the history of enterprise computing. To realize the full value of an investment in the System z10 requires a transformation in the infrastructure supporting

More information

Agenda. Enterprise Application Performance Factors. Current form of Enterprise Applications. Factors to Application Performance.

Agenda. Enterprise Application Performance Factors. Current form of Enterprise Applications. Factors to Application Performance. Agenda Enterprise Performance Factors Overall Enterprise Performance Factors Best Practice for generic Enterprise Best Practice for 3-tiers Enterprise Hardware Load Balancer Basic Unix Tuning Performance

More information

Data Protection with IBM TotalStorage NAS and NSI Double- Take Data Replication Software

Data Protection with IBM TotalStorage NAS and NSI Double- Take Data Replication Software Data Protection with IBM TotalStorage NAS and NSI Double- Take Data Replication September 2002 IBM Storage Products Division Raleigh, NC http://www.storage.ibm.com Table of contents Introduction... 3 Key

More information

Why Switched FICON? (Switched FICON vs. Direct-Attached FICON)

Why Switched FICON? (Switched FICON vs. Direct-Attached FICON) WHITE PAPER MAINFRAME Why Switched FICON? (Switched FICON vs. Direct-Attached FICON) Organizations of all sizes are looking to simplify their IT infrastructures to reduce costs. Some might consider implementing

More information

Implementing Tivoli Storage Manager on Linux on System z

Implementing Tivoli Storage Manager on Linux on System z IBM Software Group Implementing Tivoli Storage Manager on Linux on System z Laura Knapp ljknapp@us.ibm.com 2006 Tivoli Software 2006 IBM Corporation Agenda Why use Linux on System z for TSM TSM Some basics

More information

Achieving Nanosecond Latency Between Applications with IPC Shared Memory Messaging

Achieving Nanosecond Latency Between Applications with IPC Shared Memory Messaging Achieving Nanosecond Latency Between Applications with IPC Shared Memory Messaging In some markets and scenarios where competitive advantage is all about speed, speed is measured in micro- and even nano-seconds.

More information

OPTIMIZING SERVER VIRTUALIZATION

OPTIMIZING SERVER VIRTUALIZATION OPTIMIZING SERVER VIRTUALIZATION HP MULTI-PORT SERVER ADAPTERS BASED ON INTEL ETHERNET TECHNOLOGY As enterprise-class server infrastructures adopt virtualization to improve total cost of ownership (TCO)

More information

Creating a World of Value for LINUX: S/390 Virtual Image Facility for LINUX and the IBM S/390 Integrated Facility for LINUX

Creating a World of Value for LINUX: S/390 Virtual Image Facility for LINUX and the IBM S/390 Integrated Facility for LINUX Software Announcement August 1, 2000 Creating a World of Value for LINUX: S/390 Virtual Image for LINUX and the IBM S/390 Integrated for LINUX Overview LINUX is one of the fastest growing operating systems,

More information

RELEASE NOTES. StoneGate Firewall/VPN v2.2.11 for IBM zseries

RELEASE NOTES. StoneGate Firewall/VPN v2.2.11 for IBM zseries RELEASE NOTES StoneGate Firewall/VPN v2.2.11 for IBM zseries Copyright 2006 Stonesoft Corp. All rights reserved. All trademarks or registered trademarks are property of their respective owners. Disclaimer:

More information

zframe: a technical overview for

zframe: a technical overview for ES : A Bottom Up View of High Tech Mainframe Options zframe: a technical overview for Mike Hammock Cornerstone Systems Inc. IBM zseries Enablement Solutions mhammock@csihome.com Cornerstone's zframe Objectives:

More information

Exam : IBM 000-851. : Iseries Linux Soluton Sales v5r3

Exam : IBM 000-851. : Iseries Linux Soluton Sales v5r3 Exam : IBM 000-851 Title : Iseries Linux Soluton Sales v5r3 Version : R6.1 Prepking - King of Computer Certification Important Information, Please Read Carefully Other Prepking products A) Offline Testing

More information

Windows Server 2008 R2 Hyper-V Live Migration

Windows Server 2008 R2 Hyper-V Live Migration Windows Server 2008 R2 Hyper-V Live Migration Table of Contents Overview of Windows Server 2008 R2 Hyper-V Features... 3 Dynamic VM storage... 3 Enhanced Processor Support... 3 Enhanced Networking Support...

More information

IBM Enterprise Linux Server

IBM Enterprise Linux Server IBM Systems and Technology Group February 2011 IBM Enterprise Linux Server Impressive simplification with leading scalability, high availability and security Table of Contents Executive Summary...2 Our

More information

An Oracle White Paper November 2010. Oracle Real Application Clusters One Node: The Always On Single-Instance Database

An Oracle White Paper November 2010. Oracle Real Application Clusters One Node: The Always On Single-Instance Database An Oracle White Paper November 2010 Oracle Real Application Clusters One Node: The Always On Single-Instance Database Executive Summary... 1 Oracle Real Application Clusters One Node Overview... 1 Always

More information

VERITAS Business Solutions. for DB2

VERITAS Business Solutions. for DB2 VERITAS Business Solutions for DB2 V E R I T A S W H I T E P A P E R Table of Contents............................................................. 1 VERITAS Database Edition for DB2............................................................

More information

OVERVIEW. CEP Cluster Server is Ideal For: First-time users who want to make applications highly available

OVERVIEW. CEP Cluster Server is Ideal For: First-time users who want to make applications highly available Phone: (603)883-7979 sales@cepoint.com Cepoint Cluster Server CEP Cluster Server turnkey system. ENTERPRISE HIGH AVAILABILITY, High performance and very reliable Super Computing Solution for heterogeneous

More information

Remote Copy Technology of ETERNUS6000 and ETERNUS3000 Disk Arrays

Remote Copy Technology of ETERNUS6000 and ETERNUS3000 Disk Arrays Remote Copy Technology of ETERNUS6000 and ETERNUS3000 Disk Arrays V Tsutomu Akasaka (Manuscript received July 5, 2005) This paper gives an overview of a storage-system remote copy function and the implementation

More information

IBM Tivoli Storage FlashCopy Manager Overview Wolfgang Hitzler Technical Sales IBM Tivoli Storage Management hitzler@de.ibm.com

IBM Tivoli Storage FlashCopy Manager Overview Wolfgang Hitzler Technical Sales IBM Tivoli Storage Management hitzler@de.ibm.com IBM Tivoli Storage FlashCopy Manager Overview Wolfgang Hitzler Technical Sales IBM Tivoli Storage Management hitzler@de.ibm.com Why Snapshots Are Useful for Backup Faster backups without taking applications

More information

Distribution One Server Requirements

Distribution One Server Requirements Distribution One Server Requirements Introduction Welcome to the Hardware Configuration Guide. The goal of this guide is to provide a practical approach to sizing your Distribution One application and

More information

ORACLE DATABASE 10G ENTERPRISE EDITION

ORACLE DATABASE 10G ENTERPRISE EDITION ORACLE DATABASE 10G ENTERPRISE EDITION OVERVIEW Oracle Database 10g Enterprise Edition is ideal for enterprises that ENTERPRISE EDITION For enterprises of any size For databases up to 8 Exabytes in size.

More information

Windows Server 2008 R2 Hyper-V Live Migration

Windows Server 2008 R2 Hyper-V Live Migration Windows Server 2008 R2 Hyper-V Live Migration White Paper Published: August 09 This is a preliminary document and may be changed substantially prior to final commercial release of the software described

More information

Cloud Based Application Architectures using Smart Computing

Cloud Based Application Architectures using Smart Computing Cloud Based Application Architectures using Smart Computing How to Use this Guide Joyent Smart Technology represents a sophisticated evolution in cloud computing infrastructure. Most cloud computing products

More information

IBM System Storage DS5020 Express

IBM System Storage DS5020 Express IBM DS5020 Express Manage growth, complexity, and risk with scalable, high-performance storage Highlights Mixed host interfaces support (Fibre Channel/iSCSI) enables SAN tiering Balanced performance well-suited

More information

SHARE Lunch & Learn #15372

SHARE Lunch & Learn #15372 SHARE Lunch & Learn #15372 Data Deduplication Makes It Practical to Replicate Your Tape Data for Disaster Recovery Scott James VP Global Alliances Luminex Software, Inc. Randy Fleenor Worldwide Data Protection

More information

Virtual Networking with z/vm Guest LANs and the z/vm Virtual Switch

Virtual Networking with z/vm Guest LANs and the z/vm Virtual Switch Virtual Networking with z/vm Guest LANs and the z/vm Virtual Switch Alan Altmark, IBM z/vm Development, Endicott, NY Note References to IBM products, programs, or services do not imply that IBM intends

More information

HITACHI DATA SYSTEMS USER GORUP CONFERENCE 2013 MAINFRAME / ZOS WALTER AMSLER, SENIOR DIRECTOR JANUARY 23, 2013

HITACHI DATA SYSTEMS USER GORUP CONFERENCE 2013 MAINFRAME / ZOS WALTER AMSLER, SENIOR DIRECTOR JANUARY 23, 2013 HITACHI DATA SYSTEMS USER HITACHI GROUP DATA CONFERENCE SYSTEMS 2013 USER GORUP CONFERENCE 2013 MAINFRAME / ZOS WALTER AMSLER, SENIOR DIRECTOR JANUARY 23, 2013 AGENDA Hitachi Mainframe Strategy Compatibility

More information

Hitachi Virtage Embedded Virtualization Hitachi BladeSymphony 10U

Hitachi Virtage Embedded Virtualization Hitachi BladeSymphony 10U Hitachi Virtage Embedded Virtualization Hitachi BladeSymphony 10U Datasheet Brings the performance and reliability of mainframe virtualization to blade computing BladeSymphony is the first true enterprise-class

More information

Hewlett Packard - NBU partnership : SAN (Storage Area Network) или какво стои зад облаците

Hewlett Packard - NBU partnership : SAN (Storage Area Network) или какво стои зад облаците Hewlett Packard - NBU partnership : SAN (Storage Area Network) или какво стои зад облаците Why SAN? Business demands have created the following challenges for storage solutions: Highly available and easily

More information

Server and Storage Virtualization with IP Storage. David Dale, NetApp

Server and Storage Virtualization with IP Storage. David Dale, NetApp Server and Storage Virtualization with IP Storage David Dale, NetApp SNIA Legal Notice The material contained in this tutorial is copyrighted by the SNIA. Member companies and individuals may use this

More information

HIGHLY AVAILABLE MULTI-DATA CENTER WINDOWS SERVER SOLUTIONS USING EMC VPLEX METRO AND SANBOLIC MELIO 2010

HIGHLY AVAILABLE MULTI-DATA CENTER WINDOWS SERVER SOLUTIONS USING EMC VPLEX METRO AND SANBOLIC MELIO 2010 White Paper HIGHLY AVAILABLE MULTI-DATA CENTER WINDOWS SERVER SOLUTIONS USING EMC VPLEX METRO AND SANBOLIC MELIO 2010 Abstract This white paper demonstrates key functionality demonstrated in a lab environment

More information

Continuous Data Protection. PowerVault DL Backup to Disk Appliance

Continuous Data Protection. PowerVault DL Backup to Disk Appliance Continuous Data Protection PowerVault DL Backup to Disk Appliance Continuous Data Protection Current Situation The PowerVault DL Backup to Disk Appliance Powered by Symantec Backup Exec offers the industry

More information

Business Value of the Mainframe

Business Value of the Mainframe Business Value of the Mainframe Jim Elliott Advocate Linux, Open Source, and Virtualization and Manager System z Operating Systems IBM Canada Ltd. 1 International zseries Oracle SIG 2007-04-17 2007 IBM

More information

Planning for Virtualization

Planning for Virtualization Planning for Virtualization Jaqui Lynch Userblue Jaqui.lynch@mainline.com http://www.circle4.com/papers/ubvirtual.pdf Agenda Partitioning Concepts Virtualization Planning Hints and Tips References 1 Partitioning

More information

Virtualization Standards for Business Continuity: Part 1

Virtualization Standards for Business Continuity: Part 1 The purpose of this series of articles is to define the policies, guidelines, standards, and procedures that provide the foundation of a virtualized environment enabling business continuity, disaster recovery,

More information

iseries Logical Partitioning

iseries Logical Partitioning iseries Logical Partitioning Logical Partition (LPAR) SYS1 1:00 Japan SYS2 10:00 USA SYS3 11:00 Brazil SYS4 15:00 UK ORD EMEPROGRESSO i5/os Linux i5/os AIX LPARs operate independently! iseries Partition

More information

High Availability Server Clustering Solutions

High Availability Server Clustering Solutions White Paper High vailability Server Clustering Solutions Extending the benefits of technology into the server arena Intel in Communications Contents Executive Summary 3 Extending Protection to Storage

More information

The Benefits of POWER7+ and PowerVM over Intel and an x86 Hypervisor

The Benefits of POWER7+ and PowerVM over Intel and an x86 Hypervisor The Benefits of POWER7+ and PowerVM over Intel and an x86 Hypervisor Howard Anglin rhbear@us.ibm.com IBM Competitive Project Office May 2013 Abstract...3 Virtualization and Why It Is Important...3 Resiliency

More information

Solution Brief Availability and Recovery Options: Microsoft Exchange Solutions on VMware

Solution Brief Availability and Recovery Options: Microsoft Exchange Solutions on VMware Introduction By leveraging the inherent benefits of a virtualization based platform, a Microsoft Exchange Server 2007 deployment on VMware Infrastructure 3 offers a variety of availability and recovery

More information

DELL RAID PRIMER DELL PERC RAID CONTROLLERS. Joe H. Trickey III. Dell Storage RAID Product Marketing. John Seward. Dell Storage RAID Engineering

DELL RAID PRIMER DELL PERC RAID CONTROLLERS. Joe H. Trickey III. Dell Storage RAID Product Marketing. John Seward. Dell Storage RAID Engineering DELL RAID PRIMER DELL PERC RAID CONTROLLERS Joe H. Trickey III Dell Storage RAID Product Marketing John Seward Dell Storage RAID Engineering http://www.dell.com/content/topics/topic.aspx/global/products/pvaul/top

More information

System i and System p. Customer service, support, and troubleshooting

System i and System p. Customer service, support, and troubleshooting System i and System p Customer service, support, and troubleshooting System i and System p Customer service, support, and troubleshooting Note Before using this information and the product it supports,

More information

Optimized Storage Solution for Enterprise Scale Hyper-V Deployments

Optimized Storage Solution for Enterprise Scale Hyper-V Deployments Optimized Storage Solution for Enterprise Scale Hyper-V Deployments End-to-End Storage Solution Enabled by Sanbolic Melio FS and LaScala Software and EMC SAN Solutions Proof of Concept Published: March

More information

Hitachi TagmaStore Universal Storage Platform and Network Storage Controller. Partner Beyond Technology

Hitachi TagmaStore Universal Storage Platform and Network Storage Controller. Partner Beyond Technology Hitachi TagmaStore Universal Storage Platform and Network Storage Controller Partner Beyond Technology Hitachi TagmaStore Universal Storage Platform and Network Storage Controller Having established a

More information

Dell High Availability Solutions Guide for Microsoft Hyper-V

Dell High Availability Solutions Guide for Microsoft Hyper-V Dell High Availability Solutions Guide for Microsoft Hyper-V www.dell.com support.dell.com Notes and Cautions NOTE: A NOTE indicates important information that helps you make better use of your computer.

More information

What s new in Hyper-V 2012 R2

What s new in Hyper-V 2012 R2 What s new in Hyper-V 2012 R2 Carsten Rachfahl MVP Virtual Machine Rachfahl IT-Solutions GmbH & Co KG www.hyper-v-server.de Thomas Maurer Cloud Architect & MVP itnetx gmbh www.thomasmaurer.ch Before Windows

More information

IBM. by zseries 900. LUGS Meeting 30.8.2001. Peter Stammbach IBM Schweiz Consulting IT Specialist, High End Servers peter.stammbach@ch.ibm.

IBM. by zseries 900. LUGS Meeting 30.8.2001. Peter Stammbach IBM Schweiz Consulting IT Specialist, High End Servers peter.stammbach@ch.ibm. IBM by zseries 900 Peter Stammbach IBM Schweiz Consulting IT Specialist, High End Servers peter.stammbach@ch.ibm.com What is LINUX Popular UNIX-like operating system Developed by Linus Torvalds in 1991

More information

Universal Data Access and Future Enhancements

Universal Data Access and Future Enhancements IBM Enterprise Storage Server (ESS) The Storage Server Standard For The New Millennium ESS provides:! Ultra High Availability! Massive Scalability 420GB to 11.2TB! High Performance with I/O Rates Exceeding

More information

Using Multi-Port Intel Ethernet Server Adapters to Optimize Server Virtualization

Using Multi-Port Intel Ethernet Server Adapters to Optimize Server Virtualization White Paper Intel Ethernet Multi-Port Server Adapters Using Multi-Port Intel Ethernet Server Adapters to Optimize Server Virtualization Introduction As enterprise-class server infrastructures adopt virtualization

More information

BROCADE PERFORMANCE MANAGEMENT SOLUTIONS

BROCADE PERFORMANCE MANAGEMENT SOLUTIONS Data Sheet BROCADE PERFORMANCE MANAGEMENT SOLUTIONS SOLUTIONS Managing and Optimizing the Performance of Mainframe Storage Environments HIGHLIGHTs Manage and optimize mainframe storage performance, while

More information

Primary Data Center. Remote Data Center Plans (COOP), Business Continuity (BC), Disaster Recovery (DR), and data

Primary Data Center. Remote Data Center Plans (COOP), Business Continuity (BC), Disaster Recovery (DR), and data White Paper Storage Extension Network Solutions Between Data Centers Simplified, Low Cost, Networks for Storage Replication, Business Continuity and Disaster Recovery TODAY S OPERATING CLIMATE DEMANDS

More information

SHARE in Pittsburgh Session 15591

SHARE in Pittsburgh Session 15591 Top 10 Things You Should Be Doing On Your HMC But You're NOT You Probably Are Tuesday, August 5th 2014 Jason Stapels HMC Development jstapels@us.ibm.com Agenda Setting up HMC for Remote Use Securing User

More information

Analyzing the Virtualization Deployment Advantages of Two- and Four-Socket Server Platforms

Analyzing the Virtualization Deployment Advantages of Two- and Four-Socket Server Platforms IT@Intel White Paper Intel IT IT Best Practices: Data Center Solutions Server Virtualization August 2010 Analyzing the Virtualization Deployment Advantages of Two- and Four-Socket Server Platforms Executive

More information

IMS Disaster Recovery

IMS Disaster Recovery IMS Disaster Recovery Part 1 Understanding the Issues February 5, 2008 Author Bill Keene has almost four decades of IMS experience and is recognized world wide as an expert in IMS recovery and availability.

More information

Achieving High Availability & Rapid Disaster Recovery in a Microsoft Exchange IP SAN April 2006

Achieving High Availability & Rapid Disaster Recovery in a Microsoft Exchange IP SAN April 2006 Achieving High Availability & Rapid Disaster Recovery in a Microsoft Exchange IP SAN April 2006 All trademark names are the property of their respective companies. This publication contains opinions of

More information

Arwed Tschoeke, Systems Architect tschoeke@de.ibm.com IBM Systems and Technology Group

Arwed Tschoeke, Systems Architect tschoeke@de.ibm.com IBM Systems and Technology Group Virtualization in a Nutshell Arwed Tschoeke, Systems Architect tschoeke@de.ibm.com and Technology Group Virtualization Say What? Virtual Resources Proxies for real resources: same interfaces/functions,

More information

Windows Server 2008 R2 Hyper V. Public FAQ

Windows Server 2008 R2 Hyper V. Public FAQ Windows Server 2008 R2 Hyper V Public FAQ Contents New Functionality in Windows Server 2008 R2 Hyper V...3 Windows Server 2008 R2 Hyper V Questions...4 Clustering and Live Migration...5 Supported Guests...6

More information

IMS Disaster Recovery Overview

IMS Disaster Recovery Overview IMS Disaster Recovery Overview Glenn Galler gallerg@us.ibm.com IBM Advanced Technical Skills (ATS) August 8, 2012 (1:30-2:30 pm) IBM Disaster Recovery Solutions IMS Recovery Solutions IMS databases are

More information

Implementing Network Attached Storage. Ken Fallon Bill Bullers Impactdata

Implementing Network Attached Storage. Ken Fallon Bill Bullers Impactdata Implementing Network Attached Storage Ken Fallon Bill Bullers Impactdata Abstract The Network Peripheral Adapter (NPA) is an intelligent controller and optimized file server that enables network-attached

More information

S/390 Virtual Image Facility for Linux (VIF)

S/390 Virtual Image Facility for Linux (VIF) S/390 Virtual Image Facility for Linux (VIF) WAVV 2000 Colorado Springs October, 2000 Agenda Introduction Product Overview Planning Installation Positioning Availability Introduction S/390 Virtual Image

More information

Without a doubt availability is the

Without a doubt availability is the June 2013 Michael Otey The Path to Five 9s Without a doubt availability is the DBA s first priority. Even performance ceases to matter if the database isn t available. High availability isn t just for

More information

CommuniGate Pro SIP Performance Test on IBM System z9. Technical Summary Report Version V03

CommuniGate Pro SIP Performance Test on IBM System z9. Technical Summary Report Version V03 CommuniGate Pro SIP Performance Test on IBM System z9 Technical Summary Report Version V03 Version : 03 Status : final Updated : 16 March 2007. PSSC IBM Customer Centre Montpellier March 16, 2007 Page

More information

IBM Tivoli Storage Manager

IBM Tivoli Storage Manager Centralized, automated data protection from laptops to mainframes IBM Tivoli Storage Manager Highlights Data backup and restore Manage data archive and retrieve Protection for 24x365 business-critical

More information

Best Practice of Server Virtualization Using Qsan SAN Storage System. F300Q / F400Q / F600Q Series P300Q / P400Q / P500Q / P600Q Series

Best Practice of Server Virtualization Using Qsan SAN Storage System. F300Q / F400Q / F600Q Series P300Q / P400Q / P500Q / P600Q Series Best Practice of Server Virtualization Using Qsan SAN Storage System F300Q / F400Q / F600Q Series P300Q / P400Q / P500Q / P600Q Series Version 1.0 July 2011 Copyright Copyright@2011, Qsan Technology, Inc.

More information

Removing Performance Bottlenecks in Databases with Red Hat Enterprise Linux and Violin Memory Flash Storage Arrays. Red Hat Performance Engineering

Removing Performance Bottlenecks in Databases with Red Hat Enterprise Linux and Violin Memory Flash Storage Arrays. Red Hat Performance Engineering Removing Performance Bottlenecks in Databases with Red Hat Enterprise Linux and Violin Memory Flash Storage Arrays Red Hat Performance Engineering Version 1.0 August 2013 1801 Varsity Drive Raleigh NC

More information

Fibre Channel Overview of the Technology. Early History and Fibre Channel Standards Development

Fibre Channel Overview of the Technology. Early History and Fibre Channel Standards Development Fibre Channel Overview from the Internet Page 1 of 11 Fibre Channel Overview of the Technology Early History and Fibre Channel Standards Development Interoperability and Storage Storage Devices and Systems

More information

Copyright 2013, Oracle and/or its affiliates. All rights reserved.

Copyright 2013, Oracle and/or its affiliates. All rights reserved. 1 Oracle SPARC Server for Enterprise Computing Dr. Heiner Bauch Senior Account Architect 19. April 2013 2 The following is intended to outline our general product direction. It is intended for information

More information

Cisco Active Network Abstraction Gateway High Availability Solution

Cisco Active Network Abstraction Gateway High Availability Solution . Cisco Active Network Abstraction Gateway High Availability Solution White Paper This white paper describes the Cisco Active Network Abstraction (ANA) Gateway High Availability solution developed and

More information

Feature Comparison. Windows Server 2008 R2 Hyper-V and Windows Server 2012 Hyper-V

Feature Comparison. Windows Server 2008 R2 Hyper-V and Windows Server 2012 Hyper-V Comparison and Contents Introduction... 4 More Secure Multitenancy... 5 Flexible Infrastructure... 9 Scale, Performance, and Density... 13 High Availability... 18 Processor and Memory Support... 24 Network...

More information

Virtualization Technologies and Blackboard: The Future of Blackboard Software on Multi-Core Technologies

Virtualization Technologies and Blackboard: The Future of Blackboard Software on Multi-Core Technologies Virtualization Technologies and Blackboard: The Future of Blackboard Software on Multi-Core Technologies Kurt Klemperer, Principal System Performance Engineer kklemperer@blackboard.com Agenda Session Length:

More information

SQL Server Storage Best Practice Discussion Dell EqualLogic

SQL Server Storage Best Practice Discussion Dell EqualLogic SQL Server Storage Best Practice Discussion Dell EqualLogic What s keeping you up at night? Managing the demands of a SQL environment Risk Cost Data loss Application unavailability Data growth SQL Server

More information

Achieving Mainframe-Class Performance on Intel Servers Using InfiniBand Building Blocks. An Oracle White Paper April 2003

Achieving Mainframe-Class Performance on Intel Servers Using InfiniBand Building Blocks. An Oracle White Paper April 2003 Achieving Mainframe-Class Performance on Intel Servers Using InfiniBand Building Blocks An Oracle White Paper April 2003 Achieving Mainframe-Class Performance on Intel Servers Using InfiniBand Building

More information