Implementing Storage in Intel Omni-Path Architecture Fabrics


White Paper
Implementing Storage in Intel Omni-Path Architecture Fabrics
Rev 1

A rich ecosystem of storage solutions supports Intel Omni-Path Architecture-based systems.

Executive Overview

The Intel Omni-Path Architecture (Intel OPA) is the next-generation fabric architected to deliver the performance and scaling needed for tomorrow's high performance computing (HPC) workloads. A rich ecosystem of storage offerings and solutions is key to building high performance Intel OPA-based systems. This white paper describes Intel OPA storage solutions and discusses the considerations involved in selecting the best storage solution. It is aimed at the OEM solution architect and others interested in understanding storage options for Intel OPA-based systems.

Overview

An Intel OPA-based system solution often requires connectivity to a parallel file system, enabling the Intel OPA-based compute nodes to access the file system storage. These storage solutions involve storage servers, routers, block storage devices, storage networks, and parallel file systems. This overview discusses the components that make up the storage solutions and describes some typical configurations.

Table of Contents

Executive Overview
Overview
Components
Configurations
Intel OPA Storage Software and Considerations
  Intel OPA Host Software
  Intel OPA HFI Coexistence with Mellanox* InfiniBand* HCA
Intel OPA Storage Solutions
  Intel OPA Direct Attached Storage
  Interoperability with Existing Storage
  Dual-homed
  LNet Router
  IP Router

Components

This section describes the terminology that will be used in this paper to discuss the components of a storage solution.

Figure 1: Storage Components (client side connection: IB, Intel OPA, Ethernet, etc.; storage servers running Lustre*, GPFS, NFS, etc.; storage connection: IB, FC, SAS, etc.; storage devices)

The client side connections are the connections to the compute cluster fabric. The storage servers run the file system server software, such as Lustre* Object Storage Server (OSS) software or General Parallel File System (GPFS) Network Shared Disk (NSD) software.

Storage servers can take different forms. The storage servers are often implemented as standalone Linux* servers with adapter cards for connectivity to both the client side connections and the storage connections. These are sometimes productized by storage vendors, and in some cases the storage servers are integrated into an appliance offering.

Figure 2: Storage Server Configurations (standalone storage servers or an integrated appliance)

The storage connection and storage devices generally take the form of a block storage device offered by the storage vendors. It is expected that there will be block storage devices with Intel OPA storage connections; however, this is not critical to enabling Intel OPA storage solutions. The client side connection is the focus for Intel OPA enablement.

Configurations

The connections from the storage servers to the Intel OPA fabric can take different forms, depending on the requirements of the system installation.

- Direct attached: the storage servers are directly attached to the Intel OPA fabric with Intel OPA adapter cards in the storage servers.
- Dual-homed: the storage servers are directly attached to the Intel OPA fabric and to another fabric, typically InfiniBand* (IB) or Ethernet. Adapter cards for both fabrics are installed in the storage servers.
- Routed: the storage servers are connected to the Intel OPA fabric through routers that carry traffic between the Intel OPA fabric and the client side connection of the storage servers, typically InfiniBand or Ethernet.

The dual-homed and routed configurations are typically used to provide connectivity to legacy storage or to share storage across multiple clusters with different fabrics.

Figure 3: Storage Configurations Overview (direct-attached, dual-homed, and routed connections between Intel OPA compute nodes, file system servers, and a legacy cluster)

Intel OPA Storage Software and Considerations

Intel OPA Host Software

Intel's host software strategy is to utilize the existing OpenFabrics Alliance interfaces, thus ensuring that today's application software written to those interfaces runs with Intel OPA with no code changes required. This immediately enables an ecosystem of applications to just work. ISVs may over time choose to implement changes to take advantage of the unique capabilities present in Intel OPA to further optimize their offerings. All of the Intel Omni-Path host software is open source.

Intel is working with major operating system vendors to incorporate Intel OPA support into future releases. Prior to being in-box with these distributions, Intel will release a delta package to support Intel OPA. The Intel software will be available on the Intel Download Center: https://downloadcenter.intel.com.

Table 1: Linux* Support

  LINUX* DISTRIBUTION    VERSIONS SUPPORTED
  Red Hat                RHEL 7.1 or newer
  SUSE                   SLES 12 SP0 or newer
  CentOS                 TBD version in 2016
  Scientific Linux       TBD version in 2016

Table 2: Lustre* Support

  LUSTRE* DISTRIBUTION        VERSIONS SUPPORTING INTEL OPA
  Community                   2.8 or newer
  Intel Foundation Edition    2.7.1 or newer
  Intel Enterprise Edition    2.4 (client support only); 3.0 or newer (client and server)

Table 3: GPFS Support

  GPFS SOFTWARE                    VERSION SUPPORTING INTEL OPA
  GPFS over IP                     Supported with current versions of GPFS
  GPFS over RDMA with Intel OPA    TBD, targeting 1H16

Intel OPA HFI Coexistence with Mellanox* InfiniBand HCA

In a multi-homed file system server, or in a Lustre Networking (LNet) or IP router, a single OpenFabrics Alliance (OFA) software environment supporting both an Intel OPA HFI and a Mellanox* InfiniBand HCA is required. The OFA software stack is architected to support multiple targeted network types. Currently, the OFA stack simultaneously supports iWARP for Ethernet, RDMA over Converged Ethernet (RoCE), and InfiniBand networks, and the Intel OPA network has been added to that list. As the OS distributions implement their OFA stacks, each will be validated to simultaneously support both Intel OPA host adapters and Mellanox Host Channel Adapters.

Intel is working closely with the major Linux distributors, including Red Hat* and SUSE*, to ensure that Intel OPA support is integrated into their OFA implementations. Once this is accomplished, simultaneous Mellanox InfiniBand and Intel OPA support will be present in the standard Linux distributions. Once support is present, it may still be necessary to update the OFA software to resolve critical issues. Linux distribution support is provided by the operating system vendor. Operating system vendors are expected to provide the updates necessary to address issues with the OFA stack that must be resolved prior to the next official Linux distribution release. This is the way that software drivers for other interconnects, such as Ethernet, work as well.

With the Lustre versions mentioned above, the software does not yet have the ability to manage more than one set of Lustre settings in a single node. A patch to add this capability is being tracked at https://jira.hpdd.intel.com/browse/LU-7101. With QDR and FDR InfiniBand, there are settings that work well for both IB and Intel OPA. With the advent of Enhanced Data Rate (EDR) InfiniBand, there is no set of settings that works well for both the InfiniBand and the Intel OPA devices. Therefore, coexistence of Intel OPA and EDR InfiniBand is not recommended until that Lustre patch is available. Once the patch is available, it will be included in Lustre source builds and in all future Intel Lustre releases as well as community releases. It is expected that there will be an IEEL version in 1Q16 that includes this patch.

Intel OPA Storage Solutions

Intel OPA Direct Attached Storage

High performance file systems with connectivity to an Intel OPA compute fabric, including Lustre and GPFS, are a core part of end-to-end Intel OPA solutions. When an Intel OPA-based system requires new storage, there are several options available.

- OEM/Customer built: OEMs or end customers put together the file system, procuring block storage from storage vendors, selecting an appropriate server, and obtaining the file system software either from the open source community or from vendors that offer supported versions. This option is straightforward with Intel OPA. See the Intel OPA Host Software section above for information about OS and file system software versions compatible with Intel OPA.
- Storage vendor offering: In some cases, the complete file system solution is provided by a storage vendor. These complete solutions can take the form of block storage with external servers running the file system software, or fully integrated appliance-type solutions.

Figure 4: Direct Attached Storage (compute nodes and file system servers on a single Intel OPA fabric)
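For a direct-attached Lustre configuration, the compute nodes and file system servers all sit on the Intel OPA fabric, so LNet only needs a single network defined over the HFI. The following is a minimal sketch, not the validated recipe: the interface name ib0, the MGS NID 192.168.1.10@o2ib, and the file system name lfs are placeholders.

    # /etc/modprobe.d/lustre.conf on clients and servers (single Intel OPA fabric):
    # expose the HFI's IPoFabric interface (assumed to be ib0) as LNet network o2ib0
    options lnet networks="o2ib0(ib0)"

    # On a compute node, mount the Lustre file system from the MGS
    # (MGS NID and file system name are placeholders):
    mount -t lustre 192.168.1.10@o2ib:/lfs /mnt/lfs

Intel OPA uses the same verbs-based o2ib LNet driver as InfiniBand, which is why no Lustre code changes are needed for this configuration.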

Interoperability with Existing Storage

In many cases, when a new Intel OPA-based system is deployed, there are requirements for access to existing storage. There are three main options to be considered:

- Upgrade the existing file system to Intel OPA
- Dual-home the existing file system to Intel OPA
- LNet and IP router solutions

If the existing file system can be upgraded to support Intel OPA connections, this is often the best solution. There are multiple viable options. For cases where the existing storage will only be accessed by the Intel OPA-based system, this upgrade can take the form of replacing existing fabric adapter cards with Intel OPA adapter cards. Software upgrades may also be required. See the Intel OPA Host Software section above for information about OS and file system software versions compatible with Intel OPA.

In other cases, the existing file system will need to continue to be accessed from an existing (non-Intel OPA) cluster and will also require access from the new Intel OPA-based system. In these cases, the file system can be upgraded to be dual-homed by adding Intel OPA host adapters to the file system servers.

In some cases, there are requirements for accessing existing storage from new Intel OPA-based systems and it is not possible to upgrade the existing file system. This can happen if the customer is unwilling to tolerate either the downtime that would be required to upgrade the existing file system or the risk in doing so. In these cases, a router-based solution can solve the interoperability challenge. For Lustre file systems, the LNet router component of Lustre can be used for this purpose. For other file systems, such as GPFS, the Linux IP router can be used.

Table 4: Storage Options Considerations

  DIRECT-ATTACHED / DUAL-HOMED
    Pros: Excellent bandwidth; predictable performance; no additional system complexity
    Cons: Requires downtime of the existing file system to perform software and hardware upgrades; may be viewed as higher risk

  ROUTED SOLUTION
    Pros: Easy to add to existing storage; minimal downtime of existing storage
    Cons: Bandwidth requirements may mean that multiple routers are required; complexity of managing extra pieces in the system

Dual-homed

In the dual-homing approach, an Intel OPA connection is provided directly from the file system server, providing the best possible bandwidth and latency solution. This is a good option when:

- The file system servers have a PCIe slot available to add the Intel OPA adapters
- The file system servers utilize OS and file system software versions compatible with Intel OPA, or can be upgraded to do so

For OEMs and customers who have built the file system themselves, this solution will be supported through the OS and file system software arrangements that are already in place. When the file system solution was provided by a storage vendor, that vendor can be engaged to perform and support the upgrade to dual-homed. A configuration sketch follows Figure 5.

Figure 5: Dual-homed Configuration (file system servers connected to both the Intel OPA compute fabric and an existing InfiniBand* fabric)
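On a dual-homed Lustre server, both fabrics are presented to LNet as separate networks. The following is a minimal sketch under stated assumptions: the Intel OPA HFI is assumed to appear as ib0 and the existing InfiniBand HCA as ib1, the network numbers are arbitrary, and (as noted above) per-network tuning on a single node depends on the LU-7101 patch.

    # /etc/modprobe.d/lustre.conf on a dual-homed file system server:
    # o2ib0 carries Lustre traffic on the Intel OPA fabric (HFI assumed on ib0),
    # o2ib1 carries Lustre traffic on the existing InfiniBand fabric (HCA assumed on ib1).
    options lnet networks="o2ib0(ib0),o2ib1(ib1)"

Clients on each fabric then address the server through the NID on their own network, for example 10.10.0.5@o2ib0 from the Intel OPA side and 10.20.0.5@o2ib1 from the InfiniBand side (addresses are placeholders).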

LNet Router

The LNet router is a standard component of the Lustre stack. When configured to support routing between an Intel OPA fabric and a legacy fabric, it provides routing of LNet storage traffic. To facilitate the implementation of Lustre routers in Intel OPA deployments, a validated reference design recipe is provided. This recipe provides instructions on how to implement and configure LNet routers to connect Intel OPA and InfiniBand fabrics.

The Lustre software supports dynamic load sharing between multiple targets. This is handled in the client, which has a router table and does periodic pings to all of its end points to check status. This capability is leveraged to provide load balancing across multiple LNet routers; round-robin load sharing is performed transparently. It also provides for failover: in the event of an LNet router failure, the load is automatically redistributed to the other available routers.

Figure 6: Router Configuration (compute nodes on the Intel OPA fabric reaching InfiniBand*-attached file system servers through routers)

Table 5: LNet Router Hardware Recipe

  HARDWARE                  RECOMMENDATION
  CPU                       Intel Xeon E5-2697 v3 (2.60 GHz, 1-2 core Turbo 3.6 GHz), 14 core
  Memory                    64 GByte RAM per node
  Server Platform           1U rack server or equivalent form factor with two x16 PCIe* slots
  Intel OPA connection      One x16 Chippewa Forest HFI
  IB/Ethernet connection    Mellanox FDR (Mellanox EDR will be supported in the future); Ethernet TBD

Table 6: LNet Router Software Recipe

  SOFTWARE    RECOMMENDATION
  Base OS     RHEL 7.1 + Intel OPA delta distribution, OR SLES 12 SP0 + Intel OPA delta distribution
  Lustre*     Community version 2.8, OR IEEL 2.4 or newer, OR FE 2.7.1 or newer

Note: The LNet Router Hardware Recipe (Table 5) and the LNet Router Software Recipe (Table 6) information is preliminary and based upon configurations that have been tested by Intel to date. Further optimization of CPU and memory requirements is planned.
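The LNet routing topology itself is expressed through the standard lnet module options on the router and on the nodes at each end. The following is a minimal sketch, not the validated recipe: it assumes the router's Intel OPA HFI is ib0 on network o2ib0 and its InfiniBand HCA is ib1 on o2ib1, and the router NIDs 10.10.0.1@o2ib0 and 10.20.0.1@o2ib1 are placeholders.

    # /etc/modprobe.d/lustre.conf on the LNet router node:
    # define both networks and enable LNet forwarding between them.
    options lnet networks="o2ib0(ib0),o2ib1(ib1)" forwarding="enabled"

    # On the Intel OPA compute nodes: reach the IB-side network o2ib1 through the
    # router's OPA NID (list several gateways to spread load across multiple routers).
    options lnet networks="o2ib0(ib0)" routes="o2ib1 10.10.0.1@o2ib0"

    # On the existing IB-attached file system servers: reach o2ib0 through the
    # router's InfiniBand NID.
    options lnet networks="o2ib1(ib1)" routes="o2ib0 10.20.0.1@o2ib1"

Listing multiple router NIDs in the routes option is what enables the transparent round-robin load sharing and failover behavior described above.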

Implementing in Intel Omni-Path Architecture s 7 CN o CN n Compute OPA LNet Routers IB Existing Infrastructure Figure 7: LNet Router The target peak aggregate forwarding rate is 85 percent of the InfiniBand line rate or 47 Gbps per LNet router server with FDR InfiniBand when following the above recipe. Other generations of Infini- Band are expected to work with this recipe; however, performance will vary. Ethernet connectivity to storage is also supported by the recipe. Additional bandwidth can be achieved by instantiating multiple LNet routers. The target message rate is at least 5 Million messages per second. The LNet router solution package will include: Hardware and software recipes for router solution (as shown here) Hardware requirement specifications Performance expectations Documentation/User Guides Support for the LNet router solution is provided through the customer s Lustre support path, as it is a standard part of the Lustre software stack. IP Router The IP router is a standard component in Linux. When configured to support routing between an Intel OPA fabric and a legacy fabric, it provides IP based routing that can be used for GPFS, NFS, and other file systems that use IP based traffic. To facilitate the implementation of IP routers in Intel OPA deployments, a validated reference design recipe is provided. This recipe provides instructions on how to implement and configure IP routers to connect Intel OPA and InfiniBand or Ethernet fabrics. The Virtual Router Redundancy Protocol (VRRP) v3 software in Linux is used to enable failover and load balancing with the IP routers. The VRRP is a computer networking protocol that provides for automatic assignment of available Internet Protocol (IP) routers to participating hosts. This increases the availability and reliability of routing paths via automatic default gateway selections on an IP subnetwork. IP routers can be configured for high availability using VRRP. This can be done with an active and a passive server. In a system configured with multiple routers, routers can be configured to be master on some subnets and slaves on the others, thus allowing the routers to be more fully utilized while still providing resiliency. The load-balancing capability is provided by VRRP using IP Virtual Server (IPVS). IPVS implements transport-layer load balancing, usually called Layer 4 LAN switching, as part of the Linux kernel. IPVS is incorporated into the Linux Virtual Server (LVS), where it runs on a host and acts as a load balancer in front of a cluster of real servers. IPVS can direct requests for TCP- and UDPbased services to the real servers, and make services of the real servers appear as virtual services on a single IP address. A weighted round-robin algorithm is used and different weights can been added to distribute load across file system servers that have different performance capabilities. CN o CN n Compute OPA IP Routers Ethernet or IB Existing Infrastructure (IB or Ethernet) GPFS or NFS Figure 8: Router

Table 7: IP Router Hardware Recipe

  HARDWARE                  RECOMMENDATION
  CPU                       Dual-socket current or future generation Intel Xeon processors with a 1-2 core Turbo frequency above 3.3 GHz and at least 10 cores per CPU; example: Intel Xeon E5-2697 v3 (2.60 GHz, 1-2 core Turbo 3.6 GHz), 14 core
  Memory                    64 GByte RAM per node
  Server Platform           1U rack server or equivalent form factor with two x16 PCIe slots
  Intel OPA connection      One x16 Chippewa Forest HFI
  IB/Ethernet connection    Mellanox FDR and EDR (other generations are expected to work; performance will vary); Ethernet TBD

Table 8: IP Router Software Recipe

  SOFTWARE    RECOMMENDATION
  Base OS     RHEL 7.1 + Intel OPA delta distribution, OR SLES 12 SP0 + Intel OPA delta distribution

Note: The IP Router Hardware Recipe (Table 7) and the IP Router Software Recipe (Table 8) information is preliminary and based upon configurations that have been tested by Intel to date. Further optimization of CPU and memory requirements is planned.

The target peak aggregate forwarding rate is 40 Gbps per IP router server with either EDR or FDR InfiniBand when following the above recipe. Other generations of InfiniBand are expected to work with this recipe; however, performance will vary. Ethernet connectivity to storage is also supported by the recipe. Additional bandwidth can be achieved by instantiating multiple IP routers. The target message rate is at least 5 million messages per second.

The IP router solution package will include:

- Hardware and software recipes for the router solution (as shown here)
- Hardware requirement specifications
- Performance expectations
- Documentation/User Guides

Support for the IP router solution is provided through the customer's Linux support path, as it is a standard part of the Linux software stack.

For more information about Intel Omni-Path Architecture and next-generation fabric technology, visit:
www.intel.com/hpcfabrics
www.intel.com/omnipath

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

THE INFORMATION PROVIDED IN THIS PAPER IS INTENDED TO BE GENERAL IN NATURE AND IS NOT SPECIFIC GUIDANCE. RECOMMENDATIONS (INCLUDING POTENTIAL COST SAVINGS) ARE BASED UPON INTEL'S EXPERIENCE AND ARE ESTIMATES ONLY. INTEL DOES NOT GUARANTEE OR WARRANT OTHERS WILL OBTAIN SIMILAR RESULTS. INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS AND SERVICES. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS AND SERVICES, INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE AND/OR USE OF INTEL PRODUCTS AND SERVICES, INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.

Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling, and provided to you for informational purposes. Any differences in your system hardware, software or configuration may affect your actual performance. Intel technologies' features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest Intel product specifications and roadmaps.

Copyright 2015 Intel Corporation. All rights reserved. Intel, Intel Xeon, and the Intel logo are trademarks of Intel Corporation in the U.S. and other countries. * Other names and brands may be claimed as the property of others. 333509-001US