Overview and Frequently Asked Questions: Sun Storage 10GbE FCoE PCIe CNA




Overview

Oracle's Fibre Channel over Ethernet (FCoE) technology provides an opportunity to reduce data center costs by converging data and storage networking. Standard TCP/IP and Fibre Channel traffic can both run on the same high-speed 10 Gb/sec Ethernet wire, resulting in cost savings through reduced adapter, switch, cabling, power, cooling, and management requirements. The Sun Storage Dual Port 10GbE FCoE PCIe Converged Network Adapter (CNA) from Oracle delivers high performance, reduces data center total cost of ownership (TCO), and protects current data center investment.

The Oracle Converged Network Adapter Advantage

The Sun Storage 10GbE FCoE PCIe CNAs have been designed specifically for use in Oracle's Sun servers. They exceed the business requirements of the enterprise data center with higher performance, investment protection, and increased power and cooling efficiency. Oracle solves enterprise-wide business challenges with a comprehensive offering of hardware, software, and services, providing customers with a total end-to-end solution. This end-to-end solution provides world-class interoperability testing, service and support, and support for Sun and third-party storage arrays. The Sun Storage 10GbE FCoE PCIe CNA architecture also simplifies LAN and SAN management to improve resource use and reduce your TCO.

Customer Benefits

The customer benefits of implementing FCoE in the data center include lower total cost of ownership (TCO), network consolidation, investment protection, and high performance.

Lower TCO

Reduce your data center costs through convergence with the Sun Storage 10GbE FCoE PCIe CNA. One converged network adapter can now do the work of a discrete FC host bus adapter and an Ethernet NIC. This convergence also means fewer cables, fewer switches, less power consumption, reduced cooling, and easier LAN and SAN management.
The Sun Storage 10GbE FCoE PCIe CNA also supports the iSCSI storage protocol using iSCSI software initiators, which are available with all major operating systems.

Network Consolidation

FCoE is an emerging high-performance networking technology that will power a new wave of network consolidation. FCoE requires a new class of converged network adapters, like Oracle's Sun Storage 10GbE FCoE CNA, that appear to the operating system as a Fibre Channel HBA and an Ethernet NIC consolidated into a single adapter. Converged network adapters from Oracle are designed to preserve customers' investment in proven Fibre Channel software stacks while providing the ability to run data networking traffic simultaneously. Network consolidation with Oracle's Sun Storage 10GbE FCoE PCIe CNAs could save Oracle customers up to 33% over a four-year period.

Investment Protection

Preserve your existing investment in Fibre Channel storage, and in core Ethernet switches and routers for data networking, with the new Sun Storage 10GbE FCoE PCIe CNA. It leverages the same software driver stacks that have been deployed and battle-hardened in millions of previous installations, and it preserves familiar FC concepts such as WWNs, FC-IDs, LUN masking, and zoning.

High Performance

Boost your system performance with 10 Gb/sec speed and full hardware offload for FCoE protocol processing. Your next-generation data center requires the high-performance capabilities of the Sun Storage 10GbE FCoE PCIe Converged Network Adapter. Cutting-edge 10 Gb/sec bandwidth eliminates performance bottlenecks in the I/O path with a 10X data rate improvement over existing 1 Gb/sec Ethernet solutions. The Sun Storage 10GbE FCoE PCIe CNA delivers up to 250,000 IOPS per port for truly superior performance. And full hardware offload for FCoE protocol processing reduces system CPU utilization for I/O operations, which leads to faster application performance and higher levels of consolidation in virtualized systems.

Frequently Asked Questions

What exactly is Fibre Channel over Ethernet?

Until now, storage and network I/O have existed on two separate networks, a model that 10 Gigabit Ethernet and Fibre Channel over Ethernet promise to change. As data centers converge their Ethernet and Fibre Channel networks, they can reap the benefits of lower costs, higher performance, reduced power consumption, simplified infrastructure, and seamless integration with existing storage. While FCoE is based on Ethernet, it is an enhanced Ethernet that has been optimized for low latency, quality of service, guaranteed delivery, and other functionality traditionally associated with a channel-type interface such as parallel SCSI, Fibre Channel, FICON, or ESCON.

Where does FCoE stand in the market?

FCoE will be used where performance, scalability, and low latency are paramount. It will be used to bring converged enhanced Ethernet into simplified environments, much where you would find Fibre Channel today.

What are the main applications for FCoE?

The main application of FCoE is in data center storage area networks (SANs).
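The 10X data-rate improvement cited under High Performance above translates directly into transfer time. The sketch below is a back-of-envelope illustration only; the payload size and the protocol-efficiency factor are assumptions, not measured values from the adapter.

```python
# Back-of-envelope comparison of 1 GbE vs 10 GbE transfer time.
# Payload size and efficiency factor are illustrative assumptions.

def transfer_seconds(payload_gb: float, link_gbps: float, efficiency: float = 0.9) -> float:
    """Seconds to move payload_gb gigabytes over a link_gbps Gb/sec link,
    assuming a fixed protocol-efficiency factor."""
    payload_gbits = payload_gb * 8  # bytes -> bits
    return payload_gbits / (link_gbps * efficiency)

one_gbe = transfer_seconds(100, 1.0)    # 100 GB over 1 GbE
ten_gbe = transfer_seconds(100, 10.0)   # same payload over 10 GbE

print(f"1 GbE : {one_gbe:.0f} s")
print(f"10 GbE: {ten_gbe:.0f} s")
print(f"speedup: {one_gbe / ten_gbe:.0f}x")  # the 10X data-rate improvement
```

Whatever efficiency factor is assumed, it cancels in the ratio, so the tenfold link speed yields a tenfold reduction in transfer time for bandwidth-bound workloads.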
FCoE is particularly applicable in data centers, where the cabling reduction it enables makes consolidation possible, as well as in server virtualization deployments, which often require many physical I/O connections per server.

How does FCoE help with data center consolidation?

With FCoE, network (IP) and storage (SAN) traffic can be consolidated onto a single network switch. This consolidation can reduce the number of network interface cards required to connect to disparate storage and IP networks, reduce the number of cables and switches, and reduce power and cooling costs.

With 10 Gigabit Ethernet coming and boosts in server and storage processing power, do you really need FCoE at all?

It depends on your environment and your market. Sure, 10 Gigabit Ethernet exists today, and adapter prices have come down dramatically. A 10 Gigabit Ethernet adapter with optics (transceivers), perhaps with a cable, is now available for around $1,000 to $1,200, which is the normal price for Fibre Channel adapters, and those prices continue to fall. It is unclear, as you go down-market into the SMB space, whether the need is for 10 GbE with iSCSI or 10 GbE with NAS, versus 4 Gb or 8 Gb Fibre Channel. Going forward, part of that question gets resolved in that a single adapter gets put into the server. That single adapter is an enhanced Ethernet adapter that can run both Fibre Channel stacks for talking to storage and Ethernet-based stacks supporting TCP/IP for iSCSI, NFS, CIFS, HTTP, and other traffic, all on one adapter. For redundancy purposes you put in a pair of them. The speed aspect of FCoE is certainly needed more in the enterprise, where there is more scaling. When speeds and feeds are discussed, whether it is 10 GbE for iSCSI or 10 Gb Fibre Channel in the case of FCoE, most of the focus centers on bandwidth and throughput.
While some applications don't require that much bandwidth, most applications do care about response time. They care about latency, or about the number of I/Os, transactions, files, videos, or messages processed per second. So, while bandwidth may not be the concern, there is certainly the benefit of lower response time and lower latency, as well as support for more IOPS.

When you start looking at consolidated environments, for example using server virtualization to aggregate multiple physical servers, speed is important. In the past, you might have had 10 servers, each running at just under a megabyte per second, hardly requiring 1 Gigabit Ethernet, let alone 1 Gb, 2 Gb, 4 Gb, or 8 Gb Fibre Channel. But when you aggregate 10 of those systems onto one physical server, it is straight math: all of a sudden it adds up to 8 Gb, 10 Gb, or more. So we start to see the aggregation play, whether it is in megabytes per second of throughput, in IOPS, transactions, files, or messages processed, or in lower latency. Additionally, although networks have gotten faster, storage has gotten larger, with more processing power, and you have more data to process now. Going forward, you will need those capabilities.

Why do we need FCoE if we already have iSCSI?

FCoE and iSCSI are both encapsulation protocols, but the similarities end there. FCoE encapsulates Fibre Channel frames so they can be transported over Ethernet. In contrast, iSCSI encapsulates SCSI so it can be transported over TCP/IP. FCoE is not routable over IP, but iSCSI is. There is a perception that iSCSI will be able to do everything that FCoE can do and vice versa. The reality is that iSCSI's fundamental value proposition has been ease of use and low cost, whereas the fundamental value proposition of FCoE is not low cost but convergence of multiple technologies -- Fibre Channel, Ethernet, and the Fibre Channel upper-level protocol stacks, such as SCSI over Fibre Channel and FICON -- coexisting on Ethernet without the need for IP. FCoE is positioned more toward the upper part of the market, where iSCSI really hasn't seen much adoption. Part of that has to do with the fact that for iSCSI to play in the upper market requires extra hardware and extra capabilities, which run counter to its low-cost value proposition.
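The encapsulation difference described above -- an FC frame carried directly in Ethernet versus SCSI carried over TCP/IP -- can be pictured as nested layers. The sketch below is a simplified illustration of that layering, not a protocol implementation; the layer names are informal.

```python
# Illustrative sketch of the FCoE vs. iSCSI encapsulation layering.
# Toy structures only -- not a real protocol stack.

def layers(stack_name: str) -> list[str]:
    """Return the encapsulation layers for a stack, outermost first."""
    stacks = {
        # FCoE: an FC frame carried directly in an Ethernet frame.
        # No IP layer, so FCoE cannot be routed across IP networks.
        "FCoE": ["Ethernet", "FC frame", "SCSI"],
        # iSCSI: SCSI commands carried over TCP/IP, so it is IP-routable.
        "iSCSI": ["Ethernet", "IP", "TCP", "iSCSI PDU", "SCSI"],
    }
    return stacks[stack_name]

print("FCoE :", " > ".join(layers("FCoE")))
print("iSCSI:", " > ".join(layers("iSCSI")))
print("FCoE routable over IP:", "IP" in layers("FCoE"))
```

The absence of an IP layer in the FCoE column is exactly why FCoE stays within the lossless Ethernet fabric while iSCSI can traverse routed IP networks.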
The midmarket is the high end of where iSCSI plays and the low end of where FCoE plays. Where iSCSI continues to find increasing market share is in the lower part of the market -- the midmarket to the lower part of the SMB space, maybe into SOHO, even the upper parts of SMB -- and it is certainly making some inroads into different parts of the enterprise. Likewise, FCoE will trickle down from the enterprise into the upper reaches of the SMB market. If the predominant focus is NAS or iSCSI (and only a few servers are FC), then it does not make sense to include the added cost of CNAs in the general infrastructure; it makes more economic sense to use Ethernet-only cards in general, and FC HBAs or CNAs in the few servers that require them.

The problem with iSCSI is that it relies on TCP/IP as its underlying transport. TCP/IP is a "lossy" protocol: as TCP/IP-based iSCSI packets are sent across an Ethernet network, packets may be lost or dropped and require retransmission. This can result in packets arriving at the target out of order, especially during busy periods. Congestion and out-of-order packet delivery are particularly undesirable for mission-critical applications in enterprise SANs, since they need the guaranteed, in-order delivery of packets that the Fibre Channel protocol can provide.

What effect does FCoE have on Fibre Channel?

Fibre Channel will, at least for the next couple of years, continue to be popular in environments that are risk-averse, that don't want to jump to FCoE, and that want to take more of a wait-and-see approach and go from 4 Gb to 8 Gb to 16 Gb Fibre Channel, and maybe even beyond that, depending on their comfort level.

What is the advantage of FCoE over Fibre Channel?

Data centers typically run multiple networks: Ethernet networks for client-to-server and server-to-server communications, and FC SANs for networked storage.
To support these various networks, data centers use separate redundant interface modules for each network segment (Ethernet network interface cards (NICs) and FC interfaces in their servers) and redundant pairs of switches at each layer in the network architecture. This parallel infrastructure increases capital costs, makes data center management more difficult, and reduces business flexibility. Consolidating I/O in the data center, allowing FC and Ethernet to share a single, integrated infrastructure, would help enterprises address these challenges smoothly, and FCoE technology is the answer. Storage architects embrace FCoE because they see that converging and leveraging I/O technology makes sense for them. They can swap out the physical and data link layers from Fibre Channel to Ethernet relatively easily, so the switch to FCoE is an easy change.

Enterprise storage and network architects are beginning to consider the implications of server consolidation and virtualization. As they see footprints shrink thanks to compact or blade servers and server virtualization, they will begin to question the proliferation of interconnects on the back end required to keep up with the I/O demands of these super servers. With the simplified FCoE topology, what formerly took a minimum of four interfaces per server (two NICs and two HBAs) now requires only one FCoE CNA per server. FCoE helps extend data center SANs to servers whose expansion capability was insufficient, or where the cost of Fibre Channel HBAs was prohibitive. Now virtually every data center server can leverage the benefits of centrally managed storage. In a typical 20-server rack with fully redundant connectivity, this reduces 40 Ethernet and 40 Fibre Channel connections down to only 40 FCoE CNA connections. The reduced power consumption that comes from using fewer NICs and fewer switches provides relief to organizations that are up against their data center power and cooling envelopes.

Why not InfiniBand?

Is InfiniBand a viable alternative to FCoE for enterprise customers? Yes and no. The primary advantages InfiniBand offers over FCoE are that it is more mature and more widely used in high-performance computing environments; it is also more economical and already offers higher bandwidth (40 Gb/sec) than Ethernet. But like FCoE, it requires companies to upgrade their network infrastructure, introduce InfiniBand-to-FC and/or InfiniBand-to-Ethernet gateways, and install InfiniBand HCAs on host servers. InfiniBand also has limited or no support for VMware and XenServer, and only a limited number of target storage systems (such as those from DataDirect Networks and LSI) support InfiniBand.

Is there a deployment cost savings from implementing and deploying FCoE over these other protocols?

FCoE CNAs lower CapEx by 50% through space savings from a reduced number of server ports, switch ports, and cables, and lower OpEx through a 66% reduction in power consumption and cooling. Preservation of familiar data and storage concepts also results in lower training and administrative costs. Performance improves by 150% compared to the most widely used infrastructures today, including increased link speeds and link utilization. And network consolidation over a four-year period can save customers up to 33% on capital and operational expenses.

Is there a significant performance advantage of FCoE CNAs compared to a 10GbE NIC with a software stack?

Yes. Due to the offload capabilities of the FCoE CNAs, there will be performance and scalability advantages over other networking solutions.

Can the FCoE CNA be used for general Ethernet data networking?

Yes, the FCoE CNAs are designed to be fully Ethernet compliant and support Ethernet networking.

Can the FCoE CNA support simultaneous NIC and iSCSI storage functionality?

Yes; using a software iSCSI initiator, the FCoE CNA can support simultaneous NIC and iSCSI storage functionality.

Can the FCoE CNA support simultaneous Fibre Channel and IP traffic?

Yes, the FCoE CNA is designed to support the simultaneous transport of both IP and Fibre Channel traffic.

Will the FCoE CNA be fully Fibre Channel compliant?

Yes, the FCoE CNA is designed to be fully Fibre Channel compliant.

What is Oracle's strategy for developing FCoE and NIC management tools?

At FCoE CNA general availability, Oracle will distribute standalone management tools for FCoE and NIC management. The long-term goal is to develop a new combined LAN + SAN management tool that will support all of the major operating systems. These management tools will heavily leverage the existing QLogic SANsurfer management tool.

Can the FCoE CNA run native Fibre Channel frames over a Fibre Channel SAN?

No. The Fibre Channel frame is encapsulated into an Ethernet frame by the CNA, so the physical port of the CNA transmits and receives Ethernet frames only.

Can the FCoE CNA connect to a 10GbE Ethernet switch available today?

The FCoE CNA can connect to a standard 10GbE Ethernet switch available today if it is used for data networking only. However, if the FCoE CNA is used for storage networking, it must be connected to an FCoE-capable switch.

Where can I find out more information?

You can contact your Oracle sales representative directly or call 1-800-ORACLE1. For more information about the Sun Storage 10GbE FCoE CNA on the web, go to: http://www.oracle.com/us/products/serversstorage/storage/storage-networking/index.html. Pricing for all Oracle networking products can be found at: www.oracle.com/pricing
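As a quick check of the cabling arithmetic quoted earlier (a fully redundant 20-server rack dropping from 40 Ethernet plus 40 Fibre Channel connections to 40 FCoE connections), the per-server counts below mirror the text; the function names are illustrative.

```python
# Cable-count arithmetic for the fully redundant 20-server rack example.
# Per-server counts (2 NICs, 2 HBAs, 2 CNAs) come from the text above.

def legacy_connections(servers: int, nics: int = 2, hbas: int = 2) -> tuple[int, int]:
    """(Ethernet, Fibre Channel) connections with discrete NICs and HBAs."""
    return servers * nics, servers * hbas

def fcoe_connections(servers: int, cnas: int = 2) -> int:
    """FCoE connections when each server carries a redundant pair of CNAs."""
    return servers * cnas

eth, fc = legacy_connections(20)
print(f"before: {eth} Ethernet + {fc} FC = {eth + fc} cables")
print(f"after : {fcoe_connections(20)} FCoE cables")
```

The convergence halves the rack's server-side cable count (80 down to 40), which is where the switch-port, space, and power reductions described above come from.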

Oracle Corporation Worldwide Headquarters 500 Oracle Parkway Redwood Shores, CA 94065 U.S.A. Worldwide Inquiries Phone +1.650.506.7000 +1.800.ORACLE1 Fax +1.650.506.7200 oracle.com Copyright 2010, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. UNIX is a registered trademark licensed through X/Open Company, Ltd. 0110