Mellanox OpenStack Solution Reference Architecture
Rev 1.3, January
NOTE: THIS HARDWARE, SOFTWARE OR TEST SUITE PRODUCT ( PRODUCT(S) ) AND ITS RELATED DOCUMENTATION ARE PROVIDED BY MELLANOX TECHNOLOGIES AS-IS WITH ALL FAULTS OF ANY KIND AND SOLELY FOR THE PURPOSE OF AIDING THE CUSTOMER IN TESTING APPLICATIONS THAT USE THE PRODUCTS IN DESIGNATED SOLUTIONS. THE CUSTOMER'S MANUFACTURING TEST ENVIRONMENT HAS NOT MET THE STANDARDS SET BY MELLANOX TECHNOLOGIES TO FULLY QUALIFY THE PRODUCT(S) AND/OR THE SYSTEM USING IT. THEREFORE, MELLANOX TECHNOLOGIES CANNOT AND DOES NOT GUARANTEE OR WARRANT THAT THE PRODUCTS WILL OPERATE WITH THE HIGHEST QUALITY. ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT ARE DISCLAIMED. IN NO EVENT SHALL MELLANOX BE LIABLE TO CUSTOMER OR ANY THIRD PARTIES FOR ANY DIRECT, INDIRECT, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES OF ANY KIND (INCLUDING, BUT NOT LIMITED TO, PAYMENT FOR PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY FROM THE USE OF THE PRODUCT(S) AND RELATED DOCUMENTATION EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Mellanox Technologies
350 Oakmead Parkway Suite 100
Sunnyvale, CA U.S.A.
Tel: (408) Fax: (408)

Mellanox Technologies, Ltd.
Beit Mellanox
PO Box 586 Yokneam Israel
Tel: +972 (0) Fax: +972 (0)

Copyright Mellanox Technologies, Ltd. All Rights Reserved.

Mellanox, Mellanox logo, BridgeX, ConnectX, CORE-Direct, InfiniBridge, InfiniHost, InfiniScale, MLNX-OS, PhyX, SwitchX, UFM, Virtual Protocol Interconnect and Voltaire are registered trademarks of Mellanox Technologies, Ltd. Connect-IB, FabricIT, Mellanox Open Ethernet, Mellanox Virtual Modular Switch, MetroX, MetroDX, ScalableHPC, Unbreakable-Link are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners.

Document Number:
Contents
1 Solution Overview
2 Accelerating Storage
3 Network Virtualization on Mellanox Adapters
  3.1 Performance Measurements
  3.2 Seamless Integration
4 Setup and Installation
  4.1 Hardware Requirements
  4.2 Software Requirements
  4.3 Prerequisites
  4.4 OpenStack Software Installation
  4.5 Troubleshooting
5 Setting Up the Network
  5.1 Configuration Examples
    Creating a Network
    Creating a Para-Virtualized vNIC Instance
    Creating an SR-IOV Instance
    Creating a Volume
    Attaching a Volume
  5.2 Verification Examples
    Instances Overview
    Connectivity Check
    Volume Check
List of Figures
Figure 1: Mellanox OpenStack Architecture
Figure 2: OpenStack Based IaaS Cloud POD Deployment Example
Figure 3: RDMA Acceleration
Figure 4: eSwitch Architecture
Figure 5: Latency Comparison
Figure 6: Network Virtualization
Figure 7: Mellanox MCX314A-BCBT, ConnectX-3 40GbE Adapter
Figure 8: Mellanox SX1036, 36x40GbE
Figure 9: Mellanox 40GbE, QSFP Copper Cable
Figure 10: OpenStack Dashboard Instances
Figure 11: OpenStack Dashboard, Launch Instance
Figure 12: OpenStack Dashboard, Launch Interface Select Network
Figure 13: OpenStack Dashboard, Volumes
Figure 14: OpenStack Dashboard, Create Volumes
Figure 15: OpenStack Dashboard, Volumes
Figure 16: OpenStack Dashboard, Manage Volume Attachments
Figure 17: VM Overview
Figure 18: Remote Console Connectivity
Figure 19: OpenStack Dashboard, Volumes
Figure 20: OpenStack Dashboard, Console
Preface

About this Document
This reference design presents the value of using Mellanox interconnect products and describes how to integrate the OpenStack solution (Havana release or later) with the end-to-end Mellanox interconnect products.

Audience
This reference design is intended for server and network administrators. The reader must have experience with the basic OpenStack framework and installation.

References
For additional information, see the following documents:

Table 1: Related Documentation
- Mellanox OFED User Manual (mellanox.com > Products > Adapter IB/VPI SW > Linux SW/Drivers)
- Mellanox software source packages
- OpenStack Website
- Mellanox OpenStack wiki page
- Mellanox Ethernet Switch Systems User Manual
- Mellanox Ethernet adapter cards
- Solutions space on Mellanox community
- OpenStack RPM package
- Mellanox eswitchd Installation for OpenFlow and OpenStack
- Troubleshooting
- Mellanox OFED Driver Installation and Configuration for SR-IOV

Revision History

Table 2: Document Revision History
Revision | Date | Changes
1.3 | Jan | Removed OpenFlow sections. Added related topics for OpenStack Havana release.
1.2 | Sep | Minor editing
1.1 | June 2013 | Added OpenFlow feature
1.0 | May 2013 | Initial revision
1 Solution Overview

Deploying and maintaining a private or public cloud is a complex task, with various vendors developing tools to address the different aspects of the cloud infrastructure, management, automation, and security. These tools tend to be expensive and create integration challenges for customers when they combine parts from different vendors.

Traditional offerings suggest deploying multiple network and storage adapters to run management, storage, services, and tenant networks. These also require multiple switches, cabling, and management infrastructure, which increases both upfront and maintenance costs. Other, more advanced offerings provide a unified adapter and first-level ToR switch, but still run multiple and independent core fabrics. Such offerings tend to suffer from low throughput because they do not provide the aggregate capacity required at the edge or in the core, and they deliver poor application performance due to network congestion and lack of proper traffic isolation.

Several open source cloud operating system initiatives have been introduced to the market, but none has gained sufficient momentum to succeed. Recently, OpenStack has managed to establish itself as the leading open source cloud operating system, with wide support from major system vendors, OS vendors, and service providers. OpenStack allows central management and provisioning of compute, networking, and storage resources, with integration and adaptation layers allowing vendors and/or users to provide their own plug-ins and enhancements.

Mellanox offers seamless integration between its products and OpenStack layers and provides unique functionality that includes application and storage acceleration, network provisioning, automation, hardware-based security, and isolation. Furthermore, using Mellanox interconnect products allows cloud providers to save significant capital and operational expenses through network and I/O consolidation and by increasing the number of virtual machines (VMs) per server.

Mellanox provides a variety of network interface cards (NICs) supporting one or two ports of 10GbE, 40GbE, or 56Gb/s InfiniBand. These adapters simultaneously run management, network, storage, messaging, and clustering traffic. Furthermore, these adapters create virtual domains within the network that deliver hardware-based isolation and prevent cross-domain traffic interference. In addition, Mellanox Virtual Protocol Interconnect (VPI) switches deliver the industry's most cost-effective and highest-capacity switches (supporting up to 36 ports of 56Gb/s). When deploying large-scale, high-density infrastructures, leveraging Mellanox converged network VPI solutions translates into fewer switching elements, far fewer optical cables, and simpler network design.

Mellanox plug-ins are included in-box in the Havana release, which provides out-of-the-box support for Mellanox InfiniBand and Ethernet components in Nova, Cinder, and Neutron.
Mellanox integration with OpenStack provides the following benefits:
- Cost-effective and scalable infrastructure that consolidates the network and storage to a highly efficient flat fabric, increases the VM density, commoditizes the storage infrastructure, and linearly scales to thousands of nodes
- Delivers the best application performance with hardware-based acceleration for messaging, network traffic, and storage
- Easy to manage via standard APIs, with native integration with the OpenStack Neutron (network) and Cinder (storage) provisioning APIs
- Provides tenant and application security/isolation, end-to-end hardware-based traffic isolation, and security filtering

Mellanox designed its end-to-end OpenStack cloud solution to offer seamless integration between its products and OpenStack services. By using Mellanox 10/40GbE and FDR 56Gb/s adapters and switches with the OpenStack Havana release, customers can gain a significant improvement in block storage access performance with Cinder. In addition, customers can deploy an embedded virtual switch to run virtual machine traffic with bare-metal performance, hardened security, and QoS, all with simple integration.

Mellanox has partnered with the leading OpenStack distributions to allow customers to confidently deploy an OpenStack cloud, with proven interoperability and integrated support. For more information click here.

Figure 1: Mellanox OpenStack Architecture
2 Accelerating Storage

Data centers rely on communication between compute and storage nodes, as compute servers constantly read and write data from the storage servers. In order to maximize the server's application performance, communication between the compute and storage nodes must have the lowest possible latency, highest possible bandwidth, and lowest CPU utilization.

Figure 2: OpenStack Based IaaS Cloud POD Deployment Example

Storage applications that rely on the iSCSI over TCP protocol stack continuously interrupt the processor in order to perform basic data movement tasks (packet sequencing and reliability tests, re-ordering, acknowledgements, block-level translations, memory buffer copying, etc.). This causes data center applications that rely heavily on storage communication to suffer from reduced CPU efficiency, as the processor is busy sending data to and from the storage servers rather than performing application processing. The data path for applications and system processes must wait in line with protocols such as TCP, UDP, NFS, and iSCSI for their turn using the CPU. This not only slows down the network, but also uses system resources that could otherwise have been used for executing applications faster.

The Mellanox OpenStack solution extends the Cinder project by adding iSCSI running over RDMA (iSER). Leveraging RDMA, Mellanox OpenStack delivers 6X better data throughput (for example, increasing from 1GB/s to 5GB/s) while simultaneously reducing CPU utilization by up to 80% (see Figure 3). Mellanox ConnectX-3 adapters bypass the operating system and CPU by using RDMA, allowing much more efficient data movement. iSER capabilities are used to accelerate hypervisor traffic, including storage access, VM migration, and data and VM replication. The use of RDMA shifts data movement processing to the Mellanox ConnectX-3 hardware, which provides zero-copy message transfers for SCSI packets to the application, producing significantly faster performance, lower network latency, lower access time, and lower CPU overhead. iSER can provide 6X faster performance than traditional TCP/IP-based iSCSI. The
iSER protocol unifies the software development efforts of both the Ethernet and InfiniBand communities, and reduces the number of storage protocols a user must learn and maintain.

RDMA bypass allows the application data path to effectively skip to the front of the line. Data is provided directly to the application immediately upon receipt without being subject to various delays due to CPU load-dependent software queues. This has three effects:
- There is no waiting, which means that the latency of transactions is incredibly low.
- Because there is no contention for resources, the latency is deterministic, which is essential for offering end users a guaranteed SLA.
- Bypassing the OS using RDMA results in significant savings in CPU cycles. With a more efficient system in place, those saved CPU cycles can be used to accelerate application performance.

In the following diagram, it is clear that by performing hardware offload of the data transfers using the iSER protocol, the full capacity of the link is utilized, up to the PCIe limit.

To summarize, network performance is a significant element in the overall delivery of data center services and benefits from high speed interconnects. Unfortunately, the high CPU overhead associated with traditional storage adapters prevents systems from taking full advantage of these high speed interconnects. The iSER protocol uses RDMA to shift data movement tasks to the network adapter and thus frees up CPU cycles that would otherwise be consumed executing traditional TCP and iSCSI protocols. Hence, using RDMA-based fast interconnects significantly increases data center application performance levels.

Figure 3: RDMA Acceleration
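To take advantage of the iSER acceleration described above, the Cinder volume service on the storage node is pointed at the iSER transport instead of plain iSCSI. The following is a minimal sketch only, assuming the LVM/iSER driver name and the openstack-config utility available on a RHEL-based Havana installation; verify the exact option names against the Mellanox OpenStack wiki and the Cinder documentation for the release in use:

$ openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.lvm.LVMISERDriver
$ service openstack-cinder-volume restart

With this driver in place, volumes created in Section 5 are exported over iSER, and compute nodes with Mellanox OFED installed attach them through RDMA transparently.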
3 Network Virtualization on Mellanox Adapters

Single Root I/O Virtualization (SR-IOV) allows a single physical PCIe device to present itself as multiple devices on the PCIe bus. Mellanox ConnectX-3 adapters are capable of exposing up to 127 virtual instances called Virtual Functions (VFs). These virtual functions can then be provisioned separately. Each VF can be viewed as an additional device associated with the Physical Function. It shares the same resources with the Physical Function, and its number of ports equals those of the Physical Function. SR-IOV is commonly used in conjunction with an SR-IOV-enabled hypervisor to provide virtual machines with direct hardware access to network resources, thereby improving performance.

Mellanox ConnectX-3 adapters equipped with an onboard embedded switch (eSwitch) are capable of performing layer-2 switching for the different VMs running on the server. Using the eSwitch yields even higher performance levels and, in addition, improves security and isolation.

Figure 4: eSwitch Architecture

eSwitch main capabilities and characteristics:
- Virtual switching: creating multiple logical virtualized networks. The eSwitch offload engines handle all networking operations up to the VM, thereby dramatically reducing software overheads and costs.
- Performance: The switching is handled in hardware, as opposed to other applications that use a software-based switch. This enhances performance by reducing CPU overhead.
- Security: The eSwitch enables network isolation (using VLANs) and anti-MAC spoofing.
- Monitoring: Port counters are supported.

3.1 Performance Measurements

Many data center applications benefit from low latency network communication, while others require deterministic latency. Using regular TCP connectivity between VMs can create high latency and unpredictable delay behavior. Figure 5 shows the dramatic difference (20X improvement) delivered by SR-IOV connectivity running RDMA compared to a para-virtualized vNIC running a TCP stream. Using the direct connection of SR-IOV and the ConnectX-3 hardware eliminates the software processing that adds an unpredictable delay to packet data movement. The result is a consistently low latency that allows application software to rely on deterministic packet transfer times.

Figure 5: Latency Comparison
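To obtain the SR-IOV connectivity measured above, the ConnectX-3 driver must be loaded with virtual functions enabled on each compute node. The sketch below assumes a Mellanox OFED installation where the mlx4_core module options control VF creation; the exact parameter values (number of VFs, port types) depend on the deployment and should be taken from the Mellanox OFED User Manual:

$ cat /etc/modprobe.d/mlx4_core.conf
options mlx4_core num_vfs=16 probe_vf=0
$ service openibd restart
$ lspci | grep -i mellanox

After the driver reload, lspci lists the additional Virtual Function devices alongside the physical ConnectX-3 function, and these VFs become available for the Neutron agent to assign to instances.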
3.2 Seamless Integration

The eSwitch configuration is transparent to the OpenStack controller administrator. The eSwitch daemon installed on the server is responsible for hiding the low-level configuration. The administrator uses the standard OpenStack dashboard and REST APIs for fabric management, and the Neutron agent configures the eSwitch in the adapter card.

Figure 6: Network Virtualization
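The mapping between the Neutron physical network name (default in the examples in Section 5) and the actual ConnectX-3 interface is what allows the agent to program the correct eSwitch. The snippet below is an illustrative sketch only; the file path, option names, and service name are assumptions modeled on the Mellanox plugin packaging and must be checked against the Mellanox OpenStack wiki for the release in use:

$ cat /etc/neutron/plugins/mlnx/mlnx_conf.ini
[ESWITCH]
physical_interface_mappings = default:eth2

$ service neutron-mlnx-agent restart

Here default is the physical network name that later appears in the provider:physical_network field of the created networks and ports, and eth2 stands for the ConnectX-3 port that carries tenant traffic.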
4 Setup and Installation

The OpenStack environment should be installed according to the OpenStack documentation package.

For the OpenStack Havana release, the following installation changes should be applied:
- A Neutron server should be installed with the Mellanox Neutron plugin.
- The Mellanox Neutron agent, eSwitch daemon, and Nova VIF driver should be installed on the compute nodes.

For the OpenStack Grizzly release, the following installation changes should be applied:
- A Neutron server should be installed with the Mellanox Neutron plugin.
- A Cinder patch should be applied to the storage servers (for iSER support).
- The Mellanox Neutron agent, eSwitch daemon, and Nova VIF driver should be installed on the compute nodes.

4.1 Hardware Requirements
- Mellanox ConnectX-3 adapter cards
- 10GbE or 40GbE Ethernet switches
- Cables required for the ConnectX-3 card (typically using SFP+ connectors for 10GbE or QSFP connectors for 40GbE)
- Server nodes complying with OpenStack requirements
- Compute nodes with SR-IOV capability (BIOS and OS support)

In terms of adapters, cables, and switches, many variations are possible. Visit the Mellanox website for more information.

Figure 7: Mellanox MCX314A-BCBT, ConnectX-3 40GbE Adapter
Figure 8: Mellanox SX1036, 36x40GbE

Figure 9: Mellanox 40GbE, QSFP Copper Cable

4.2 Software Requirements
Supported OS: RHEL 6.4 or higher

4.3 Prerequisites
- Mellanox OFED (SR-IOV support) or higher
- KVM hypervisor complying with OpenStack requirements

1. Hardware is set up. To reduce the number of ports in the network, two different subnets can be mapped to the same physical interface on two different VLANs.
2. Mellanox OFED (SR-IOV enabled) is installed on each of the network adapters. For Mellanox OFED installation, refer to the Mellanox OFED User Manual (Installation chapter). Visit the Mellanox Community for verification options and adaptation.
3. The OpenStack packages are installed on all network elements.
4. The EPEL repository is enabled.

4.4 OpenStack Software Installation
For Mellanox OpenStack installation, visit the Mellanox OpenStack wiki pages.
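A quick way to confirm that the prerequisites in Section 4.3 are met is to query the adapter and driver state on each node before starting the OpenStack installation. The commands below are standard Mellanox OFED utilities; the output seen in practice differs per OFED version and adapter model:

$ ofed_info -s
$ lspci | grep -i mellanox
$ ibv_devinfo | grep -e fw_ver -e state

ofed_info -s reports the installed OFED version, lspci confirms that the ConnectX-3 adapter (and, once SR-IOV is enabled, its Virtual Functions) is visible on the PCIe bus, and ibv_devinfo shows the adapter firmware version and that the ports are active.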
4.5 Troubleshooting
Troubleshooting actions for an OpenStack installation with Mellanox plugins can be found in the Troubleshooting reference listed in Table 1.
5 Setting Up the Network

5.1 Configuration Examples
Once installation is completed, the network must be set up. Setting up a network consists of the following steps:
1. Creating a network.
2. Creating a VM instance. Two types of instances can be created:
   i. Para-virtualized vNIC.
   ii. SR-IOV direct path connection.
3. Creating a disk volume.
4. Binding the disk volume to the instance created.

Creating a Network
Use the commands neutron net-create and neutron subnet-create to create a new network and a subnet (net-example in the example).

$ neutron net-create net-example
Created a new network:
  Field                      Value
  admin_state_up             True
  id                         16b790d6-4f5a-4739-a f331b696
  name                       net-example
  provider:network_type      vlan
  provider:physical_network  default
  provider:segmentation_id   4
  shared                     False
  status                     ACTIVE
  subnets
  tenant_id                  ff6c1e4401adcafa0857aefe2e
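The output above shows that the plugin automatically allocated VLAN segmentation ID 4 on the default physical network. When a deployment needs a network pinned to a specific VLAN, the provider attributes can be passed explicitly; this is a sketch assuming the chosen VLAN (10 here) is inside the range configured for the Mellanox plugin and that the user has the required admin rights:

$ neutron net-create net-vlan10 --provider:network_type vlan --provider:physical_network default --provider:segmentation_id 10

The rest of the workflow (subnet creation, port creation, instance boot) is identical for such a network.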
$ neutron subnet-create net-example /24
Created a new subnet:
  Field             Value
  allocation_pools  {"start": " ", "end": " "}
  cidr              /24
  dns_nameservers
  enable_dhcp       True
  gateway_ip
  host_routes
  id                3c9ff1ae-218d-4020-b065-a2991d23bb72
  ip_version        4
  name
  network_id        16b790d6-4f5a-4739-a f331b696
  tenant_id         ff6c1e4401adcafa0857aefe2e

Creating a Para-Virtualized vNIC Instance
1. Using the OpenStack Dashboard, launch a VM instance using the Launch Instance button.
2. Insert all the required parameters and click Launch. This operation creates a macvtap interface on top of a Virtual Function (VF).

Figure 10: OpenStack Dashboard Instances
Figure 11: OpenStack Dashboard, Launch Instance

3. Select the desired network for the vNIC (net3 in the example).

Figure 12: OpenStack Dashboard, Launch Interface Select Network
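The same para-virtualized instance can also be launched from the CLI instead of the dashboard. The sketch below assumes the rh6.4p image used later in this chapter and the network created earlier; the instance name vm-paravirt is arbitrary. When a network (rather than a pre-created port) is passed to nova boot, the Mellanox VIF driver on the compute node sets up the macvtap-over-VF interface described above:

$ neutron net-list
$ nova boot --flavor m1.small --image rh6.4p --nic net-id=<net-id> vm-paravirt

Replace <net-id> with the id reported by neutron net-list for the desired network.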
Creating an SR-IOV Instance
1. Use the command neutron port-create for the selected network (net-example in the example) to create a port with vnic_type=hostdev.

$ neutron port-create net-example --binding:profile type=dict vnic_type=hostdev
Created a new port:
  Field                 Value
  admin_state_up        True
  binding:capabilities  {"port_filter": false}
  binding:host_id
  binding:profile       {"physical_network": "default"}
  binding:vif_type      hostdev
  device_id
  device_owner
  fixed_ips             {"subnet_id": "3c9ff1ae-218d-4020-b065-a2991d23bb72", "ip_address": " "}
  id                    a43d35f ae1-9a9d-d2d341b693d6
  mac_address           fa:16:3e:67:ad:ef
  name
  network_id            16b790d6-4f5a-4739-a f331b696
  status                DOWN
  tenant_id             ff6c1e4401adcafa0857aefe2e
2. Use the command nova boot to launch an instance with the created port attached.

$ nova boot --flavor m1.small --image rh6.4p --nic port-id=a43d35f ae1-9a9d-d2d341b693d6 vm3
  Property                              Value
  OS-EXT-STS:task_state                 scheduling
  image                                 rh6.4p
  OS-EXT-STS:vm_state                   building
  OS-EXT-SRV-ATTR:instance_name         instance
  OS-SRV-USG:launched_at                None
  flavor                                m1.small
  id                                    161da6a e23-9f6f ab4
  security_groups                       [{u'name': u'default'}]
  user_id                               b94edf2504c84223b58e
  OS-DCF:diskConfig                     MANUAL
  accessIPv4
  accessIPv6
  progress                              0
  OS-EXT-STS:power_state                0
  OS-EXT-AZ:availability_zone           nova
  config_drive
  status                                BUILD
  updated                               T07:32:42Z
  hostId
  OS-EXT-SRV-ATTR:host                  None
  OS-SRV-USG:terminated_at              None
  key_name                              None
  OS-EXT-SRV-ATTR:hypervisor_hostname   None
  name                                  vm3
  adminPass                             tite37tqrnbn
  tenant_id                             ff6c1e4401adcafa0857aefe2e
  created                               T07:32:41Z
  os-extended-volumes:volumes_attached  []
  metadata                              {}
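Once the instance reaches ACTIVE, it can be confirmed that the port was bound as a direct (hostdev) device rather than a para-virtualized one. The port ID below is the one created in step 1; the in-guest check assumes the guest image includes lspci and that the VF is passed through as a Mellanox Virtual Function:

$ neutron port-show a43d35f ae1-9a9d-d2d341b693d6 -F status -F binding:vif_type

and, from inside the guest console:

$ lspci | grep -i mellanox

The port status should move to ACTIVE with binding:vif_type hostdev, and the guest should list a ConnectX-3 Virtual Function as one of its PCI devices.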
Creating a Volume
Create a volume using the Volumes tab on the OpenStack dashboard. Click the Create Volume button.

Figure 13: OpenStack Dashboard, Volumes

Figure 14: OpenStack Dashboard, Create Volumes

Figure 15: OpenStack Dashboard, Volumes
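The same volume can be created from the CLI with the cinder client. The size and display name below are arbitrary examples; with the iSER backend from Section 2 configured, the volume is served over RDMA with no change to this command:

$ cinder create --display-name vol1 10
$ cinder list

cinder list should show the new 10 GB volume in the available state, ready to be attached in the next step.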
Attaching a Volume
Attach a volume to the desired instance. The device name should be /dev/vd<letter>, e.g. /dev/vdc.

Figure 16: OpenStack Dashboard, Manage Volume Attachments

5.2 Verification Examples

Instances Overview
Use the OpenStack Dashboard to view all configured instances.

Figure 17: VM Overview

Connectivity Check
There are many options for checking connectivity between instances, one of which is to simply open a remote console and ping the required host.
To launch a remote console for a specific instance, select the Console tab and launch the console.

Figure 18: Remote Console Connectivity

Volume Check
To verify that the created volume is attached to a specific instance, click the Volumes tab.

Figure 19: OpenStack Dashboard, Volumes

In addition, run the fdisk command from the instance console to see the volume details.

Figure 20: OpenStack Dashboard, Console
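The attach and verification steps can also be performed from the CLI. The instance name, volume ID, and IP address below are illustrative placeholders taken from or modeled on the earlier examples; substitute the values reported by nova list and cinder list in your environment:

$ nova volume-attach vm3 <volume-id> /dev/vdc
$ ping -c 3 <instance-ip>

and, from the instance console:

$ fdisk -l /dev/vdc

nova volume-attach exposes the volume to the instance as /dev/vdc, ping confirms network connectivity between instances, and fdisk -l shows the attached volume's size and partition details from inside the guest.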
