Real-World Best Practices for Successful Data Center Virtualization
The Tech Insider's Guide to I/O Virtualization
Contents

The Virtual I/O Revolution
  The Problem with Today's I/O
  Why Virtualized Servers are Different
  No Attractive Options with Traditional I/O
  Blade Servers Solve One Problem, Create Another
Virtual I/O Overview
  What I/O Virtualization is Not
  Why InfiniBand is an Ideal Interconnect for Virtual I/O
How I/O Virtualization Enhances Server Virtualization
  Six Use Cases
I/O Virtualization at a Glance
  Infrastructure Benefits
  Management Efficiency
Xsigo I/O Virtualization Solutions
  System Management
  Traffic Engineering
  Fast Server-to-Server Communications
  High-Availability Ethernet Connectivity
  High-Availability Fibre Channel Connectivity
  Management Highlights
  I/O Director Highlights
  Xsigo Connectivity
  Server Management
Deploying Virtual I/O
  Large-Scale Deployments
  High Availability with Virtual I/O
Cost Savings
Summary
The Virtual I/O Revolution

In today's leading-edge data centers, virtualization technologies boost the utilization of storage, server, and networking resources, allowing these assets to be deployed with much greater speed and flexibility than ever before. Server connectivity, on the other hand, remains much as it was ten years ago. IT managers must still contend with a maze of cables, cards, switches, and routers, and the management complexity that accompanies them. This inflexible I/O infrastructure drives up cost and slows resource deployment, making it impossible to accommodate changing business requirements in real time.

Virtual I/O changes the game. By replacing fixed I/O cards with virtual I/O resources, IT managers can significantly enhance data center agility. Tasks that formerly took weeks can now be done in minutes. Because connectivity is consolidated, the infrastructure becomes dramatically simpler: hundreds of cables are replaced by dozens, most I/O adapter cards are eliminated, and overall connectivity costs drop by up to 50%.

This guide reviews the factors driving virtual I/O adoption, how virtual machine technologies benefit from virtual I/O, and the specific solutions that are available now.

The Problem with Today's I/O

Server I/O is a major obstacle to effective server virtualization. Virtualized servers demand more network bandwidth, and they need connections to more networks and storage to accommodate the multiple applications they host. VMware best practices recommend dedicated connectivity for critical virtual machines and management networks. According to a recent survey of virtualization users, a typical virtualized server includes seven or more I/O connections, with some users reporting as many as 16 connections per server. Furthermore, a virtualized server is far more likely to need I/O reconfiguration as requirements change.

Virtualization introduces performance questions as well. When all I/O resources are shared, how do you ensure the performance and security of a specific application? For these reasons, IT managers are looking for new answers to the I/O question.

"While blade server hardware and virtualization software enhance data center management, today's server I/O still limits agility. I/O virtualization promises to deliver the next significant advancement. Just as server virtualization decouples applications from specific servers, virtual I/O will accelerate application management by removing the constraints of the fixed I/O infrastructure."
— Dr. Bernard S. Meyerson, IBM Fellow, VP of Strategic Alliances, CTO, IBM
Why Virtualized Servers are Different

Traditional servers do not necessarily encounter the same I/O issues as virtualized servers. Traditional servers are often deployed in the three-tier data center model (shown below), where servers are configured with the I/O necessary to meet each device's specific application requirements. Database servers, for example, connect only to the SAN and to the application tier. Servers in the application tier connect only to the database and Web tiers. These physically distinct networks provide security and availability while isolating devices from intrusion threats, denial-of-service attacks, and application failures.

Server virtualization completely changes the deployment model. The objective of virtualization is to increase server utilization by creating a dynamic pool of compute resources that can be deployed as needed. There are several requirements to achieve this objective:

- Flexible application deployment: Most applications should be able to run on any server.
- Multiple applications per server: If a server is underutilized, applications can be added to capitalize on available compute resources.
- Application mobility: Applications should be portable among all servers for high availability or load balancing purposes.

These requirements have important implications for server I/O:

- Increased demand for connectivity: For flexible deployment, servers need connectivity to all networks. A server may connect to the SAN, to the DMZ, and to every network in between. To ensure isolation, many users require separate physical connections to these networks, which results in numerous physical connections to each server.

[Figure: Traditional three-tier data center architecture.]
- Management networks: Virtualization management places additional demands on connectivity. VMware best practices, for example, dictate dedicated connectivity for the VMotion network.
- Bandwidth requirements: Virtualization increases the bandwidth demands of server I/O. Traditional servers often operate at only 10% processor utilization; virtualized devices may grow beyond 50%. A similar increase will occur in I/O utilization, revealing new bottlenecks in the I/O path.

No Attractive Options with Traditional I/O

Traditional I/O, designed for one-application-per-server deployments, can be repurposed for virtualized servers. But the options have limitations.

Use the server's pre-existing I/O: One option is to employ conventional I/O (a few NICs and perhaps a few HBAs) and share it among virtual machines. For several reasons, this is not likely to work. Applications may cause congestion during periods of peak demand, such as when VM backups occur. Performance issues are difficult to resolve since diagnostic tools for VMware are still relatively immature and not widely available. Even if an administrator identifies a bottleneck's source, corrective action may require purchasing more network cards or rebalancing application workloads on the VMs.

Another issue with pre-existing I/O is the sheer number of connections needed with virtualization. Beyond the numerous connections to data networks, virtualized servers also require dedicated interconnects for management and virtual machine migration. Servers are also likely to require connectivity to external storage. If your shop uses a SAN, this means FC cards in every server.

[Figure: Efficient virtualized data center architecture with virtual I/O.]
Scale up the I/O for each server: Most IT managers increase connectivity to accommodate virtualization. While this is a viable option, it is not always an attractive one. Virtualization users find they need anywhere from six to 16 I/O connections per server, which adds significant cost and cabling. More importantly, it also drives the use of larger servers to accommodate the needed I/O cards, which adds cost, space, and power requirements. The end result is that the cost of I/O can frequently exceed the cost of the server itself.

[Figure: Traffic congestion on a Gigabit Ethernet link.]

Blade Servers Solve One Problem, Create Another

Blade servers solve the cabling issue with an internal backplane that reduces the number of external connections. Limits on the number of connections, however, can be problematic for virtualization. Accommodating high levels of connectivity may prove either costly (sometimes requiring an expansion blade that consumes an extra slot) or impossible, depending on the requirements.
Virtual I/O Overview

Virtual I/O presents an option that is scalable, flexible, and easily managed. It removes the limitations of traditional I/O and provides optimal connectivity when running multiple VMs on a single physical server:

I/O Consolidation: Virtual I/O consolidates all storage and network connections onto a single cable per server (or two for redundancy). This reduces cabling and allows additional connections to be added whenever required. To fully consolidate connectivity, the single link must have these characteristics:
- Fibre Channel-ready: Virtual I/O carries Fibre Channel traffic for connection to FC storage.
- High performance: Virtual I/O has sufficient performance to meet all of the server's aggregate I/O requirements. The single link will never be a bottleneck.
- Quality of Service: Virtual I/O's QoS controls manage bandwidth allocation, which helps manage application performance in a shared-resource environment.

Transparent I/O Management: Virtual I/O appears as conventional connectivity resources (NICs and HBAs) to the application, OS, and hypervisor. It accomplishes this with virtual NICs (vNICs) and virtual HBAs (vHBAs). Much as virtual machines appear to applications as distinct servers, vNICs and vHBAs appear exactly as their physical counterparts would, which allows them to be easily and broadly deployed.

I/O Isolation: From a security standpoint, vNICs and vHBAs provide the same isolation as if they were physically separate adapter cards connected to separate cables.

[Figures: Data center without virtual I/O; data center with virtual I/O.]
Elements of Virtual I/O

Virtual I/O maintains the appearance of traditional I/O (from the standpoint of servers and networks) while eliminating or reducing the need for components like adapter cards, cables, and edge switches. With virtual I/O, system elements are built specifically for the I/O virtualization task.

[Figure: Virtual I/O in the data center.]

Virtual I/O director: Application servers connect to this top-of-rack device via individual links. The I/O director then connects to core Ethernet and SAN devices via conventional Ethernet and FC links.

High-speed interconnect: Each server connects to the I/O director via a dedicated link (for redundancy, two I/O directors may connect to each server by separate links). Because this link must accommodate both FC and Ethernet traffic, InfiniBand is today the only transport that meets the requirements.

Host Channel Adapter (HCA): A single card in each server (or two for redundancy) provides connectivity to the I/O director(s).

Virtual NIC (vNIC): Appears exactly as a physical network adapter to the application, operating system, and hypervisor. Capabilities of the vNIC include:
- Deployable on demand: vNICs can be created on the fly (no need for a reboot).
- Scalable: Multiple vNICs can be created on a single server.
- Mobile: vNICs can be moved among physical servers under the control of either the administrator or a management application.
- Persistent identity: MAC addresses remain persistent, even through reboots or migration. This allows server I/O reconfiguration without affecting network settings.

Virtual HBA (vHBA): Like the vNIC, the vHBA behaves exactly as a physical Fibre Channel host adapter. It can be deployed on demand, is scalable and mobile, and has a persistent WWN.

Quality of Service (QoS): User-defined QoS attributes define guaranteed performance parameters for individual vNICs and vHBAs.
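The on-demand, persistent-identity behavior described above can be pictured with a short sketch. This is a minimal Python model, not vendor code: the names (Vnic, IoDirector, create_vnic, migrate_vnic) are hypothetical, chosen only to illustrate that a vNIC's MAC address stays fixed while the hosting server changes.

```python
# Minimal sketch of on-demand virtual I/O deployment. All names here
# (Vnic, IoDirector, etc.) are illustrative, not the vendor's API.
from dataclasses import dataclass, field

@dataclass
class Vnic:
    name: str
    mac: str        # persistent identity: survives moves and reboots
    server: str     # physical server currently hosting this vNIC

@dataclass
class IoDirector:
    vnics: dict = field(default_factory=dict)

    def create_vnic(self, name: str, mac: str, server: str) -> Vnic:
        """Deploy a vNIC on the fly -- no server reboot required."""
        vnic = Vnic(name, mac, server)
        self.vnics[name] = vnic
        return vnic

    def migrate_vnic(self, name: str, target_server: str) -> None:
        """Move a vNIC to another server. The MAC travels with it, so
        network-side settings (ACLs, DHCP reservations) need no changes."""
        self.vnics[name].server = target_server

director = IoDirector()
director.create_vnic("web1-eth0", "00:13:97:aa:bb:01", server="esx-host-01")
director.migrate_vnic("web1-eth0", target_server="esx-host-02")
print(director.vnics["web1-eth0"])  # same MAC, new server
```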
What I/O Virtualization is Not

Virtual I/O is designed specifically to meet the needs of server I/O. It neither conflicts nor overlaps with server or storage virtualization. It is, in fact, fully complementary with those and other management solutions.

Virtual I/O is not a hypervisor: Virtual I/O provides network and storage connectivity for applications, operating systems, and hypervisors. It does not virtualize servers.

Virtual I/O is not storage virtualization: Virtual I/O presents storage to servers exactly as a conventional HBA does. There is no re-mapping of LUNs.

Why InfiniBand is an Ideal Interconnect for Virtual I/O

InfiniBand (IB) was developed specifically as an interconnect technology for converged I/O fabrics. The requirements for these fabrics are:

- Reliable transmission with flow control: IB provides reliable link and transport layers with flow control, all implemented in hardware.
- High performance: IB latency is very low (1 µs at the endpoints, 150 ns per switch hop), and throughput is high (10Gb/s and 20Gb/s data rates are now available).
- Efficient data transfer: IB provides RDMA, which enables zero-copy transfers, eliminating the need to waste CPU cycles and memory bandwidth on copying data.
- Dedicated resources for different traffic flows: IB provides virtual lanes with their own resources (such as switch buffers) for different traffic flows (such as networking and storage).
- Scalability: IB scalability to thousands of nodes has been proven in large clusters; a quarter of the TOP500 clusters use it as their interconnect.

InfiniBand delivers in all these critical areas for I/O fabrics, and it is now a mature technology offered as an option by major server vendors. Operating systems with mature IB support include Linux and Windows. The technology has been proven in many large clusters, including ones running mission-critical applications such as high-frequency trading. IB offers the best transport currently available for I/O virtualization, making it an ideal technology for consolidating data and storage traffic.

How I/O Virtualization Enhances Server Virtualization

Virtual I/O delivers connectivity that is ideally suited to virtual servers. The two are very much analogous: while server virtualization dynamically allocates high-speed processing resources among multiple applications, I/O virtualization dynamically allocates high-speed I/O resources among multiple demands. From the application perspective, both are managed as traditional resources.
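A back-of-the-envelope reading of those latency figures shows how little the fabric itself contributes. This sketch reads the 1 µs figure as the combined endpoint contribution; that is an assumption, since the text does not say whether the figure is per endpoint or per message.

```python
# Rough end-to-end fabric latency from the figures above. Assumes the
# 1 us endpoint figure is the combined contribution of both endpoints.
ENDPOINT_US = 1.0
SWITCH_HOP_NS = 150

for hops in (1, 3, 5):
    total_us = ENDPOINT_US + hops * SWITCH_HOP_NS / 1000
    print(f"{hops} switch hop(s): ~{total_us:.2f} us end-to-end")
```

Even a five-hop path stays well under 2 µs, which is why the interconnect is not the bottleneck in a consolidated I/O fabric.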
Benefits of virtual I/O include:

- Agility: You can deploy vNICs and vHBAs to virtualized servers without server reboots.
- Consolidation: With one cable, you can obtain the multiple connections needed for VMs and management.
- Continuity: Hypervisors view virtual resources exactly as they view physical cards.
- Resource isolation: VMs can be associated with specific I/O resources for security. The simplest security device is connectivity: if a particular server does not require access to a particular network, removing that connectivity is a simple, software-controlled process.
- Mobility: Virtual I/O can be migrated between physical servers to assist with VM migration.
- Predictability: Quality of Service delivers guaranteed bandwidth to specific VMs.

Six use cases show how these attributes enhance virtual server deployments.

1. Predictable Application Performance

It can be challenging to guarantee application performance and isolation when relying on shared resources. In production deployments, it is important to eliminate bottlenecks and guarantee throughput for critical applications.

With traditional I/O, IT managers resolve these issues by adding resources. Multiple connections to each server provide isolation and bandwidth, but they also drive up costs to the point that server I/O may be more expensive than the server itself. Virtual I/O provides a less costly, more easily managed alternative that delivers predictable application performance in three ways:

- Dynamically allocated bandwidth: 10-20Gb of bandwidth to each server is dynamically shared among virtual machines. Conventional I/O may provide only 1Gb of bandwidth to an application; with virtual I/O, all bandwidth is available when needed.
- Resource isolation: Connectivity can be assigned to specific VMs for I/O isolation.
- Quality of Service (QoS): Ensures that critical applications receive the bandwidth they require.

QoS helps ensure predictable application performance by delivering guaranteed bandwidth, even when resource contention occurs. Throughput is hardware-enforced and controlled by two user-defined settings (modeled in the sketch below):

- Committed information rate (CIR) guarantees a minimum bandwidth.
- Peak information rate (PIR) limits the maximum amount of bandwidth the resource can consume.
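The CIR/PIR semantics can be made concrete with a small allocation model. This is a minimal Python sketch of the floor/ceiling behavior only; the I/O Director enforces QoS in hardware, and this code is not how it does so.

```python
# Illustrative model of CIR/PIR semantics on a shared link. Assumes
# PIR >= CIR for every flow. Hardware enforcement works differently
# in detail; this only shows the guarantee/ceiling behavior.

def allocate_bandwidth(demands_gbps, cirs_gbps, pirs_gbps, link_gbps):
    """Grant each flow at least min(demand, CIR), never more than its
    PIR, handing out leftover link capacity in list order."""
    # Step 1: the guaranteed floor -- every flow gets up to its CIR.
    grants = [min(d, c) for d, c in zip(demands_gbps, cirs_gbps)]
    leftover = link_gbps - sum(grants)
    # Step 2: distribute spare capacity, capped by demand and PIR.
    for i, (d, p) in enumerate(zip(demands_gbps, pirs_gbps)):
        take = max(0, min(min(d, p) - grants[i], leftover))
        grants[i] += take
        leftover -= take
    return grants

# Three VMs on a shared 10Gb link: CIRs give critical apps a floor,
# PIRs keep any one VM from starving the others.
print(allocate_bandwidth(demands_gbps=[8, 6, 1],
                         cirs_gbps=[4, 2, 1],
                         pirs_gbps=[8, 4, 2],
                         link_gbps=10))   # -> [7, 2, 1]
```

Note that even when VM 1 demands 8Gb, VMs 2 and 3 still receive their committed rates; the spare capacity goes to VM 1 only up to its PIR.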
QoS settings can be managed in two ways:

- By virtual machine: QoS can be set per virtual machine, regardless of which vNIC or vHBA the VM is associated with. If the VM is moved to another server, the QoS setting remains with it. This powerful set-it-and-forget-it feature allows QoS management through VMotion events.
- By virtual I/O resource: QoS is set per virtual NIC or HBA. When the resource is migrated to another server, the QoS settings remain intact.

[Figure: Virtual I/O enables QoS for each virtual machine.]

In addition to isolating traffic, virtual I/O allows connectivity to be added based on need. This capability allows a virtualized data center to be both flexible and secure. For example, if only a few of the virtual machines require access to the Internet through the firewall DMZ, only the ESX servers hosting those virtual machines need connectivity to the firewall DMZ. By removing connectivity to the firewall DMZ, a potential vulnerability is eliminated from the servers. When demand for more DMZ connectivity occurs, the virtual I/O solution can establish that connectivity for additional ESX servers. Isolating traffic and connectivity improves deterministic performance of virtual machines and eliminates potential security risks that are prevalent in flat network architectures.
2. Lower Cost and Power

With conventional I/O, the connectivity requirements for virtualization often drive deployment of costly 4U servers. 1U servers may cost a fraction as much, but they are typically limited to two Fibre Channel and two Ethernet interfaces.

By eliminating connectivity constraints, virtual I/O enables 1U servers to be more widely deployed. With just two adapter cards, as many as 32 fully redundant vNICs and 12 redundant vHBAs may be deployed to a server. Fewer I/O cards means less power as well, saving as much as 40 watts per server. Edge switches and cables are also reduced, for a total capital savings of 30% to 50% on server I/O costs alone. Power savings associated with the reduced number of PCI cards and networking switches range from 4 to 6 kilowatts for a 120-server installation.

[Figure: With conventional I/O, more connectivity often requires 4U servers. With virtual I/O, 1U servers can be used.]
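The per-server figure above is consistent with the quoted installation-wide range; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check on the quoted power range.
servers = 120
card_savings_w = 40                       # watts saved per server (fewer I/O cards)
print(servers * card_savings_w / 1000)    # 4.8 kW from servers alone
# Eliminated edge switches add further savings, consistent with the
# quoted 4-6 kW range for a 120-server installation.
```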
3. Blade Server Enablement

Blade servers, like 1U rack-mount servers, typically have very limited I/O expansion capabilities. In many cases, a fully populated server blade can have only four Ethernet NICs and two Fibre Channel HBAs. This limited I/O capability significantly reduces the ability of blade environments to run multiple virtual machines per host: sharing I/O resources across many virtual machines results in conflicts and bottlenecks that could be avoided if more I/O resources were available.

Virtual I/O resolves the blade server I/O bottleneck. By replacing the traditional I/O resources with high-speed links and virtual resources, the virtual machines receive more bandwidth and better isolation. Most blade systems are available with InfiniBand cards that are proven compatible with virtual I/O.

4. I/O Resource Mobility

Virtual I/O eases the challenge of building fault-tolerant data center environments by enabling virtual I/O resources to be rapidly moved from one server to another. Because a server's I/O personality is typically defined by its connectivity, MAC addresses, and WWNs, moving the virtual I/O resources effectively moves the server personality. This enables the physical resource running a hypervisor to be quickly changed from one server to another in the event of a system failure or system upgrade.

[Figure: Server profiles can move from machine to machine.]

The I/O mobility inherent in virtual I/O helps enable:

- Disaster recovery: The exact I/O resources at one site can quickly be re-created at another, thus accelerating switchover.
- Server replacement: When a server fails, its full connectivity profile (including WWNs and MAC addresses) can be instantly moved to another server.
- Security: To ensure security, a server may be configured with only the connectivity currently needed. When requirements change, the configuration can be changed without entering the data center, and the IT manager is never forced to implement big flat networks or open storage.
5. Secure VMotion

In most VMware deployments, VMotion information travels over the same Ethernet switching infrastructure as production traffic. This creates a security exposure that can be eliminated with virtual I/O. By using the high-speed, low-latency fabric as the virtual machine infrastructure, VMotion traffic remains on a network that is both extremely fast and separate from production traffic. Virtual NICs and a dedicated Ethernet I/O module can create an internally switched, high-speed, low-latency network that allows VMotion to transfer data over an isolated Ethernet network at rates up to 10 Gbps.

[Figure: Virtual I/O ensures VMotion security.]

This approach:
- Improves security by isolating VMotion traffic from the rest of the data center
- Improves VMotion performance and reliability by creating a high-speed network for VMotion traffic
- Reduces the burden on the LAN by keeping all of the VMotion traffic within the Xsigo network
- Reduces the need for expensive high-speed networking ports and server NICs

6. Virtual Machine Migration Across Many Servers

Security requirements sometimes limit the use of VMotion. To migrate a VM from one server to another, VMotion requires that both servers simultaneously see the same networks and storage. All servers participating in the VMotion cluster would therefore need open access, a configuration that violates security requirements in some IT shops.

Virtual I/O enables an alternative to VMotion that allows virtual machines to migrate without the need for open access. By migrating I/O, a VM can be suspended on one server, then immediately resumed on another, without ever exposing storage to two servers simultaneously. This is accomplished in three steps (sketched below):

1. Suspend the virtual machine on Server A.
2. Migrate the application's storage and network connectivity (vHBAs and vNICs) to Server B.
3. Resume the virtual machine on Server B.
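The suspend/migrate/resume sequence is easy to picture as an orchestration script. The following Python sketch is hypothetical: suspend_vm, move_io_profile, and resume_vm are stand-ins for the hypervisor and I/O-director operations each step requires, not real VMware or Xsigo APIs.

```python
# Hypothetical orchestration of the three-step migration described above.
# These helper functions are illustrative placeholders only.

def suspend_vm(server: str, vm: str) -> None:
    print(f"suspending {vm} on {server}")

def move_io_profile(profile: str, target: str) -> None:
    print(f"moving I/O profile {profile} (vHBAs/vNICs, WWNs, MACs) to {target}")

def resume_vm(server: str, vm: str) -> None:
    print(f"resuming {vm} on {server}")

def migrate_vm(vm: str, io_profile: str, src: str, dst: str) -> None:
    """Move a VM without ever exposing its storage to two servers at once."""
    suspend_vm(src, vm)                # 1. quiesce the VM on Server A
    move_io_profile(io_profile, dst)   # 2. WWNs travel with the vHBA, so SAN
                                       #    zoning and LUN masking are untouched
    resume_vm(dst, vm)                 # 3. resume the VM on Server B

migrate_vm("db-vm01", "db-io-profile", src="server-a", dst="server-b")
```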
This simple process works because the vHBA maintains its WWN through the migration. The target LUNs and associated zoning therefore migrate with the vHBA, which allows the server administrator to resume the virtual machine on the new server without any storage configuration changes.
I/O Virtualization at a Glance

When deployed with server virtualization, I/O virtualization provides several operational efficiencies that save cost and time.

Infrastructure Benefits

Cables per server
  Virtual I/O: 2 cables
  Physical I/O: 6 to 16 cables

Redundancy on every I/O connection
  Virtual I/O: Yes
  Physical I/O: No, for most installations

Minimum server size
  Virtual I/O: 1U
  Physical I/O: 2U or 4U

Minimum server cost
  Virtual I/O: $2K
  Physical I/O: $10K

I/O expense per server (including switch ports)
  Virtual I/O: $2-3K
  Physical I/O: $4-6K

Quality of Service (QoS) management per interconnect
  Virtual I/O: QoS per virtual NIC or HBA, enforced in hardware. I/O resources and bandwidth are managed from the I/O Director.
  Physical I/O: QoS is managed either in the server (software consumes server resources) or at the switch (requires a switch that supports QoS). Neither provides central management of I/O and performance.

Quality of Service (QoS) management per virtual machine
  Virtual I/O: QoS per virtual machine, enforced in hardware. QoS attributes remain with the VM through migration.
  Physical I/O: QoS enforced per connection only.

VMotion interconnect
  Virtual I/O: Communicate directly from one vNIC to another via the InfiniBand fabric. No LAN traffic.
  Physical I/O: Dedicated Ethernet port in each server; connect via switch.
Management Efficiency

Add network connectivity (NICs)
  Virtual I/O: 1. Create a vNIC in each server. Time: 5 minutes
  Physical I/O: 1. Add an Ethernet port to every server. 2. Add cabling. 3. Add switch ports if needed. Time: Days

Add FC storage connectivity (HBAs)
  Virtual I/O: 1. Create a vHBA in each server. Time: 5 minutes
  Physical I/O: 1. Add an FC HBA card to every server. 2. Add cabling. 3. Add FC switch ports if needed. Time: Days

Add a server
  Virtual I/O: 1. Install 1 or 2 host cards in the server. 2. Rack the server. 3. Connect 1 or 2 cables. 4. Deploy vNICs and vHBAs. Time: 2-4 hours
  Physical I/O: 1. Install 2 to 6 host cards in the server. 2. Rack the server. 3. Add Ethernet and FC switches if needed. 4. Connect 6 to 16 cables. 5. Individually configure NICs and HBAs. Time: 8 hours or more

Failover a VM instance from one server to another
  Virtual I/O: 1. Migrate connectivity. 2. Restart the new server. VMs come up automatically. Time: 10 minutes
  Physical I/O: 1. Add NICs to the new server, if needed. 2. Configure NICs. 3. Remap networks for the new MAC addresses. 4. Add HBAs to the new server, if needed. 5. Configure HBAs. 6. Remap networks to reflect the new WWNs. 7. Restart the server. Time: 4 hours
Xsigo I/O Virtualization Solutions

Xsigo Systems technology delivers on the promise of I/O virtualization. The Xsigo I/O Director(TM) is an enterprise-class, hardware-based solution that provides LAN and SAN connectivity for up to hundreds of servers. Under software control, any server may be configured with access to any LAN or SAN. Because vNICs and vHBAs can be configured on the fly without a server reboot, the server's network connectivity is truly flexible and reconfigurable.

Xsigo technology includes:

- The Xsigo I/O Director chassis: Includes 24 InfiniBand ports (10-20Gb), 15 I/O module slots, and fully redundant power and cooling. Directly connects up to 24 servers, or hundreds of servers via expansion switches.
- I/O modules: Provide connectivity to LAN and SAN. Options include 1Gb and 10Gb Ethernet, and 4Gb FC.
- XMS Management System: Provides central management of all I/O resources via CLI, GUI, or an XML-based API.

[Figure: The Xsigo VP780 I/O Director.]

System Management

The Xsigo Management System (XMS) is a powerful, easy-to-use, browser-based interface. Wizards guide the user through the process of creating server I/O profiles, individual virtual resources, or templates. XMS also provides reporting views that enable an administrator to see all of the servers, virtual NICs, and virtual HBAs that are currently configured. Traffic monitoring tools also enable administrators to understand how resources are being used. All I/O administration can be handled from the Xsigo Management System, without entering the data center.
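The XML-based API mentioned above makes scripted provisioning possible. As a feel for what that might look like, here is a hypothetical Python sketch; the endpoint, XML schema, and element names are invented for illustration and are not Xsigo's documented interface.

```python
# Hypothetical example of driving an XML-based management API from a
# script. The URL and XML schema below are invented for illustration.
import urllib.request

request_xml = """<?xml version="1.0"?>
<request>
  <create-vnic>
    <name>web1-eth0</name>
    <server-profile>web1</server-profile>
    <qos><cir>1G</cir><pir>4G</pir></qos>
  </create-vnic>
</request>"""

req = urllib.request.Request(
    "https://xms.example.com/api",           # illustrative endpoint
    data=request_xml.encode(),
    headers={"Content-Type": "application/xml"},
)
with urllib.request.urlopen(req) as resp:    # would return an XML status
    print(resp.read().decode())
```

The same operation would be available interactively through the CLI or the browser-based GUI; the API exists so that provisioning can be folded into larger automation workflows.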
In addition to running as a stand-alone system, XMS has been integrated with the VMware Infrastructure Client. This integration enables complete management of virtual machines and virtual I/O from a single user interface.

Traffic Engineering

All I/O modules enable fine-grained Quality of Service (QoS), including Committed Information Rate (CIR) and Peak Information Rate (PIR) settings that are configurable for each virtual resource. The 10Gb Ethernet module also allows QoS parameters, access controls, and classifiers to be configured per flow based on the type of traffic.

Fast Server-to-Server Communications

Server-to-server communications benefit from the high-speed, low-latency InfiniBand fabric. By sending information from one vNIC to another, servers move traffic at nearly 5Gb per second, with a latency of less than 30 microseconds, using standard TCP/IP protocols. This performance-enhancing feature for Ethernet-based applications requires no modifications to the applications themselves or to the TCP/IP stack.

InfiniBand protocols are supported as well. For high-performance computing applications, the Xsigo I/O Director supports MPI with a latency of 3.5 microseconds for 0-byte messages and a bandwidth of 846MB per second for 4KB messages.

High-Availability Ethernet Connectivity

The Xsigo I/O Director supports a layer-2 high-availability (HA) solution that binds together two vNICs (primary and secondary) through an HA group name and MAC address. The primary vNIC is the active, online vNIC that carries traffic. The secondary vNIC is a live standby that takes over transmitting and receiving traffic if the primary vNIC fails (a sketch of this behavior follows at the end of this section).

The primary application of Xsigo HA is to protect against system-level failures by configuring the vNIC pair across two Xsigo I/O Directors. However, it is also possible to configure HA to protect against the following failures:

- Module-level failures: The vNIC pair can be configured on two different modules within the same Xsigo I/O Director chassis. If one module fails, the other takes over and carries traffic.
- Port-level failures: The two vNICs may be configured on the same module. If one connection fails, the other takes over and carries traffic.

Management Highlights
- Maximum vNICs per server: 32
- Maximum vHBAs per server: 32
- No server reboot needed when deploying vNICs and vHBAs
- QoS is configurable per vNIC and vHBA
- Hardware-enforced QoS
- All I/O is portable among servers

I/O Director Highlights
- Bandwidth to each server: 10-20Gb/s
- InfiniBand ports: 24
- Maximum number of servers: No limit
- Total aggregate bandwidth: 780Gb/s
- I/O module slots: 15
- I/O module options: 4 x 1Gb Ethernet; 1 x 10Gb Ethernet; 2 x 4Gb FC; 1 x 10Gb InfiniBand
- Serviceable design: hot-swappable, redundant fans and power supplies; hot-swappable I/O modules; field-replaceable InfiniBand fabric board; passive mid-plane

High-Availability Fibre Channel Connectivity

Xsigo virtual I/O supports multi-pathing software such as PowerPath. For high availability, two virtual HBAs are deployed in each server, and each server is connected to two I/O Directors.
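The primary/secondary vNIC behavior referenced above can be modeled in a few lines. This is an illustrative Python sketch of the semantics only; real failover is handled by the I/O Director and host driver, and the class and port names here are invented.

```python
# Illustrative model of an HA vNIC pair: one shared identity (HA group
# name + MAC) carried by a primary vNIC with a live standby secondary.

class HaVnicPair:
    def __init__(self, group: str, mac: str, primary: str, secondary: str):
        self.group, self.mac = group, mac         # shared identity
        self.primary, self.secondary = primary, secondary
        self.active = primary                      # primary carries traffic

    def link_failed(self, vnic: str) -> None:
        """If the active vNIC fails, the standby takes over transparently:
        the MAC is unchanged, so upstream switches simply relearn the port."""
        if vnic == self.active:
            self.active = (self.secondary if self.active == self.primary
                           else self.primary)

pair = HaVnicPair("ha-web1", "00:13:97:aa:bb:01",
                  primary="director1/slot4/port1",    # system-level HA:
                  secondary="director2/slot4/port1")  # pair spans 2 directors
pair.link_failed(pair.active)
print(pair.active)   # -> director2/slot4/port1
```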
Server Management

Managing servers connected to a Xsigo I/O Director is nearly identical to managing servers using traditional NICs and HBAs. Virtual NICs each have a permanent MAC address and are assigned an IP address, either directly or by DHCP. The virtual NIC interface appears to the operating system the same way a traditional NIC does. Virtual HBAs are configured for a Fibre Channel network and storage devices, just as with conventional HBAs. In either case, interface configurations may be verified using standard commands (ifconfig in Linux or ipconfig in Windows), as the scripted example at the end of this section illustrates.

The only Ethernet interface that cannot be virtualized by the Xsigo I/O Director is the IPMI or out-of-band management network. However, a traditional Ethernet management network can be used side-by-side with Xsigo's virtualized devices.

Server load balancers and other network appliances work seamlessly with virtual NICs: they see the virtual NICs on the servers exactly like traditional NICs. The Xsigo system is transparent both to network appliances looking from the network toward the server and to servers looking toward the network.

Xsigo Connectivity

Two options are available for connecting servers to the Xsigo I/O Director:

- Copper: InfiniBand copper cables are available in lengths up to 30m. The cables are round and slightly thicker than a standard Cat5 Ethernet cable.
- Optical: InfiniBand optical cables are available in lengths up to 100m.

Deploying Virtual I/O

Virtual I/O is easier to deploy than physical I/O. There is less cabling, there are fewer cards, and all configurations are managed from a single console. Wizards guide the deployment of virtual NICs and HBAs.

1. Install cards and cabling: Each server is configured with one Host Channel Adapter (HCA) card, or two cards for redundancy. These cards are off-the-shelf devices designed for PCI-Express buses. Cards are connected to the I/O Director via an InfiniBand cable.
2. Install the driver: As with any NIC or HBA, virtual I/O is installed on the server via a driver.
3. Connect networks and storage: Ethernet routers and SAN directors are connected to the I/O Director via standard Ethernet and FC connections.
4. Deploy virtual NICs and HBAs: All virtual I/O is managed from a single screen. Wizards permit virtual NICs and HBAs to be deployed on the fly, without server reboots.
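As noted under Server Management above, a vNIC responds to the same tooling as a physical NIC, so verification can be scripted with standard OS commands. A quick Linux example; "eth2" is an assumed vNIC interface name:

```python
# Verify that a vNIC appears to the OS like any physical NIC (Linux).
import subprocess

result = subprocess.run(["ifconfig", "eth2"], capture_output=True, text=True)
print(result.stdout)   # shows the MAC and IP exactly as for a physical NIC
```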
Large-Scale Deployments

The Xsigo I/O Director supports large-scale deployments via an expansion switch. With 780Gb of aggregate bandwidth, a single I/O Director can accommodate the I/O needs of hundreds of servers. The Xsigo IS24 Expansion Switch(TM) provides connectivity for those servers by enabling fan-out from the 24 ports found on the I/O Director. Each expansion switch provides 24 additional ports, one or more of which provide a link to the I/O Director. When multiple links are used, the system automatically aggregates them to fully capitalize on the available bandwidth. The remaining expansion switch ports are then connected to servers.

For example, a fully redundant configuration would include (see the arithmetic check at the end of this section):

- 120 servers
- Two I/O Directors
- 12 IS24 Expansion Switches
- Four links from each switch to the I/O Director (40 or 80Gb aggregate bandwidth)

[Figure: 120-server deployment with virtual I/O.]

High Availability with Virtual I/O

With virtual I/O, redundant connectivity is achieved exactly as it is with conventional I/O: using redundant devices. Each server is configured with two HCA cards and two cables, connected to two I/O Directors. Virtual NICs and HBAs can be deployed in HA pairs and configured for automatic failover. Storage software such as PowerPath works with virtual HBAs exactly as it does with physical ones. The simplicity of establishing HA connections with virtual I/O allows HA to be employed across all connectivity, a significant benefit for application availability.
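The example configuration is consistent with the port counts quoted above; a quick check, assuming each server connects once into each director's fabric:

```python
# Sanity check of the example fan-out, using the figures quoted above.
servers = 120
directors = 2                  # fully redundant: two independent fabrics
ports_per_switch = 24          # IS24 expansion switch
uplinks_per_switch = 4         # links from each switch to an I/O Director

server_ports = ports_per_switch - uplinks_per_switch   # 20 per switch
switches_per_fabric = -(-servers // server_ports)      # ceil(120/20) = 6
print(switches_per_fabric * directors)                 # 12 switches total

# Uplink bandwidth per switch: 4 links at 10-20Gb/s each.
print(uplinks_per_switch * 10, "to", uplinks_per_switch * 20, "Gb aggregate")
```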
Cost Savings

I/O virtualization eliminates up to 70% of the components associated with server I/O. This reduction in complexity typically results in these savings:

- I/O capital costs: 35-50% savings vs. traditional I/O for servers that are SAN-attached. Savings include 70% fewer I/O cards, cables, and FC and Ethernet switch ports.
- Server capital costs: Savings can exceed $5,000 per server because 1U or 2U servers can be deployed rather than 4U servers. Savings include reduced server, I/O, and space costs.
- Management expense: Centralized, remote I/O management means 90% less time is required for routine management tasks.
- Power: 70% fewer I/O cards and switch ports mean 40 watts can be saved per server, or 80 watts per server including power conversion and cooling.

For a 120-server deployment, the savings can amount to over $1,000,000 in three years. Furthermore, because capital costs are lower than with traditional I/O, the payback in new installations is immediate.

Operating Expenses (per server, per year)
  I/O management cost: Traditional I/O $750; Virtual I/O $100
  Power cost (at $0.10 per kWh): Traditional I/O $140; Virtual I/O $70
  Total annual expenses per server: Traditional I/O $890; Virtual I/O $170

Total Expenses
  I/O capital cost, per server: Traditional I/O $4,500; Virtual I/O $3,000
  Server capital cost (4U server vs. 2U), per server: Traditional I/O $10,000; Virtual I/O $5,000
  Operating expenses over 3 years, per server: Traditional I/O $2,670; Virtual I/O $510
  Total 3-year expense: Traditional I/O $17,170; Virtual I/O $8,510
  Savings per server: $8,660
  Savings per 120 servers: $1,039,200
  % savings: 50%
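The three-year totals reconcile with the table's line items; a quick arithmetic check:

```python
# The 3-year totals in the table follow directly from its line items.
traditional = 4500 + 10000 + 3 * (750 + 140)   # I/O + server + 3 yr opex
virtual     = 3000 +  5000 + 3 * (100 +  70)
print(traditional, virtual)                    # 17170 8510
savings = traditional - virtual
print(savings, savings * 120)                  # 8660 1039200
print(round(100 * savings / traditional))      # 50 (% savings)
```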
Summary

Server virtualization imposes new demands on server I/O. Servers now require increased bandwidth, more connections to more networks, and greater management flexibility. I/O virtualization delivers on all of these requirements, at reduced capital cost.

The concept behind server virtualization is simple: take a single resource and dynamically allocate it to multiple uses. In the process, costs fall and efficiency rises. I/O virtualization does the same for I/O resources. Beyond the cost and efficiency benefits, the resulting environment achieves the same on-demand, highly mobile, easily scalable management model found with server virtualization. These two technologies, in conjunction with storage virtualization, finally deliver on the promise of the utility data center.
Headquarters: Xsigo Systems, Inc., 70 West Plumeria Drive, San Jose, CA, USA
Japan: Xsigo Systems Japan K.K., Level 7 Wakamatsu Building, Nihonbashi Honcho, Chuo-ku, Tokyo, Japan
United Kingdom: Xsigo Systems, Ltd., No. 1 Poultry, London, EC2R 8JR, United Kingdom
Germany: Gartenstrasse 79, Freising, Germany
Australia: Willoughby Road, Crows Nest, Sydney, 2065, Australia

Xsigo Systems, Inc. All rights reserved. Specifications are subject to change without notice. Xsigo is a trademark of Xsigo Systems, Inc. in the U.S. and other countries. WP-0108