White Paper

10 Gigabit Ethernet Virtual Data Center Architectures

Introduction

Consolidation of data center resources offers an opportunity for architectural transformation based on the use of scalable, high-density, high-availability technology solutions, such as high port-density 10 GbE switch/routers, cluster and grid computing, blade or rack servers, and network attached storage. Consolidation also opens doors for virtualization of applications, servers, storage, and networks. This suite of highly complementary technologies has now matured to the point where mainstream adoption in large data centers has been occurring for some time. According to a recent Yankee Group survey of both large and smaller enterprises, 62% of respondents already have a server virtualization solution at least partially in place, while another 21% plan to deploy the technology over the next 12 months.

A consolidated and virtualized 10 GbE data center offers numerous benefits:

- Lower OPEX/CAPEX and TCO through reduced complexity, reductions in the number of physical servers and switches, improved lifecycle management, and better human and capital resource utilization
- Increased adaptability of the network to meet changing business requirements
- Reduced requirements for space, power, cooling, and cabling. For example, in power/cooling (P/C) alone, the following savings are possible:
  - Server consolidation via virtualization: up to 50-60% of server P/C
  - Server consolidation via blade or rack servers: up to an additional 20-30% of server P/C
  - Switch consolidation with high-density switching: up to 50% of switch P/C
- Improved business continuance and compliance with regulatory security standards

The virtualized 10 GbE data center also provides the foundation for a service oriented architecture (SOA).
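The compounded effect of the power/cooling savings listed above can be illustrated with a short calculation. This is a hypothetical sketch using the midpoints of the quoted ranges; the specific numbers are assumptions for the sake of the arithmetic, not measured values.

```python
# Compounded power/cooling (P/C) savings from the ranges quoted above,
# using midpoints (~55% from virtualization, ~25% more from blade/rack
# consolidation, ~50% from switch consolidation). Illustrative only.

def remaining_fraction(savings_steps):
    """Fraction of the original P/C load left after applying each saving in turn."""
    frac = 1.0
    for s in savings_steps:
        frac *= (1.0 - s)
    return frac

server_left = remaining_fraction([0.55, 0.25])  # 0.45 * 0.75 = 0.3375
switch_left = remaining_fraction([0.50])

print(f"Server P/C remaining: {server_left:.1%}")  # ~33.8% of original
print(f"Switch P/C remaining: {switch_left:.1%}")  # 50.0% of original
```

In other words, the two server-level steps compound: taking the midpoints, only about a third of the original server power/cooling load remains.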
From an application perspective, SOA is a virtual application architecture where the application is comprised of a set of component services (e.g., implemented with web services) that may be distributed throughout the data center or across multiple data centers. SOA's emphasis on application modularity and re-use of application component modules enables enterprises to readily create high-level application services that encapsulate existing business processes and functions, or address new business requirements. From an infrastructure perspective, SOA is a resource architecture where applications and services draw on a shared pool of resources rather than having physical resources rigidly dedicated to specific applications. The application and infrastructure aspects of SOA are highly complementary. In terms of applications, SOA offers a methodology to dramatically increase productivity in application creation/modification, while the SOA-enabled infrastructure, embodied by the 10 GbE virtual data center, dramatically improves the flexibility, productivity, and manageability of delivering application results to end users by drawing on a shared pool of virtualized computing, storage, and networking resources.

This document provides guidance in designing consolidated, virtualized, and SOA-enabled data centers based on the ultra high port-density 10 GbE switch/router products of Force10 Networks in conjunction with other specialized hardware and software components provided by Force10 technology partners, including those offering:

- Server virtualization and server management software
- iSCSI storage area networks
- GbE and 10 GbE server NICs featuring I/O virtualization and protocol acceleration
- Application delivery switching, load balancers, and firewalls

2007 FORCE10 NETWORKS, INC.
The Foundation for a Service Oriented Architecture

Over the last several years, data center managers have had to deal with the problem of server sprawl to meet the demand for application capacity. As a result, the prevalent legacy enterprise data center architecture has evolved as a multi-tier structure patterned after high-volume websites. Servers are organized into three separate tiers of the data center network comprised of web or front-end servers, application servers, and database/back-end servers, as shown in Figure 1. This architecture has been widely adapted to enterprise applications, such as ERP and CRM, that support web-based user access.

Multiple tiers of physically segregated servers as shown in Figure 1 are frequently employed because a single tier of aggregation and access switches may lack the scalability to provide the connectivity and aggregate performance needed to support large numbers of servers. The ladder structure of the network shown in Figure 1 also minimizes the traffic load on the data center core switches because it isolates intra-tier traffic, web-to-application traffic, and application-to-database traffic from the data center core.

While this legacy architecture has performed fairly well, it has some significant drawbacks. The physical segregation of the tiers requires a large number of devices, including three sets of Layer 2 access switches, three sets of Layer 2/Layer 3 aggregation switches, and three sets of appliances such as load balancers, firewalls, IDS/IPS devices, and SSL offload devices that are not shown in the figure. The proliferation of devices is further exacerbated by dedicating a separate data center module similar to that shown in Figure 1 to each enterprise application, with each server running a single application or application component. This physical application/server segregation typically results in servers that are, on average, only 20% utilized.
This wastes 80% of server capital investment and support costs. As a result, the inefficiency of dedicated physical resources per application is the driving force behind on-going efforts to virtualize the data center.

Figure 1. Legacy three-tier data center architecture

The overall complexity of the legacy design has a number of undesirable side-effects:

- The infrastructure is difficult to manage, especially when additional applications or application capacity is required
- Optimizing performance requires fairly complex traffic engineering to ensure that traffic flows follow predictable paths
- When load balancers, firewalls, and other appliances are integrated within the aggregation switch/router to reduce box count, it may be necessary to use active-passive redundancy configurations rather than the more efficient active-active redundancy more readily achieved with stand-alone appliances. Designs calling for active-passive redundancy for appliances and switches in the aggregation layer require twice as much throughput capacity as active-active redundancy designs
- The total cost of ownership (TCO) is high due to low resource utilization levels combined with the impact of complexity on downtime and on the requirements for power, cooling, space, and management time
Design Principles for the Next Generation Virtual Data Centers

The Force10 Networks approach to next generation data center designs is to build on the legacy architecture's concept of modularity, but to greatly simplify the network while significantly improving its efficiency, scalability, reliability, and flexibility, resulting in much lower total cost of ownership. This is accomplished by consolidating and virtualizing the network, computing, and storage resources, resulting in an SOA-enabled data center infrastructure. Following are the key principles of data center consolidation and virtualization upon which the Virtual Data Center Architecture is based:

POD Modularity: A POD (point of delivery) is a group of compute, storage, network, and application software components that work together to deliver a service or application. The POD is a repeatable construct, and its components must be consolidated and virtualized to maximize the modularity, scalability, and manageability of data centers. Depending on the architectural model for applications, a POD may deliver a high-level application service or it may provide a single component of an SOA application, such as a web front end or database service. Although the POD modules share a common architecture, they can be customized to support a tiered services model. For example, the security, resiliency/availability, and QoS capabilities of an individual POD can be adjusted to meet the service level requirements of the specific application or service that it delivers. Thus, an e-commerce POD would be adapted to deliver higher levels of security/availability/QoS than those suitable for lower-tier applications.

Server Consolidation and Virtualization: Server virtualization based on virtual machine (VM) technology, such as VMware ESX Server, allows numerous virtual servers to run on a single physical server, as shown in Figure 2.

Figure 2.
Simplified view of virtual machine technology

Virtualization provides the stability of running a single application per (virtual) server, while greatly reducing the number of physical servers required and improving utilization of server resources. VM technology also greatly facilitates the mobility of applications among virtual servers and the provisioning of additional server resources to satisfy fluctuations in demand for critical applications. Server virtualization and cluster computing are highly complementary technologies for fully exploiting emerging multi-core CPU microprocessors. VM technology provides robustness in running multiple applications per core, plus facilitating mobility of applications across VMs and cores. Cluster computing middleware allows multiple VMs or multiple cores to collaborate in the execution of a single application. For example, VMware Virtual SMP enables a single virtual machine to span multiple physical cores, virtualizing processor-intensive enterprise applications such as ERP and CRM. The VMware Virtual Machine File System (VMFS) is a high-performance cluster file system that allows clustering of virtual machines spanning multiple physical servers. By 2010, the number of cores per server CPU is projected to increase significantly, with network I/O requirements in the 100 Gbps range. Since most near-term growth in chip-based CPU performance will come from higher core count rather than increased clock rate, data centers requiring higher application performance will need to place increasing emphasis on technologies such as cluster computing and Virtual SMP.

NIC Virtualization: With numerous VMs per physical server, network virtualization has to be extended to the server and its network interface. Each VM is configured with a virtual NIC that shares the resources of the server's array of real NICs.
This level of virtualization, together with a virtual switch capability providing inter-VM switching on a physical server, is provided by VMware Infrastructure software. Higher performance I/O virtualization is possible using intelligent NICs that provide hardware support for I/O virtualization, offloading the processing of protocol stacks, virtual NICs, and virtual switching from the server CPUs. NICs that support I/O virtualization as well as protocol offload (e.g., TCP/IP, RDMA, iSCSI) are available from Force10 technology partners including NetXen, Neterion, Chelsio, NetEffect, and various server vendors. Benchmark results have shown that protocol offload NICs can dramatically improve network throughput and latency for both data applications (e.g., HPC, clustered databases, and web servers) and network storage access (NAS and iSCSI SANs).
Network Consolidation and Virtualization: Highly scalable and resilient 10 Gigabit Ethernet switch/routers, exemplified by the Force10 E-Series, provide the opportunity to greatly simplify the network design of the POD module, as well as the data center core. Leveraging VLAN technology together with the E-Series scalability and resiliency allows the distinct aggregation and access layers of the legacy data center design to be collapsed into a single aggregation/access layer of switch/routing, as shown in Figure 3. The integrated aggregation/access switch becomes the basic network switching element upon which a POD is built. The benefits of a single layer of switch/routing within the POD include reduced switch count, simplified traffic flow patterns, elimination of Layer 2 loops and STP scalability issues, and improved overall reliability. The ultra high density, reliability, and performance of the E-Series switch/router maximize the scalability of the design model both within PODs and across the data center core. The scalability of the E-Series often enables network consolidations with a >3:1 reduction in the number of data center switches. This high reduction factor is due to the combination of the following factors:

- Elimination of the access switching layer
- More servers per POD aggregation switch, resulting in fewer aggregation switches
- More POD aggregation switches per core switch, resulting in fewer core switches

Storage Resource Consolidation and Virtualization: Storage resources accessible over the Ethernet/IP data network further simplify the data center LAN by minimizing the number of separate switching fabrics that must be deployed and managed. 10 GbE switching in the POD provides ample bandwidth for accessing unified NAS/iSCSI IP storage devices, especially when compared to the bandwidth available for Fibre Channel SANs.
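The >3:1 switch-count reduction cited above follows from multiplying the individual consolidation factors. The sketch below is a hypothetical back-of-the-envelope model; the server count and per-switch port densities are illustrative assumptions, not Force10 product specifications.

```python
# Back-of-the-envelope comparison of switch counts: a legacy two-layer
# (access + aggregation) design vs. a single high-density
# aggregation/access layer. All numbers are illustrative assumptions.
import math

def switches_needed(ports_required, ports_per_switch, redundant_pairs=True):
    """Switches needed to terminate a given number of ports (doubled for redundancy)."""
    n = math.ceil(ports_required / ports_per_switch)
    return n * 2 if redundant_pairs else n

servers = 960

# Legacy: 48-port access switches, each with 2 uplinks into 24-port aggregation switches
legacy_access = switches_needed(servers, ports_per_switch=48)
legacy_agg = switches_needed(legacy_access * 2, ports_per_switch=24)
legacy_total = legacy_access + legacy_agg

# Consolidated: one layer of high-density chassis (assume ~600 usable server ports each)
consolidated_total = switches_needed(servers, ports_per_switch=600)

print(legacy_total, consolidated_total, legacy_total / consolidated_total)
```

Under these assumptions the legacy design needs 48 switches versus 4 consolidated chassis, comfortably exceeding the 3:1 figure; real-world ratios depend on the actual port densities and oversubscription chosen.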
Consolidated, shared, and virtualized storage also facilitates VM-based application provisioning and mobility since each physical server has shared access to the necessary virtual machine images and required application data. The VMFS provides multiple VMware ESX Servers with concurrent read-write access to the same virtual machine storage. The cluster file system thus enables live migration of running virtual machines from one physical server to another, automatic restart of failed virtual machines on a different physical server, and the clustering of virtual machines.

Global Virtualization: Virtualization should not be constrained to the confines of the POD, but should be capable of being extended to support a pool of shared resources spanning not only a single POD, but also multiple PODs, the entire data center, or even multiple data centers. Virtualization of the infrastructure allows the PODs to be readily adapted to an SOA application model where the resource pool is called upon to respond rapidly to changes in demand for services and to new services being installed on the network.

Figure 3. Consolidation of data center aggregation and access layers
Ultra Resiliency/Reliability: As data centers are consolidated and virtualized, resiliency and reliability become even more critical aspects of the network design. This is because the impact of a failed physical resource is now more likely to extend to multiple applications and larger numbers of user flows. Therefore, the virtual data center requires the combination of ultra high resiliency devices, such as the E-Series switch/routers, and an end-to-end network design that takes maximum advantage of active-active redundancy configurations, with rapid fail-over to standby resources.

Security: Consolidation and virtualization also place increased emphasis on data center network security. With virtualization, application or administrative domains may share a pool of common resources, creating the requirement that the logical segregation among virtual resources be even stronger than the physical segregation featured in the legacy data center architecture. This level of segregation is achieved by having multiple levels of security at the logical boundaries of the resources being protected within the PODs and throughout the data center.
In the virtual data center, security is provided by:

- Full virtual machine isolation to prevent ill-behaved or compromised applications from impacting any other virtual machine/application in the environment
- Application and control VLANs to provide traffic segregation
- Wire-rate switch/router ACLs applied to intra-POD and inter-POD traffic
- Stateful virtual firewall capability that can be customized to specific application requirements within the POD
- Security-aware appliances for load balancing and other traffic management and acceleration functions
- IDS/IPS appliance functionality at full wire-rate for real-time protection of critical POD resources from both known intrusion methodologies and day-one attacks
- AAA for controlled user access to the network and network devices to enforce policies defining user authentication and authorization profiles

Figure 4 provides an overview of the architecture of a consolidated data center based on 10 Gigabit Ethernet switch/routers providing an integrated layer of aggregation and access switching, with Layer 4-Layer 7 services being provided by stand-alone appliances. The consolidation of the data center network simplifies deployment of the virtualization technologies that will be described in more detail in subsequent sections of this document. Overall data center scalability is addressed by configuring multiple PODs connected to a common set of data center core switches to meet application/service capacity, organizational, and policy requirements. In addition to server connectivity, the basic network design of the POD can be utilized to provide other services on the network, such as ISP connectivity, WAN access, etc. Within an application POD, multiple servers running the same application are placed in the same application VLAN, with appropriate load balancing and security services provided by the appliances.
Enterprise applications, such as ERP, that are based on distinct, segregated sets of web, application, and database servers can be implemented within a single tier of scalable L2/L3 switching using server clustering and distinct VLANs for segregation of web servers, application servers, and database servers. Alternatively, where greater scalability is required, the application could be distributed across a web server POD, an application server POD, and a database POD. Further simplification of the design is achieved using IP/Ethernet storage attachment technologies, such as NAS and iSCSI, with each application's storage resources incorporated within the application-specific VLAN.

Figure 4. Reference design for the virtual data center
This section of the document focuses on the various design aspects of the consolidated and virtualized data center POD module.

Network Interface Controller (NIC) Teaming

As noted earlier, physical and virtual servers dedicated to a specific application are placed in a VLAN reserved for that application. This simplifies the logical design of the network and satisfies the requirement of many clustered applications for Layer 2 adjacency among nodes participating in the cluster. In order to avoid single points of failure (SPOF) in the access portion of the network, NIC teaming is recommended to allow each physical server to be connected to two different aggregation/access switches. For example, a server with two teamed NICs, sharing a common IP address and MAC address, can be connected to both POD switches as shown in Figure 5. The primary NIC is in the active state, and the secondary NIC is in standby mode, ready to be activated in the event of failure in the primary path to the POD.

NIC Virtualization

When server virtualization is deployed, a number of VMs generally share a physical NIC. Where the VMs are spread across multiple applications, the physical NIC needs to support traffic for multiple VLANs. An elegant solution for multiple VMs and VLANs sharing a physical NIC is provided by VMware ESX Server Virtual Switch Tagging (VST). As shown in Figure 6, each VM's virtual NICs are attached to a port group on the ESX Server Virtual Switch that corresponds to the VLAN associated with the VM's application. The virtual switch then adds 802.1Q VLAN tags to all outbound frames, extending 802.1Q trunking to the server and allowing multiple VMs to share a single physical NIC. NIC teaming can also be used for bonding several GbE NICs to form a higher speed link aggregation group (LAG) connected to one of the POD switches.
As 10 GbE interfaces continue to ride the volume/cost curve, GbE NIC teaming will become a relatively less cost-effective means of increasing bandwidth per server.

Figure 5. NIC teaming for data center servers

Figure 6. VMware virtual switch tagging with NIC teaming

The overall benefits of NIC teaming and I/O virtualization can be combined with VMware Infrastructure's ESX Server V3.0 by configuring multiple virtual NICs per VM and multiple real NICs per physical server. ESX Server V3.0 NIC teaming supports a variety of fault tolerant and load sharing operational modes in addition to the simple primary/secondary teaming model described at the beginning of this section. Figure 7 shows how VST, together with simple primary/secondary NIC teaming, supports red and green VLANs while eliminating SPOFs in a POD employing server virtualization.
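The VST behavior described above amounts to inserting a 4-byte 802.1Q tag into each outbound Ethernet frame. The sketch below shows the standard tag layout (TPID 0x8100 followed by a 16-bit field carrying priority and VLAN ID); the frame contents used here are illustrative placeholders.

```python
# Minimal sketch of 802.1Q VLAN tagging as performed by a virtual switch:
# a 4-byte tag is inserted after the destination and source MAC addresses.
import struct

TPID = 0x8100  # 802.1Q Tag Protocol Identifier

def tag_frame(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert an 802.1Q tag after the 12 bytes of destination + source MAC."""
    assert 0 < vlan_id < 4095, "valid VLAN IDs are 1-4094"
    tci = (priority << 13) | vlan_id  # priority (3 bits) + CFI (1 bit, 0) + VLAN ID (12 bits)
    tag = struct.pack("!HH", TPID, tci)
    return frame[:12] + tag + frame[12:]

# Untagged frame: dst MAC, src MAC, EtherType 0x0800 (IPv4), dummy payload
untagged = bytes(6) + bytes(6) + struct.pack("!H", 0x0800) + b"payload"
tagged = tag_frame(untagged, vlan_id=100)
assert len(tagged) == len(untagged) + 4
```

The POD switch port receiving these frames is configured as an 802.1Q trunk, so each frame is switched within the VLAN of the VM that emitted it.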
Figure 7. VMware virtual switch tagging with NIC teaming

As noted earlier, for improved server and I/O performance, the virtualization of NICs, virtual switching, and VLAN tagging can be offloaded to intelligent Ethernet adapters that provide hardware support for protocol processing, virtual networking, and virtual I/O.

Layer 2 Aggregation/Access Switching

With a collapsed aggregation/access layer of switching, the Layer 2 topology of the POD is extremely simple, with servers in each application VLAN evenly distributed across the two POD switches. This distributes the traffic across the POD switches, which form an active-active redundant pair. The Layer 2 topology is free from loops for intra-POD traffic. Nevertheless, for extra robustness, it is recommended that application VLANs be protected from loops that could be formed by configuration errors or other faults, using standard practices for MSTP/RSTP.

Layer 3 Aggregation/Access Switching

Figure 8 shows the logical flow of application traffic through a POD. Web traffic from the Internet is routed in the following way:

1. Internet flows are routed with OSPF from the core to a VLAN/security zone for untrusted traffic based on public, virtual IP addresses (VIPs).
2. Load balancers (LBs) route the traffic to another untrusted VLAN, balancing the traffic based on the private, real IP addresses of the servers. Redundant load balancers are configured with VRRP for gateway redundancy. For load balanced applications, the LBs function as the default virtual gateway.
3. Finally, traffic is routed by firewalls (FWs) to the trusted application VLANs on which the servers reside. The firewalls also use VRRP for gateway redundancy. For applications requiring stateful inspection of flows but no load balancing, the firewalls function as the default virtual gateway.
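Step 2 above, mapping a public VIP to the private real IP addresses of the servers, can be sketched as follows. The addresses and the simple round-robin policy are illustrative assumptions; production load balancers offer many balancing methods, health checks, and session persistence.

```python
# Simplified sketch of VIP-to-real-server mapping in a load balancer.
# Addresses and the round-robin policy are illustrative assumptions.
import itertools

class LoadBalancer:
    def __init__(self, vip, real_ips):
        self.vip = vip
        self._pool = itertools.cycle(real_ips)  # round-robin over the server pool

    def pick_real_server(self, flow_vip):
        """Return the next real server for a flow addressed to our VIP."""
        if flow_vip != self.vip:
            raise ValueError("flow not addressed to this VIP")
        return next(self._pool)

lb = LoadBalancer("203.0.113.10", ["10.1.1.11", "10.1.1.12", "10.1.1.13"])
picks = [lb.pick_real_server("203.0.113.10") for _ in range(4)]
print(picks)  # round-robin: .11, .12, .13, then back to .11
```

Because the LBs hold the VIP and act as the default virtual gateway, VRRP between the redundant pair ensures the VIP remains reachable if one appliance fails.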
Intranet traffic would be routed through a somewhat different set of VLAN security zones based on whether load balancing is needed and the degree of trust placed in the source/destination for that particular application flow. In many cases, Intranet traffic would bypass untrusted security zones, with switch/router ACLs providing ample security to allow Intranet traffic to be routed through the data center core directly from one application VLAN to another without traversing load balancing or firewall appliances. The simplicity of the Layer 2 network makes it feasible for the POD to support large numbers of real and virtual servers, and also makes it feasible to extend application VLANs through the data center core switch/router to other PODs in the data center or even to PODs in other data centers. When VLANs are extended beyond the POD, per-VLAN MSTP/RSTP is required to deal with possible loops in the core of the network. In addition, it may also be desirable to allocate applications to PODs in a manner that minimizes data flows between distinct application VLANs within the POD. This preserves the POD's horizontal bandwidth for intra-VLAN communications between clustered servers and for Ethernet/IP-based storage access.

Figure 8. Logical topology for Internet flows in the POD
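The wire-rate ACL filtering described above for direct VLAN-to-VLAN Intranet traffic can be sketched as a first-match rule lookup over source subnet, destination subnet, and destination port. The subnets, ports, and rules below are illustrative assumptions, not any vendor's configuration syntax.

```python
# First-match ACL evaluation, as a switch/router might apply between two
# application VLANs. Rules and addresses are illustrative assumptions.
from ipaddress import ip_address, ip_network

def acl_lookup(rules, src, dst, dport):
    """Return 'permit' or 'deny' for the first matching rule; default deny."""
    for action, src_net, dst_net, port in rules:
        if (ip_address(src) in ip_network(src_net)
                and ip_address(dst) in ip_network(dst_net)
                and (port is None or port == dport)):
            return action
    return "deny"

rules = [
    ("permit", "10.1.10.0/24", "10.1.20.0/24", 1433),  # web VLAN -> DB VLAN, SQL only
    ("deny",   "10.1.10.0/24", "10.1.20.0/24", None),  # all other web -> DB traffic blocked
]
assert acl_lookup(rules, "10.1.10.5", "10.1.20.7", 1433) == "permit"
assert acl_lookup(rules, "10.1.10.5", "10.1.20.7", 22) == "deny"
```

Because such filtering is stateless, it can run at wire rate in the switch/router; flows needing stateful inspection are instead steered through the firewall zones described earlier.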
In addition to application VLANs, control VLANs are configured to isolate control traffic among the network devices from application traffic. For example, control VLANs carry routing updates among switch/routers. In addition, a redundant pair of load balancers or stateful firewalls would share a control VLAN to permit traffic flows to fail over from the primary to the secondary appliance without loss of state or session continuity. In a typical network design, trunk links carry a combination of traffic for application VLANs and link-specific control VLANs.

From the campus core switches through the data center core switches, there are at least two equal cost routes to the server subnets. This permits the core switches to load balance Layer 3 traffic to each POD switch using OSPF ECMP routing. Where application VLANs are extended beyond the POD, the trunks to and among the data center core switches will carry a combination of Layer 2 and Layer 3 traffic.

Layer 4-7 Aggregation/Access Switching

Because the POD design is based on stand-alone appliances for Layer 4-7 services (including server load balancing, SSL termination/acceleration, VPN termination, and firewalls), data center designers are free to deploy devices with best-in-class functionality and performance that meet the particular application requirements within each POD. For example, Layer 4-7 devices may support a number of advanced features, including:

- Integrated functionality: For example, load balancing, SSL acceleration, and packet filtering functionality may be integrated within a single device, reducing box count while improving the reliability and manageability of the POD
- Device Virtualization: Load balancers and firewalls that support virtualization allow physical device resources to be partitioned into multiple virtual devices, each with its own configuration.
Device virtualization within the POD allows virtual appliances to be devoted to each application, with the configuration corresponding to the optimum device behavior for that application type and its domain of administration.
- Active/Active Redundancy: Virtual appliances also facilitate high availability configurations where pairs of physical devices provide active-active redundancy. For example, a pair of physical firewalls can be configured with one set of virtual firewalls customized to each of the red VLANs and a second set customized for each of the green VLANs. The physical firewall attached to a POD switch would have the red firewalls in an active state and its green firewalls in a standby state. The second physical firewall (connected to the second POD switch) would have the complementary configuration. In the event of an appliance or link failure, all of the active virtual firewalls on the failed device would fail over to the standby virtual firewalls on the remaining device.

Resource Virtualization Within and Across PODs

One of the keys to server virtualization within and across PODs is a server management environment for virtual servers that automates operational procedures and optimizes availability and efficiency in utilization of the resource pool. VMware Virtual Center provides the server management function for VMware Infrastructure, including ESX Server, VMFS, and Virtual SMP. With Virtual Center, virtual machines can be provisioned, configured, started, stopped, deleted, relocated, and remotely accessed. In addition, Virtual Center supports high availability by allowing a virtual machine to automatically fail over to another physical server in the event of host failure. All of these operations are simplified because virtual machines are completely encapsulated in virtual disk files stored centrally using shared NAS or iSCSI SAN storage.
The Virtual Machine File System allows a server resource pool to concurrently access the same files to boot and run virtual machines, effectively virtualizing VM storage. Virtual Center also supports the organization of ESX Servers and their virtual machines into clusters, allowing multiple servers and virtual machines to be managed as a single entity. Virtual machines can be provisioned to a cluster rather than linked to a specific physical host, adding another layer of virtualization to the pool of computing resources. VMware VMotion enables the live migration of running virtual machines from one physical server to another with zero downtime, continuous service availability, complete transaction integrity, and continuity of network connectivity via the appropriate application VLAN. Live migration of virtual machines enables hardware maintenance without scheduling downtime and the resulting disruption of business operations. VMotion also allows virtual machines to be continuously and automatically optimized within resource pools for maximum hardware utilization, flexibility, and availability.
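Conceptually, the live migration described above changes only where a VM runs; everything that defines its network identity (name, IP address, application VLAN) stays fixed, which is why connectivity continues uninterrupted. The following is a minimal conceptual sketch, not VMware's implementation; the host and VM names are hypothetical.

```python
# Conceptual sketch of live migration: only the physical host changes,
# while the VM's identity (name, IP, VLAN) is preserved. Names are
# hypothetical, for illustration only.
from dataclasses import dataclass, replace as dc_replace

@dataclass(frozen=True)
class VM:
    name: str
    ip: str
    vlan: int
    host: str

def migrate(vm: VM, new_host: str) -> VM:
    """Move a VM to another physical server; identity fields are untouched."""
    return dc_replace(vm, host=new_host)

vm = VM("erp-db-01", "10.1.20.5", vlan=120, host="esx-a")
moved = migrate(vm, "esx-b")
assert (moved.name, moved.ip, moved.vlan) == (vm.name, vm.ip, vm.vlan)
```

This invariant is exactly what the extended application VLAN guarantees at the network level: wherever the VM lands, its VLAN (and therefore its subnet and gateway) is already present.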
VMware Distributed Resource Scheduler (DRS) works with VMware Infrastructure to continuously automate the balancing of virtual machine workloads across a cluster in the virtual infrastructure. When guaranteed resource allocation cannot be met on a physical server, DRS will use VMotion to migrate the virtual machine to another host in the cluster that has the needed resources.

Figure 9 shows an example of server resource re-allocation within a POD. In this scenario, a group of virtual and/or physical servers currently participating in cluster A is re-allocated to a second cluster B running another application. Virtual Center and VMotion are used to de-install the cluster A software images from the servers being transferred and then install the required cluster B image, including application, middleware, operating system, and network configuration. As part of the process, the VLAN membership of the transferred servers is changed from VLAN A to VLAN B.

Virtualization of server resources, including VMotion-enabled automated VM failovers and resource re-allocation as described above, can readily be extended across PODs simply by extending the application VLANs across the data center core trunks using 802.1Q VLAN trunking. Therefore, the two clusters shown in Figure 9 could just as well be located in distinct physical PODs. With VLAN extension, a virtual POD can be defined that spans multiple physical PODs. Without this form of POD virtualization, it would be necessary to use patch cabling between physical PODs in order to extend the computing resources available to a given application. Patch cabling among physical PODs is an awkward solution for ad hoc connectivity, especially when the physical PODs are on separate floors of the data center facility. As noted earlier, the simplicity of the POD Layer 2 network makes this VLAN extension feasible without running the risk of STP-related instabilities.
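The DRS behavior described at the start of this section, migrating a VM when its guaranteed allocation can no longer be met locally, can be sketched as a simple greedy rebalancer. Host names and capacity units below are illustrative assumptions; DRS itself uses considerably more sophisticated placement logic.

```python
# Hypothetical greedy rebalancer illustrating the DRS idea: migrate a VM
# only when its current host lacks headroom, choosing the host with the
# most free capacity. Capacity units (e.g., GHz of CPU) are illustrative.

def rebalance(hosts, vm_demand, vm_host):
    """Return the host a VM should run on, migrating only if needed."""
    if hosts[vm_host] >= vm_demand:
        return vm_host  # current host still satisfies the guarantee
    best = max(hosts, key=lambda h: hosts[h])
    if hosts[best] < vm_demand:
        raise RuntimeError("no host in cluster can satisfy the guarantee")
    return best  # VMotion target

# Free capacity per host in the cluster (illustrative)
free = {"esx-a": 1.0, "esx-b": 6.0, "esx-c": 3.5}
assert rebalance(free, vm_demand=2.0, vm_host="esx-a") == "esx-b"  # must migrate
assert rebalance(free, vm_demand=0.5, vm_host="esx-a") == "esx-a"  # stays put
```

With application VLANs extended across PODs as described above, the candidate hosts for such a rebalancing decision need not even reside in the same physical POD.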
With application VLANs and cluster membership extended throughout the data center, the data center trunks carry a combination of Layer 3 and Layer 2 traffic, potentially with multiple VLANs per trunk, as shown in Figure 10. The 10 GbE links between the PODs provide ample bandwidth to support VM clustering, VMotion transfers and failovers, as well as access to shared storage resources.

Figure 9. Re-allocation of server resources within the POD

Figure 10. Multiple VLANs per trunk
Resource Virtualization Across Data Centers

Resource virtualization can also be leveraged among data centers sharing the same virtual architecture. As a result, Virtual Center management of VMotion-based backup and restore operations can provide redundancy and disaster recovery capabilities among enterprise data center sites. This form of global virtualization is based on an N x 10 GbE inter-data center backbone, which carries a combination of Layer 2 and Layer 3 traffic resulting from extending application and control VLANs from the data center cores across the 10 GbE MAN/WAN network, as shown in Figure 11.

Figure 11. Global virtualization

In this scenario, policy routing and other techniques would be employed to keep traffic as local as possible, using remote resources only when local alternatives are not appropriate or not currently available. Redundant Virtual Center server management operations centers ensure the availability and efficient operation of the globally virtualized resource pool even if entire data centers are disrupted by catastrophic events.

Migration from Legacy Data Center Architectures

The best general approach to migrating from a legacy three-tier data center architecture to a virtual data center architecture is to start at the server level and follow a step-by-step procedure replacing access switches, distribution/aggregation switches, and finally data center core switches. One possible blueprint for such a migration is as follows:

1. Select an application for migration. Upgrade and virtualize the application's servers with VMware ESX Server software and NICs as required to support the desired NIC teaming functionality and/or NIC virtualization. Install VMware Virtual Center in the NOC.
2. Replace existing access switches specific to the chosen application with E-Series switch/routers. Establish a VLAN for the application if necessary and configure the E-Series switch to conform to the existing access networking model.
3. Migrate any remaining applications supported by the set of legacy distribution switches in question to E-Series access switches.

4. Transition load balancing and firewall VLAN connectivity to the E-Series, along with OSPF routing among the application VLANs. Existing distribution switches still provide connectivity to the data center core.

5. Introduce new E-Series data center core switch/routers with OSPF and 10 GbE, keeping the existing core routers in place. If necessary, configure OSPF in the old core switches and redistribute routes from OSPF to the legacy routing protocol and vice versa.

6. Remove the set of legacy distribution switches and use the E-Series switches for all aggregation/access functions. At this point, a single virtualized POD has been created.

7. Repeat the process until all applications and servers in the data center have been migrated to integrated PODs. The legacy data center core switches can be removed either before or after full POD migration.
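The mutual redistribution in step 5 of the blueprint above can be sketched as follows. This is an illustrative FTOS-style fragment under stated assumptions: the legacy routing protocol is taken to be RIP, and the process ID, prefixes, and areas are hypothetical; an actual deployment would also apply metrics and route filtering to prevent redistribution loops.

```
! Run OSPF alongside a legacy RIP domain on the old core during
! migration, redistributing routes in both directions
! (FTOS-style syntax; protocol and addresses are assumptions).
router ospf 1
 network 10.1.0.0/16 area 0
 redistribute rip
!
router rip
 network 10.2.0.0
 redistribute ospf 1
```

Once all routes are reachable natively via OSPF on the E-Series core, the redistribution statements and the legacy protocol can be retired.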
Summary

As enterprise data centers move through consolidation phases toward next generation architectures that increasingly leverage virtualization technologies, the importance of very high performance Ethernet switch/routers will continue to grow. Switch/routers that couple ultra high capacity with ultra high reliability and resiliency contribute significantly to the simplicity and attractive TCO of the virtual data center. In particular, the E-Series offers a number of advantages for this emerging architecture:

Smallest footprint per GbE or 10 GbE port, due to the highest port densities

Ultra-high power efficiency, requiring only 4.7 watts per GbE port, simplifying high density configurations and minimizing the growing costs of power and cooling

Ample aggregate bandwidth to support unification of the aggregation and access layers of the data center network, plus unification of the data and storage fabrics

A system architecture providing a future-proof migration path to the next generation of Ethernet consolidation/virtualization/unification at 100 Gbps

Unparalleled system reliability and resiliency, featuring a multi-processor control plane, control plane and switching fabric redundancy, and a modular switch/router operating system (OS) supporting hitless software updates and restarts

A high performance 10 GbE switched data center infrastructure provides the ideal complement for local and global resource virtualization.
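As a rough illustration of the power figures above, the following sketch estimates annual power cost from the cited 4.7 W/GbE-port draw. The port count, cooling overhead, and electricity rate are illustrative assumptions, not vendor data.

```python
# Rough annual power-cost estimate for GbE switch ports, using the
# 4.7 W/port figure cited above. Port count, cooling overhead, and
# electricity rate are illustrative assumptions.

def annual_power_cost(ports, watts_per_port=4.7,
                      cooling_overhead=1.0, usd_per_kwh=0.10):
    """Return (kW drawn, annual kWh, annual USD) for a port count.

    cooling_overhead=1.0 assumes one watt of cooling is spent
    per watt of switch load.
    """
    kw = ports * watts_per_port * (1 + cooling_overhead) / 1000.0
    kwh_per_year = kw * 24 * 365
    return kw, kwh_per_year, kwh_per_year * usd_per_kwh

kw, kwh, usd = annual_power_cost(1000)
print(f"{kw:.1f} kW -> {kwh:,.0f} kWh/yr -> ${usd:,.0f}/yr")
```

At these assumed rates, 1,000 GbE ports draw about 9.4 kW including cooling, which is why per-port efficiency compounds quickly at data center scale.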
The combination of these fundamental technologies as described in this guide provides the basic SOA-enabled modular infrastructure needed to fully support the next wave of SOA application development, in which an application's component services may be transparently distributed throughout the enterprise data center or even among data centers.

Force10 Networks, Inc. 350 Holger Way, San Jose, CA, USA

2007 Force10 Networks, Inc. All rights reserved. Force10 Networks and E-Series are registered trademarks, and Force10, the Force10 logo, P-Series, S-Series, TeraScale and FTOS are trademarks of Force10 Networks, Inc. All other company names are trademarks of their respective holders. Information in this document is subject to change without notice. Certain features may not yet be generally available. Force10 Networks, Inc. assumes no responsibility for any errors that may appear in this document.