AFFINITY-DRIVEN NETWORKING
Implementing a dynamic, non-uniform, SDN-driven network that matches capacity directly to application workload or tenant needs

Plexxi White Paper: Affinity Networking for Data Centers and Clouds

EXECUTIVE SUMMARY

Cloud computing, distributed application frameworks and server virtualization solutions are reshaping data center traffic flows and impacting data center network designs. In addition, Big Data initiatives are pushing the envelope of clustered computing and storage architectures, demanding every ounce of performance to execute real-time decision management and data analytics. Legacy data center networks built to enable conventional client-server applications can't meet the agility and price/performance requirements of today's virtualized computing environments or the raw performance needs of clustered computing. This white paper reviews the impact of these new computing models on the data center and describes a new affinity-driven networking model that is specifically designed to address the distinct needs of today's cooperative applications and on-demand services.

INTRODUCTION: THE DATA CENTER NEEDS A NEW NETWORKING MODEL

Today's world is awash in data. Nearly 2.5 quintillion bytes of data are created every day, and 90% of the data that exists in the world today has been created in the last two years alone. [1] Big Data means big business: Wikibon projects the worldwide Big Data market to grow from $5.1 billion to over $50 billion. [2] Enterprises of every size and type are leveraging advanced computing facility architectures such as cloud computing data centers or clustered computing to boost business agility or support Big Data and other ultra-high performance application requirements. A Morgan Stanley survey of 300 IT decision-makers forecasts the percentage of enterprise workloads running in private cloud or virtualized environments to grow from 32% to 52% over the next three years, and the portion of enterprise workloads running in public clouds to grow from 10% to 22%. [3] And according to IDC, cloud-enabled software will constitute nearly 24% of all new business software purchases and 13% of worldwide software spending. [4]

Virtualization and cloud computing are profoundly impacting the data center. The monolithic computing silos of the past are giving way to virtualized, converged IT environments. As the data center evolves, so too must the data center network. Legacy data center networks originally designed to enable conventional client-server applications are far too complex, costly and rigid for today's cooperative, elastic computing environments. They employ complicated networking protocols and convoluted hierarchies to achieve any-to-any server connectivity, and they rely on oversubscribed network designs that put untenable constraints on workload placement and provide no flexibility for changing workload needs. Simply put, the network has become a barrier to business innovation.

This white paper reviews the impact of cloud computing, virtualization and distributed computing frameworks on the data center and makes the case for a new data center networking model built to satisfy the needs of cooperative applications, on-demand services and dynamic workloads: a model that employs a top-down approach that starts with application requirements and ends in the connectivity needed to achieve those requirements, rather than the other way around.

DATA CENTER TRENDS: FROM SILOS TO CLOUDS

Over the past twenty years, enterprises have built out autonomous islands of IT infrastructure to satisfy the needs of isolated client-server and mainframe applications (see figure 1).
In these stovepipe designs, a unique IT silo, with independent compute, network and storage resources, is dedicated to each business application: ERP, CRM, office productivity suites, etc.

[Figure 1: Conventional stovepipe IT architecture]

Stovepipe data centers are costly and inefficient, with poor resource utilization, complex management and excessive power and cooling requirements. In contrast, today's data centers strive for much greater efficiencies in resource utilization. The goal of the modern data center is to maximize computing power per cubic foot of space, while simultaneously minimizing operating costs: rack space, power and cooling. Once the data center architecture has been put in place, the goal shifts to best matching application workloads with available compute resources so the infrastructure is optimally utilized at all times. Advances in information technology (increasingly powerful multi-core servers, virtualization solutions and distributed computing frameworks) help with this placement task by making compute resources much more flexible and by enabling application components to be more easily distributed across those resources. Legacy disjointed data centers, designed for a fading era of client-server and mainframe computing, are giving way to converged IT architectures that are more capable of enabling today's cooperative computing applications (Hadoop clusters, big data, grid computing, SOA applications, and Web 2.0 mash-ups) and are better suited for enabling cloud-based services.

[Figure 2: Virtualized IT architecture, with ERP, CRM, office and payroll applications running on consolidated, virtualized data center infrastructure]

By pooling compute, network and storage resources into more advanced data centers (see figure 2) where workloads can be more easily applied to available resources, enterprises and cloud providers can contain CAPEX and OPEX, reduce IT sprawl, and improve business and service agility.

THE NETWORK BOTTLENECK

Conventional data center networks built around monolithic computing silos are far too complex, costly and rigid for today's converged IT environments. The modern data center requires a fundamentally new data center networking model.

Most legacy data center networks are based on oversubscribed, hierarchical designs. A typical data center network includes an access tier, [5] an aggregation tier [6] and a core tier (see figure 3). The access tier is made up of low-cost ToR (top-of-rack) Ethernet switches connecting rack or blade servers and IP-based storage devices (typically 100 Mbps or 1 GbE connections). The access switches are connected via Ethernet to a set of more expensive (and power-hungry) aggregation switches (typically 10 GbE connections), which in turn are connected to a layer of core switches which connect the data center to the rest of the enterprise (an intranet) and the outside world (the Internet).

[Figure 3: Conventional hierarchical data center network, with core, aggregation and access tiers]
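To make the oversubscription arithmetic concrete, here is a minimal Python sketch (not from the paper; the port counts and link speeds are assumed, illustrative values for a design like the one in figure 3) that computes how heavily a ToR switch's uplinks are oversubscribed:

```python
# Illustrative sketch: oversubscription at the access tier of a
# three-tier network (hypothetical but typical port counts).

# A ToR switch with 48 server-facing 1 GbE ports and four
# 10 GbE uplinks to the aggregation tier.
server_ports = 48          # downlinks to servers
server_speed_gbps = 1.0    # 1 GbE per server
uplinks = 4                # uplinks to aggregation switches
uplink_speed_gbps = 10.0   # 10 GbE per uplink

downstream_capacity = server_ports * server_speed_gbps   # 48 Gbps
upstream_capacity = uplinks * uplink_speed_gbps          # 40 Gbps

# Oversubscription ratio: how far offered load can exceed uplink
# capacity if every server transmits at line rate at once.
ratio = downstream_capacity / upstream_capacity
print(f"oversubscription {ratio:.1f}:1")                 # -> 1.2:1

# Swap the servers' NICs for 10 GbE and the ratio jumps to 12:1
# unless the uplinks are upgraded as well.
print(f"{(server_ports * 10.0) / upstream_capacity:.0f}:1")
```

The same arithmetic shows why denser, faster servers aggravate the problem: upgrading server NICs multiplies the offered load while the uplink capacity stays fixed.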

[Figure 4: Virtualized IT architectures reshape data center traffic flows, contrasting application placement in a conventional stovepipe IT architecture and a conventional virtualized IT architecture]

Hierarchical networks are inherently costly and inefficient. They do not scale linearly. As the network expands, additional tiers are layered on, and increasing numbers of expensive aggregation switch ports are added to the mix. And with the adoption of increasingly dense servers with 10 GbE and 40 GbE network interfaces, the problem only becomes worse.

Hierarchical networks also impede the performance of contemporary applications. While ideal for transporting conventional north-south client-server traffic flows in and out of the data center, legacy networks aren't well suited for the bandwidth-intensive, delay-sensitive east-west traffic flows that dominate the modern data center. Hierarchical switched networks worked well in traditional client-server and mainframe environments, where most of the computation was performed on a host and the majority of traffic flowed in and out of the data center to and from clients. But server virtualization solutions and new distributed computing frameworks and cooperative applications are radically reshaping data center traffic patterns (see figure 4). In today's world, more often than not, computation is distributed across multiple servers. Workloads are divided into smaller tasks and processed in parallel on separate physical or virtual machines. And virtual machines can migrate from server to server in response to changing demands or conditions. The vast majority of today's data center traffic stays within the data center, flowing within and between servers (see sidebar). Web clusters, database clusters, clustered file systems, vMotion, MapReduce processing, etc. all make extensive use of messaging and data movement/replication and require low-latency, lossless, predictable connectivity. But the oversubscribed, hierarchical nature of the network hinders the performance of these applications. East-west traffic must traverse multiple switching tiers. Each intermediary switch adds latency. And high oversubscription ratios lead to network congestion, dropped packets and poor application performance.

Sidebar: The Cisco Global Cloud Index reveals that 76% of today's data center traffic stays within the data center, and forecasts intra-data center traffic volumes to grow at a steep 27% CAGR between 2012 and 2016 as cloud computing and virtualization initiatives skyrocket.

[Chart: data center traffic forecast in zettabytes per year, split into traffic within the data center, data center to data center, and data center to user. Source: Cisco Global Cloud Index: Forecast and Methodology]
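The tier-traversal penalty and the sidebar's growth figure can both be made concrete. In the sketch below, the hop counts follow from the three-tier topology of figure 3, while the per-hop latency is an assumed, illustrative value rather than a measurement from the paper:

```python
# Illustrative sketch: switch hops and cumulative latency for east-west
# flows in a three-tier network. PER_HOP_LATENCY_US is an assumed
# per-switch figure for illustration only.

PER_HOP_LATENCY_US = 10.0

def east_west_hops(same_rack: bool, same_agg_pair: bool) -> int:
    """Switches crossed between two servers in a hierarchical network."""
    if same_rack:
        return 1                 # ToR switch only
    if same_agg_pair:
        return 3                 # ToR -> aggregation -> ToR
    return 5                     # ToR -> agg -> core -> agg -> ToR

for same_rack, same_agg, label in [
    (True, True, "same rack"),
    (False, True, "same aggregation pair"),
    (False, False, "across the core"),
]:
    hops = east_west_hops(same_rack, same_agg)
    print(f"{label}: {hops} hops, ~{hops * PER_HOP_LATENCY_US:.0f} us added")

# The sidebar's 27% CAGR compounds quickly: intra-data center traffic
# roughly multiplies by 1.27**4, about 2.6x, between 2012 and 2016.
print(f"four-year growth factor at 27% CAGR: {1.27 ** 4:.1f}x")
```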

HOW DID WE GET HERE? HOW DO WE MOVE FORWARD?

Today's data center network designs trace their roots back to the earliest days of client-server computing. Back in the 1980s, organizations began implementing shared media LANs (Ethernet, Token Ring, even Token Bus) to connect PCs and workstations to UNIX and Windows servers installed departmentally or in a central computer room. Individual LANs were interconnected using bridges and ultimately routers. Over time, Ethernet switching emerged as a more cost-effective and scalable alternative to conventional shared media LANs. The first Ethernet switches were introduced in 1990, and by the mid-1990s businesses began aggressively deploying Ethernet switching technology and implementing hierarchical switched networks as a more efficient way of interconnecting and scaling intranets. [7]

With the advent of rack servers and blade servers, enterprises began consolidating computing resources into centralized data centers. They implemented data center networks using equipment, architectures and network protocols originally designed for interconnecting distributed autonomous subnets. But in fact, data center networks are far different from geographically dispersed campus or wide area networks, with distinct design considerations and constraints. In order to make these products work for data centers, they were configured into large switched hierarchies using two and three tiers, creating uniformly distributed capacity networks and using existing Layer 2 and Layer 3 protocols to help manage the distribution of workloads across those networks.

The Layer 2 and Layer 3 networking protocols developed in the 1980s and 1990s were introduced to solve what we'll call the Internet problem. The Internet is an arbitrary interconnection of links, of arbitrary quality and fluctuating state, extending across distinct administrative domains. The layered protocol stack evolved as a way to reliably deliver packets from point A to point B across this unbounded, unknown, uncontrollable environment. The idea of creating a top-down networking model based on a holistic view of the Internet and the performance requirements of the upper applications was unfathomable. So protocol designers took a bottom-up approach, crafting link layer and network layer protocols that could be executed using the limited memory and processing environments of the switches and routers of the era.

But the data center network is inherently different from the Internet. The data center is bounded and manageable. Administrators have complete visibility and control over the entire environment, and connectivity is predictable and reliable. Complex network topology protocols aren't required in the data center. Connectivity, for all intents and purposes, is a given. The challenge in the data center is not a one-dimensional problem of connectivity, delivering packets from point A to point B; the challenge in the data center is a multidimensional problem we call affinity.

FITTING AFFINITIES INTO THE NETWORK

Affinity refers to the relationship between the data center resources required to execute a given application workload or tenant: the physical or virtual compute, storage and network resources. We call any given set of these resources an Affinity Group. In conventional networking, the goal is to make these affinities irrelevant, that is, to strive for a completely uniform network from any given point to any given point.
However, what should actually be the goal is to satisfy the needs of the Affinity Groups, first and foremost. So rather than providing equal connectivity to all resources regardless of their need for connectivity with each other, we should optimize the network around the needs of the Affinity Groups. By satisfying affinity requirements we can ensure all workloads have the full set of resources required to meet their unique performance needs and service level commitments, and instead of scaling the network equally in all directions, we scale it with the needs of the Affinity Groups.

Affinity needs are knowable as first configuration principles. Workloads are served by compute and storage resources that have specifiable affinities to each other and to other workload resources, and this affinity is generally obtainable from configuration and orchestration systems. By taking a top-down approach and dynamically engineering the network around these affinities, we can satisfy application performance and service level requirements while making optimal use of network resources. Rather than switching or routing, we call this operation fitting.

The notion of affinities is not new. For years, data center architects have understood that to maintain peak application performance, the optimal interconnection of interdependent application resources (such as servers, VMs, databases, network services, storage, etc.) is paramount. Until now, application architects have implemented affinity concepts manually, by co-locating instances of a workload on the same physical switch or adjacent switches. But as workloads become larger, more portable and more complex, it is becoming increasingly difficult, if not impossible, to implement affinities by hand.

In order to fully implement affinity needs in the data center, the physical network itself should have the ability to be reorganized non-uniformly, such that capacity is greatest in places where it is needed and nominal in places where it is not needed. And as the needs change, the network must also continually change to ensure that the affinity needs are always met.
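As a thought experiment, fitting can be sketched in code. Everything below is hypothetical: the AffinityGroup and Link classes and the fit function are an illustration of the concept, not Plexxi's data model or algorithm. Groups declare their needs, and the network concentrates capacity where those needs are, serving the most demanding groups first:

```python
# Hypothetical sketch of "fitting": allocate non-uniform capacity to
# Affinity Groups rather than spreading capacity uniformly.
from dataclasses import dataclass

@dataclass
class AffinityGroup:
    name: str
    members: list            # compute/storage endpoints in the group
    bandwidth_gbps: float    # declared east-west bandwidth need
    max_latency_us: float    # declared latency bound (unused by this toy fit)

@dataclass
class Link:
    a: str
    b: str
    capacity_gbps: float
    allocated_gbps: float = 0.0

    def headroom(self) -> float:
        return self.capacity_gbps - self.allocated_gbps

def fit(groups: list, links: list) -> dict:
    """Top-down fitting: serve the most demanding Affinity Groups first,
    steering each onto the direct link with the most spare capacity."""
    placement = {}
    for g in sorted(groups, key=lambda g: g.bandwidth_gbps, reverse=True):
        candidates = [l for l in links if l.headroom() >= g.bandwidth_gbps]
        if not candidates:
            placement[g.name] = None           # capacity must be added here
            continue
        link = max(candidates, key=Link.headroom)
        link.allocated_gbps += g.bandwidth_gbps
        placement[g.name] = (link.a, link.b)
    return placement

links = [Link("rack1", "rack2", 40.0), Link("rack1", "rack3", 40.0)]
groups = [
    AffinityGroup("hadoop", ["vm1", "vm2"], 30.0, 50.0),
    AffinityGroup("web", ["vm3", "vm4"], 5.0, 200.0),
]
print(fit(groups, links))   # the bandwidth-hungry group gets capacity first
```

A production fitter would also honor latency bounds, consider multi-hop paths, and, as the text stresses, rerun continuously as Affinity Groups appear, move or change.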

To that end, Plexxi has created a new networking solution that allows data center operators to build and manage a network from the perspective of the data center workloads and implement those workloads directly onto a flexible Virtual Multi-core network that ensures capacity is available when and where it needs to be. The Plexxi solution is based on a top-down model that starts with application requirements and ends in the connectivity needed to achieve those requirements, rather than the other way around. It features intelligent application software that automatically discovers workload affinity requirements and dynamically configures a network to meet those needs exactly, using state-of-the-art Ethernet switching systems. Public cloud providers and enterprises building private clouds can leverage the Plexxi solution to implement flatter, more agile and cost-effective networks to handle the bandwidth-intensive, delay-sensitive east-west traffic flows that accompany today's distributed computing and virtualization initiatives. Whether supporting a single application for a single user base or a massive Internet-scale data center or public cloud, Plexxi Affinity Networking provides direct server-to-server network capacity that can be controlled and fully managed from the application perspective.
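The closed loop such software implies (read affinity needs from orchestration, compute a fit, reprogram the network, repeat as workloads change) can be sketched as follows. All function names and data here are placeholders invented for illustration, including the orchestrator URL; Plexxi Control's actual interfaces are not described in this paper:

```python
# Hypothetical control-loop sketch for affinity-driven networking. The
# three helpers are toy stand-ins, not Plexxi Control's real API.
import time

def fetch_affinity_groups(orchestrator_url: str) -> dict:
    # Stand-in: a real implementation would query the configuration or
    # orchestration system at orchestrator_url for declared affinities.
    return {"hadoop": 30.0, "web": 5.0}    # group -> Gbps needed

def compute_fit(groups: dict) -> dict:
    # Stand-in: share out capacity in proportion to declared need,
    # i.e., non-uniformly, rather than equally in all directions.
    total = sum(groups.values())
    return {name: need / total for name, need in groups.items()}

def program_switches(fit: dict) -> None:
    # Stand-in: a real implementation would push topology and
    # forwarding changes down to the physical switches.
    print("programming network:", fit)

def control_loop(orchestrator_url: str, iterations: int = 3,
                 poll_interval_s: float = 1.0) -> None:
    """Keep the network's capacity distribution matched to affinity
    needs as workloads appear, move or change."""
    last_fit = None
    for _ in range(iterations):
        fit = compute_fit(fetch_affinity_groups(orchestrator_url))
        if fit != last_fit:        # reprogram only when needs change
            program_switches(fit)
            last_fit = fit
        time.sleep(poll_interval_s)

control_loop("http://orchestrator.example/affinities")
```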
Sidebar: The Role of Software-Defined Networking in the Cloud

Conventional decentralized networking architectures aren't well suited for delivering on-demand services. They implement both forwarding and control functions at the device level, with each switch or router providing an embedded, low-level proprietary management interface. Provisioning and troubleshooting end-to-end network services can be time-consuming and error-prone tasks, involving multiple devices and distinct management interfaces. And integrating devices into higher-level operations support systems and business support systems can be an expensive, protracted undertaking.

Software-defined networking (SDN) is emerging as a potentially more scalable, extensible and open way of building networks. By decoupling forwarding and control functions, centralizing network intelligence and state information, and providing an abstraction layer with open APIs, SDN allows enterprises and service providers to build more scalable, agile and easily manageable networks.

In reality, SDN, as currently implemented, has many definitions. Often it is described as the decoupling of the control plane from a network switch and the methodology by which that device then can be controlled. While this is a necessary basis for more advanced networks, Affinity Networking goes far beyond device-level control. It uses the Plexxi Control network orchestration software platform to define the network topology. Affinity Networking tackles the challenge of deterministically programming the entire network as a unified ecosystem, including the physical topology, and allows workload management tools to tell the network exactly what they expect from it to ensure workload performance. Affinity Networking delivers on the true promise of SDN:

- Asking the application workloads what they need
- Building a dynamic physical infrastructure where edge capacity can be directly mated to workload needs
- Using SDN to connect the virtual application workloads and the physical infrastructure together

CONCLUSION

Enterprises are turning to much more advanced data center architectures to improve business agility, reduce expenses and accelerate business innovation, and are implementing clustered/high-performance computing environments to increase application performance. These new computing architectures redefine the way IT assets are deployed and consumed, and dramatically affect the way data center networks are architected and managed. Conventional hierarchical data center networks built to support stovepipe IT architectures can't meet the agility, price/performance or raw performance requirements of today's advanced computing environments. Public cloud providers and enterprises deploying private clouds or compute clusters must implement flatter, simpler, affinity-driven data center networks to support the bandwidth-intensive, delay-sensitive server-to-server traffic flows that dominate today's cooperative, elastic computing environments.

Plexxi offers the first and only Affinity Networking solution, specifically built for today's high-performance, on-demand data centers. Only Plexxi offers a unique network that's built from an "apps-down", not "wires-up", approach to enable dynamically orchestrated, direct server-to-server network capacity based on workload affinities, and an innovative multi-core network that can implement non-uniform capacity based on these affinity needs. Using Plexxi's intelligent Control software and innovative Plexxi Switch Ethernet switching technology, enterprises and cloud providers can build flat, low-latency, high-performance networks that readily accommodate the delay-sensitive, bandwidth-intensive east-west traffic flows that accompany today's advanced computing initiatives.

To learn how Plexxi can help you build a more scalable, agile and cost-effective data center network, please visit us on the web or contact a Plexxi sales representative.

NOTES

[1] Source: IBM web site.
[2] Big Data Market Size and Vendor Revenues, Wikibon, October 16, 2012.
[3] Cloud Computing Takes Off, Morgan Stanley Blue Paper, May 23, 2011.
[4] Market Analysis Perspective: Worldwide SaaS & Cloud Services, 2011: New Models for Delivering Software, IDC, December 2011.
[5] Sometimes referred to as the edge tier.
[6] Sometimes referred to as the distribution tier.
[7] According to IDC, the Ethernet switching market grew a remarkable 390% from 1995 to 1996 (based on port counts).

Copyright 2012 Plexxi, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. Plexxi and the Plexxi logo are registered trademarks of Plexxi, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.

Dec 2012