Enterprise Strategy Group | Getting to the bigger truth.

Lab Review: Cloud Networking for the Enterprise with Arista Universal Spine

Date: March 2016
Author: Jack Poller, Senior Lab Analyst

Abstract: This ESG Lab Review documents hands-on testing of the Arista Extensible Operating System (EOS) and Arista's recently introduced 7500R modular network switch family, with a focus on validating the availability, scalability, and agility of Arista's modern data center networking infrastructure.

The Challenges

In a recent ESG survey, respondents were asked to identify their top IT priorities for 2016. Top-cited priorities included business intelligence and data analytics (cited by 23% of respondents), data growth (22%) and integration (21%), improving backup and recovery (20%), major application upgrades (20%), server (20%) and desktop (20%) virtualization, and business continuity and disaster recovery (18%).1 These priorities place significant reliability, agility, and scalability demands on the data center network. As a result, a growing number of organizations are evaluating the benefits of software-defined networking (SDN), with 66% of those surveyed indicating that hyper-scale SDN is their future network architecture as they transition to the modern software-defined data center (SDDC).2

Figure 1. Future Direction of Network Architecture. Which statement best reflects the future direction of your organization's network architecture? (Percent of respondents, N=276) "We aspire to emulate the network infrastructure designs of (hyper-scale) cloud service providers in our own network architecture": 66%. "Our network and workloads are different enough from those of (hyper-scale) cloud service providers so we have no aspirations to emulate their network infrastructure designs": 33%. Don't know: 2%.

1 Source: ESG Research Report, 2016 IT Spending Intentions Survey, February 2016.
2 Source: ESG Research Report, Data Center Networking Trends, February 2016.
This ESG Lab Review was commissioned by Arista and is distributed under license from ESG.
Traditional chassis-based core switches and independent top-of-rack distribution switches were well suited to traditional data center network topologies. The rapid transition to the modern data center poses challenges to using these traditional networking devices and tools, as they were designed for physical, relatively static infrastructures. The software-defined data center brings with it the need for new cloud network solutions that provide the requisite flexibility, agility, scalability, reliability, programmability, and performance.

The Solution: Arista EOS, Arista CloudVision, and Arista Spine Switches

Arista designed the Extensible Operating System (EOS) from the ground up using the principles of the cloud. EOS, a modular network switch operating system for the modern data center, is built on top of a standard Linux kernel. EOS deploys as a single software image that runs across the entire portfolio of Arista's network switches as well as in a virtual machine instance (vEOS). The OS is open; users can run standard Linux tools and applications inside the environment, enabling automation and integration with a large library of system management solutions. EOS provides consistent operations, workflow automation, and high availability, and separates protocol processing from switch state and application logic. Using separate protected memory spaces for every process, EOS delivers robustness, reliability, and security. State and configuration information is exchanged through NetDB, an in-memory database using a publish/subscribe messaging architecture. Arista used the same pub/sub architecture when designing CloudVision, its network-wide solution for workload automation and orchestration, and for state and topology monitoring and visibility.

Figure 2.
Arista EOS and Arista CloudVision Publish/Subscribe Architecture

Arista designed CloudVision using hyper-scale principles to enable enterprises to leverage cloud-class automation without investing significant resources in programming or internal development programs. CloudVision provides network-wide workload orchestration and workflow automation, delivering a turnkey solution for cloud networking. With CloudVision, network architects benefit from:

- Network-wide EOS: a single network-wide database for aggregating and accessing state and configuration.
- Single point of control: for physical network integration with third-party controllers, orchestration solutions, security, and other network services.
- Workflow automation: network automation with prebuilt workflows for a variety of ongoing network tasks.
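The publish/subscribe state-sharing model described above can be sketched in miniature: agents publish state changes into a shared in-memory database, and any interested agent subscribes to the paths it cares about, so producers and consumers never need to know about each other. All class, path, and agent names below are illustrative, not Arista's actual APIs.

```python
from collections import defaultdict

class StateDB:
    """Toy NetDB-style store: agents publish state under hierarchical
    paths; subscribers are notified of every change to watched paths."""
    def __init__(self):
        self.state = {}
        self.subscribers = defaultdict(list)

    def subscribe(self, path, callback):
        self.subscribers[path].append(callback)

    def publish(self, path, value):
        self.state[path] = value
        for cb in self.subscribers[path]:  # fan out to all watchers
            cb(path, value)

# Example: a routing agent publishes an interface state change; an LED
# agent and a telemetry agent both react, fully decoupled from each other.
db = StateDB()
events = []
db.subscribe("interface/Ethernet1/status", lambda p, v: events.append(("led", v)))
db.subscribe("interface/Ethernet1/status", lambda p, v: events.append(("telemetry", v)))
db.publish("interface/Ethernet1/status", "down")
print(events)  # [('led', 'down'), ('telemetry', 'down')]
```

Because every agent runs in its own protected process and communicates only through the database, one crashing agent cannot corrupt another's state, which is the robustness property the report attributes to this architecture.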
Arista has developed a complete portfolio of modular network switches ideally suited to the modern leaf-spine network architecture. Leveraging the latest merchant silicon, Arista network switches are driven by EOS, providing maximum system uptime, stateful fault repair, zero touch provisioning, latency analysis, and a fully accessible Linux shell. The comprehensive switch portfolio spans everything from 1U top-of-rack switches ideally suited to the leaf role to chassis-based switches ideally suited to the spine role in the modern leaf-spine network architecture.

The newly introduced Arista 7500R Series modular network switch is the latest iteration of the established 7500 Series (see Figure 3). Available as a 4-, 8-, or 12-slot chassis, the 7500R Series provides scalable L2 and L3 switching and routing, and is backwards compatible with previous versions of the 7500 Series, providing investment protection. The Arista 7500R Series offers over 115 Tbps of throughput. With up to 288 GB of packet memory, it supports more than one million routes and 100,000 tunnels. The switch is 400G ready, and supports flexible combinations of 10G, 25G, 40G, 50G, and 100G Ethernet modes on a single port.

Figure 3. Arista 7500R Series Modular Network Switches for Universal Spine

The 7500R Series includes a combination of features and capabilities extending its applicability from the spine to more universal applications that need more bandwidth and performance, including data center interconnects, peering relationships, and Internet routing. Features that support using the 7500R Series in the universal spine role include:

- Highly efficient architecture with VOQ and deep buffers: eliminates head-of-line (HOL) blocking using a deep buffer virtual output queue (VOQ) architecture, effectively eliminating dropped packets in extremely congested network scenarios. The cell-based fabric is 100% efficient regardless of traffic type or packet sizes.
- Flexible ports: supports 10, 25, 40, 50, and 100 Gb/sec Ethernet, and future compatibility and support for 400 Gb/sec provides investment insurance and support for network scaling without forklift upgrades.
- Internet-scale routing: FlexRoute expands merchant silicon routing tables to support more than 1,000,000 IPv4 and IPv6 routes, and more than 100,000 tunnels.
- Hitless upgrades: continues to move network traffic during upgrades, maximizing uptime and supporting the most stringent SLAs.
- Programmable: directly integrates into the SDDC and network automation.
- Up to 128-way ECMP and MLAG: equal-cost multi-path routing (ECMP) and multi-chassis link aggregation (MLAG) support up to 128 links, and enable the 7500R to grow to support large-scale two-tier leaf-spine and three-tier leaf-spine-universal-spine architectures.

The traditional data center architecture, shown on the left in Figure 4, uses a three-tiered model where network administrators logically group top-of-rack edge switches, aggregation switches, and chassis-based core switches into a single network fabric, even though each physical component is separately managed. Network components are redundantly cross-connected for fault tolerance, which can create loops. Complex L2 protocols are typically used to enforce loop-free topologies, increasing the difficulty of managing these networks. This network fabric is used for both east-west traffic (server to server inside the data center) and north-south traffic (client to server). North-south traffic is facilitated through additional, redundant core routers that support relatively few high-bandwidth Internet or data center connections.

Ever-increasing use of virtualization and the transition to the cloud are changing Internet, intra-, and inter-data center traffic patterns and volumes. As a result, the traditional three-tier model often suffers from oversubscription and struggles to support the wide variety of applications, data patterns, and bandwidth requirements of the modern data center.

Figure 4. Legacy Data Center Architecture versus Modern Leaf-spine-universal-spine Architecture

The enterprise cloud data center architecture, shown on the right in Figure 4, uses a leaf-spine fabric topology for internal east-west traffic. With the leaf-spine architecture, every physical leaf switch is interconnected with every spine switch (full mesh).
All devices are exactly the same number of segments away from one another, which provides a predictable and consistent amount of latency as well as high availability. A spine switch is now capable of being used at the routing layer. Called a universal spine, these switches are increasingly deployed in a second spine layer, a topology adopted from hyper-scale data centers. This universal spine moves north-south interconnect traffic, connecting the leaf-spine network to other networks. Like the leaf-spine, the spine and universal spine are connected as a full mesh, maximizing the bandwidth available for traffic to other data centers, public peering relationships, and the Internet. The universal spine also provides high availability and a predictable and consistent amount of latency.
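The ECMP behavior underpinning this fabric can be illustrated with a small sketch: hashing a flow's 5-tuple selects one of the equal-cost spine uplinks, keeping every packet of a flow on a single path (avoiding reordering) while spreading distinct flows across all spines. This is a generic illustration of ECMP flow hashing, not Arista's hardware implementation.

```python
import hashlib

def ecmp_next_hop(flow, uplinks):
    """Pick a next hop for a flow from a list of equal-cost uplinks.
    Hashing the 5-tuple is deterministic per flow, so a flow's packets
    all take one path, while different flows spread across all paths."""
    key = "|".join(map(str, flow)).encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return uplinks[digest % len(uplinks)]

# 4-way ECMP: a leaf with uplinks to four spine switches.
spines = ["spine1", "spine2", "spine3", "spine4"]

# A flow's 5-tuple: (src IP, dst IP, protocol, src port, dst port).
flow = ("10.0.0.1", "10.0.1.9", 6, 49152, 443)
assert ecmp_next_hop(flow, spines) == ecmp_next_hop(flow, spines)  # stable per flow
print(ecmp_next_hop(flow, spines) in spines)  # True
```

When a spine link fails, the failed uplink is simply removed from the list and flows rehash onto the survivors, which is why recovery time in the tests below reduces to how quickly the forwarding tables are updated.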
Validating Internet Route Scaling

ESG Lab performed hands-on testing of Arista EOS and the 7500R switch with a focus on validating the Internet scalability of the solution, as well as its ability to continue virtually uninterrupted operations during unplanned outages and planned maintenance.

Arista designed the 7500R Series with Internet scale in mind. Leveraging EOS FlexRoute and NetDB, the 7500R Series can manage large-scale route tables and complex configurations, extending the 7500R from spine roles to data center interconnect, and Internet peering and routing. ESG Lab began by validating the suitability of the 7500R Series for Internet transit and peering by simulating an environment with a router connected to four upstream routers, as shown on the left-hand side of Figure 5. Each of the four upstream routers announced a full Internet routing table of approximately 575,000 IPv4 and 35,000 IPv6 route prefixes. Combined, this represented more than 2.4 million BGP paths.

We used network test equipment from Ixia to simulate network traffic, advertise BGP paths, and measure the response to various actions. First, we configured the Ixia to generate the maximum amount of traffic through multiple links, simulating a maximally loaded application and network infrastructure. Once the traffic had reached steady state, we flushed the route table from the router. The router blackholed the data (dropped all packets), as there were no routes. Next, we configured the Ixia to advertise new routes with BGP at the maximum possible rate. The router installed the routes, and started routing traffic. We used the Ixia's graphing tool to graph the traffic flow rate, and measured the time until traffic flows had reached steady state, indicating that the router had completely installed all routes.
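The behavior being exercised here rests on longest-prefix matching: with the table flushed there is no matching route and traffic is blackholed, and forwarding resumes prefix by prefix as routes are re-installed. The toy table below illustrates the principle in software; real FlexRoute scales this to 1,000,000+ entries in hardware, which this sketch does not model.

```python
import ipaddress

class RouteTable:
    """Toy longest-prefix-match (LPM) route table."""
    def __init__(self):
        self.routes = []  # list of (network, next_hop)

    def install(self, prefix, next_hop):
        self.routes.append((ipaddress.ip_network(prefix), next_hop))

    def lookup(self, addr):
        ip = ipaddress.ip_address(addr)
        matches = [(net, nh) for net, nh in self.routes if ip in net]
        if not matches:
            return None  # no route: traffic is blackholed
        # The most specific (longest) matching prefix wins.
        return max(matches, key=lambda m: m[0].prefixlen)[1]

rt = RouteTable()
print(rt.lookup("203.0.113.7"))   # None: empty table blackholes traffic
rt.install("0.0.0.0/0", "upstream1")
rt.install("203.0.113.0/24", "upstream2")
print(rt.lookup("203.0.113.7"))   # upstream2 (the /24 beats the default)
print(rt.lookup("198.51.100.1"))  # upstream1 (default route)
```

The 32-second figure measured below is, in effect, the time to repeat the install step 2.4 million times and program the results into hardware.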
As shown on the right-hand side of Figure 5, the Arista 7500R installed an Internet-scale route table in 32 seconds. A comparable router from an established networking vendor took 144 seconds, or 4.5 times as long. During this period, data was blackholed, and was not being routed by the switch.

Figure 5. Installing an Internet-scale (2,400,000+) BGP Route Table in the Arista 7500R: Route Table Programming Time, Internet-scale IPv4 & IPv6 Route Tables, Arista 7500R versus Vendor X
Why This Matters

As the number of internal and external applications grows, requiring investments in supporting infrastructure, IT is still hearing the mantra "do more with less." ESG's 2016 IT Spending Intentions Survey revealed that reducing costs was the second most-cited business initiative driving technology spending at respondent organizations this year (behind increasing cybersecurity). This is especially true for network infrastructure, where the goal is to optimize the use of available resources while connecting ever more systems and data centers.

ESG Lab validated that the Arista 7500R Series scales to support full Internet-sized routing tables. The 7500R installed 2.4 million+ BGP routes in 32 seconds, 4.5 times faster than the 144 seconds for a comparable solution from an established networking vendor. We found that the Arista 7500R Series is capable of extending beyond the traditional spine role, and can be deployed for Internet edge and peering, where historically traditional chassis routers would have been required.

Validating Virtually Uninterrupted Operation During Unplanned Outages

ESG Lab next validated the speed and resiliency of Arista EOS when faced with an unplanned network failure. We used a complex leaf-spine network in a controlled lab setting to simulate an enterprise data center. In these networks, each spine node provides a redundant path through the network, and EOS uses equal-cost multi-path routing (ECMP). All leaf and spine nodes were connected using 100 Gb/sec Ethernet. A standard enterprise with a four-node spine was simulated using 4-way ECMP, while the largest enterprise cloud with 64 spine nodes was simulated using 64-way ECMP. The standard enterprise test bench topology is shown in Figure 6. We configured the Ixia test equipment to generate the maximum number of traffic flows, simulating a maximally loaded application and network infrastructure.
Once the traffic had reached steady state, a network cable failure was simulated by pulling out a link.

Figure 6. Standard Enterprise (4-way ECMP) Test Bench

Reacting to the link failure, the network found a new path for the traffic. We used the Ixia's measurement capabilities to measure the maximum time that the application data flow was interrupted, which represents the worst-case downtime of the simulated application. We repeated each test four times, averaging the results, and measured the reaction time for both IPv4 and IPv6. To provide a reference, we repeated the tests using comparable solutions from two leading vendors of traditional network equipment. We also tested a comparable industry-leading Linux-based white box solution. The results for the standard enterprise simulation with 4-way ECMP are shown in Figure 7.
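The worst-case interruption metric used throughout these tests can be sketched as follows: for a constant-rate flow, the interruption is the largest gap between consecutive packet arrivals minus the steady-state inter-packet gap, averaged across repeated runs. This mirrors the shape of the Ixia-based measurement, not its exact method; the function names and sample timestamps are illustrative.

```python
def worst_case_interruption(arrivals, nominal_gap):
    """Worst-case interruption for one flow: the largest gap between
    consecutive packet arrival times, less the nominal inter-packet
    gap observed under steady state. Times are in milliseconds."""
    gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
    return max(gaps) - nominal_gap

def average_interruption(runs, nominal_gap):
    """Average the worst-case interruption across repeated test runs."""
    return sum(worst_case_interruption(r, nominal_gap) for r in runs) / len(runs)

# One flow sending a packet every 1 ms; a simulated link failure
# stalls delivery between t=3 and t=17 (a 13 ms interruption).
run = [0, 1, 2, 3, 17, 18, 19]
print(worst_case_interruption(run, 1))  # 13
```

Taking the maximum gap (rather than counting lost packets) is what makes this a worst-case application downtime figure: it captures the single longest stretch during which the application saw no data.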
Figure 7. Recovery from 1-Link Failure, Worst-case Traffic Flow Interruption, Standard Enterprise (4-way ECMP): Arista 7500R versus Vendor X, Vendor Y, and White Box

Across the four tests, Arista EOS and the Arista 7500R returned fast, consistent, and repeatable results, recovering from a single link failure with an average interruption in the traffic flow of just 14 ms. This interruption would most likely not be noticed by users or cause any problems with applications. The traditional equipment vendors demonstrated wide variability in response to failures. The white box solution, with more than two seconds of interruption in data flow, was not suited for enterprise-scale applications. We repeated the tests again, simulating a 2-link failure; the times are shown in Table 1.

Table 1. Recovery from 1-link and 2-link Failure, Worst-case Traffic Flow Interruption, Standard Enterprise (4-way ECMP)

IPv4:
- 1-link failure: Arista 7500R 14 ms; Vendor X 555 ms; Vendor Y 55 ms; White Box 5,962 ms
- 2-link failure: Arista 7500R 64 ms; Vendor X 722 ms; Vendor Y 84 ms; White Box 9,472 ms

IPv6:
- 1-link failure: Arista 7500R 18 ms; Vendor X 557 ms; Vendor Y 55 ms; White Box 8,968 ms
- 2-link failure: Arista 7500R 64 ms; Vendor X 745 ms; Vendor Y 84 ms; White Box 12,111 ms

The solutions from the traditional equipment vendors demonstrated longer interruptions in the traffic flow, up to 40 times longer for a 1-link failure using IPv4. Worse, the other vendor solutions showed great variability, returning different recovery times across the four tests.
The Linux-based white box solution interrupted IPv4 traffic flow for almost six seconds and IPv6 traffic flow for almost nine seconds, demonstrating its lack of suitability for enterprise deployments. The short duration, consistency, and comparable performance for both IPv4 and IPv6 delivered by Arista can give network administrators confidence in the reliability, resiliency, and availability of their network infrastructure. Next, ESG Lab repeated the tests, this time using a simulated large enterprise cloud network configuration with 64 spine nodes and 64-way ECMP, as shown in Figure 8.
Figure 8. Large Enterprise Cloud (64-way ECMP) Test Bench

With more components and interconnections comes the probability of more simultaneous failures. Thus, we simulated a single-link failure, a 4-link failure, and a linecard failure with 16 links on it. The results are shown in Figure 9 and Table 2.

Figure 9. Recovery from Link Failure, Worst-case Traffic Flow Interruption, Large Enterprise Cloud (64-way ECMP). (The charts compare 16-link failure recovery relative to the Arista 7500R for IPv4: Vendor X 4x, Vendor Y 39x, White Box 79x.)

Table 2. Recovery from Link Failure, Worst-case Traffic Flow, Large Enterprise Cloud (64-way ECMP)

IPv4:
- 1-link failure: Arista 7500R 27 ms; Vendor X 534 ms; Vendor Y 44 ms; White Box 34,340 ms
- 4-link failure: Arista 7500R 166 ms; Vendor X 627 ms; Vendor Y 9,944 ms; White Box 34,455 ms
- 16-link failure: Arista 7500R 437 ms; Vendor X 1,820 ms; Vendor Y 17,190 ms; White Box 34,618 ms

IPv6:
- 1-link failure: Arista 7500R 29 ms; Vendor X 59 ms; Vendor Y 44 ms; White Box 38,279 ms
- 4-link failure: Arista 7500R 165 ms; Vendor X 599 ms; Vendor Y 11,423 ms; White Box 38,528 ms
- 16-link failure: Arista 7500R 437 ms; Vendor X 1,889 ms; Vendor Y 24,118 ms; White Box 38,318 ms
Arista EOS and the 7500R delivered rapid, consistent results, comparable to the standard enterprise configuration. In the worst case, when 25% of the links failed (16 out of 64), the Arista interrupted data flows for an average of 437 ms, less than half a second. The solutions from the traditional network vendors demonstrated longer interruptions in the traffic flow, from just under two seconds to more than 24 seconds. The other vendor solutions returned inconsistent results, with Vendor Y interrupting traffic flow for 14 to 18 seconds for a 16-link IPv4 failure. The white box solution interrupted traffic flow for a minimum of 34 seconds for a 1-link failure, demonstrating its unsuitability for deployment in cloud-scale enterprise network infrastructures.

In critical situations, or for the ability to meet exacting SLAs, network architects provision standby links, providing additional redundancy in the case of link failures. We simulated this architecture using a 16-way ECMP configuration with an additional eight failover links. We simulated a failure of half of the active links (8 of 16), and measured the interruption of traffic until new routes were established using the eight failover links. We repeated the test four times and averaged the results, which are shown in Figure 10.

Figure 10.
Recovery from 8-Link Failure, Worst-case Traffic Flow Interruption, 16-way ECMP with 8 Failover Links: Arista 7500R (316 ms IPv4, 347 ms IPv6) versus Vendor X (2,746 ms IPv4, 2,717 ms IPv6), Vendor Y (3,972 ms IPv4, 3,968 ms IPv6), and White Box (33,717 ms IPv4)

Arista EOS demonstrated consistent, rapid results, with less than one-third of a second of traffic interruption until the failover links were enabled and new routes established. The traditional solutions interrupted traffic for more than two seconds, and the white box solution dropped data for more than 30 seconds. All of the comparable solutions demonstrated inconsistent results across the repeated simulations.

Why This Matters

Network availability is a key factor for organizations supporting around-the-clock business-critical applications. Despite advances in hardware, software, and infrastructure robustness, components still fail, resulting in downtime ranging from short periods to hours or even days. Unplanned outages affect not only IT; they can have a significant material and financial impact on the business.

ESG Lab validated that Arista EOS and the Arista 7500R minimized downtime and the impact of unplanned outages. In an enterprise network environment, a single link failure resulted in less than 64 milliseconds of traffic flow interruption. In the worst-case scenario, in a large enterprise cloud network environment, a loss of 25% of the network links interrupted traffic flow for less than half a second. This minuscule downtime will not cause an interruption to applications, and will most likely not be noticed by users. Using Arista EOS and the 7500R will enable IT to easily maintain SLAs.
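The standby-link design evaluated above can be sketched as an ECMP group that promotes spare links as active links fail, keeping the group's path count constant. This is an illustration of the concept only; real switches perform this promotion in the forwarding plane, and all names below are invented for the example.

```python
class EcmpGroup:
    """ECMP group with hot-standby links: when an active link fails,
    a standby link is promoted so the group keeps its path count."""
    def __init__(self, active, standby):
        self.active = list(active)
        self.standby = list(standby)

    def fail(self, link):
        self.active.remove(link)
        if self.standby:
            # Promote the next spare to replace the failed link.
            self.active.append(self.standby.pop(0))

group = EcmpGroup(active=[f"link{i}" for i in range(16)],
                  standby=[f"spare{i}" for i in range(8)])
for i in range(8):          # fail half of the 16 active links
    group.fail(f"link{i}")
print(len(group.active))    # 16: all eight spares were promoted
```

Because the path count never drops, surviving flows keep their full aggregate bandwidth; the traffic interruption measured in the test is simply the time taken to program the promoted links into the forwarding tables.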
Validating Virtually Uninterrupted Operation During Planned Maintenance

Modern data center and cloud network topologies feature many redundancies, enabling bandwidth aggregation, load distribution, resiliency in the face of failures, and the ability to perform rolling upgrades, where the network is partitioned into redundant groups and each group is upgraded in sequence. During the upgrade/reboot cycle, redundancies in the network enable uninterrupted traffic flow, albeit with reduced bandwidth. Arista EOS Smart System Upgrade (SSU) is a feature designed to reduce the burden of network upgrades, minimizing application downtime, reducing risks, and enabling network administrators to maintain and upgrade their network infrastructure without systemic outages.

ESG validated EOS SSU by first performing maintenance on a spine node and then a leaf node in our simulated standard enterprise network with 4-way ECMP. The first step was to place the spine in Maintenance Mode, which gracefully removes a spine component or the entire spine node from the network by rerouting traffic, taking advantage of the inherent characteristics of redundancy through network design and distributed protocols. Maintenance Mode can be applied to an interface, a line card, or the system. Using the EOS command line interface, we put the spine node into Maintenance Mode using the command sequence maintenance; unit system; quiesce. We verified that traffic was still flowing through the network, but was bypassing the spine node, and that no packets were dropped during the transition into Maintenance Mode. We then upgraded and rebooted the router and verified that no traffic was lost during the reboot. Finally, we exited Maintenance Mode for the spine with the single command no maintenance, and then verified that the spine node was now participating in the network and routing traffic.
We repeated the test, this time putting a single linecard into Maintenance Mode using the command sequence maintenance; unit linecard4; quiesce. Finally, we repeated the test, this time putting a single interface into Maintenance Mode using the command sequence maintenance; interface Ethernet 3/2/1; quiesce. In all cases, the system rapidly drained, removing the specified component from the routing tables and thus from the network. Arista EOS Maintenance Mode enables administrators to easily fix, replace, and upgrade components without interrupting traffic flow or operations.

Next, ESG Lab upgraded the EOS version on a leaf node. Taking full advantage of all of the features of the latest merchant silicon, Arista EOS is able to maintain data forwarding while the switch software is upgraded and the switch management processor is rebooted. Arista claims that maximum downtime while the merchant silicon reboots is less than 20 ms. We first verified the version of EOS running on the leaf node. Next, we configured the leaf node to load a new version of EOS after reload using the command boot system flash:EOS-4.15.4F.swi. Finally, we instructed the leaf node to perform a hitless upgrade using the command reload hitless, and observed that the node rebooted to the new version of EOS. We used the Ixia test equipment to measure downtime. The average downtime over four tests was 17 ms.

Arista CloudVision can be used to automate the process of upgrading EOS across an entire leaf-spine network. Using CloudVision, a network administrator can transition from per-device management to network-wide management. With a single action, CloudVision will perform hitless upgrades on each node, ensuring that the entire network continues to operate without interruption, and simplifying the maintenance burden on IT staff.
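The maintenance and upgrade workflow described above can be summarized as a CLI session. This is an illustrative sketch assembled from the commands quoted in the text; exact prompts, mode names, and the image filename may vary by EOS release and should be checked against the EOS documentation.

```
! --- Drain an entire spine node before maintenance (Maintenance Mode) ---
switch(config)# maintenance
switch(config-maintenance)# unit system
switch(config-unit-system)# quiesce
!   ...traffic now bypasses this node; upgrade and reboot it safely...
switch(config)# no maintenance
!   ...node rejoins the fabric and resumes routing traffic...

! --- The same workflow scoped to a linecard or a single interface ---
switch(config-maintenance)# unit linecard4
switch(config-maintenance)# interface Ethernet 3/2/1

! --- Hitless upgrade of a leaf node to a new EOS image ---
leaf(config)# boot system flash:EOS-4.15.4F.swi
leaf# reload hitless
```

The key property is that quiesce withdraws the component from the routing tables before anything is touched, so the rest of the fabric reroutes around it with zero packet loss.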
Why This Matters

Data analytics, data growth, and the rapid proliferation of virtualized applications are increasing the cost and complexity of the network infrastructure, and IT organizations running mission-critical applications need to guard against service interruptions. This becomes even more critical as the data center transitions to the cloud architecture and must maintain always-on, always-available service. Routine equipment upgrades, firmware updates, and hardware refreshes can require equipment to be taken out of service. An always-available solution with management tools that make it easy to centrally manage and maintain a network infrastructure reduces time, cost, and risk.

ESG Lab validated that Arista EOS provides the ability for network administrators to perform routine planned maintenance and upgrades without data interruption. Arista EOS SSU and hitless upgrades can satisfy the most stringent business continuity and SLA requirements.

The Bigger Truth

The modern enterprise data center presents multiple networking challenges to IT organizations, including unplanned outages, ongoing management, and scaling the infrastructure as the business grows. Using traditional networking devices and tools in modern data center topologies can be quite challenging, since those tools were designed for a relatively static infrastructure. A network architecture that borrows heavily from the hyper-scale data center and provides the same reliability, availability, scalability, and manageability as server and application virtualization will be required as public and private cloud solutions mature and become ubiquitous.

Arista has developed a modern network infrastructure solution that brings hyper-scale networking within reach of both small- and large-scale enterprises. The Arista Extensible Operating System (EOS) forms the foundation of this software-defined networking system.
A modular OS built on top of a standard Linux kernel, EOS provides a single image that runs on the entire Arista portfolio of switches. The FlexRoute Engine and NetDB provide the ability to manage 1,000,000+ entry route tables, far more than the default specification of merchant silicon, while Smart System Upgrade enables non-stop operation during maintenance and upgrades. NetDB is EOS's centralized in-memory database for maintaining configuration and state, accessed with scalable, programmable publish/subscribe protocols. Arista applied the same methodology to CloudVision, enabling network administrators to expand their view from the unit level to the network-wide level for managing, maintaining, upgrading, and automating their network infrastructure.

Arista designed its complete portfolio of modular network switches for the enterprise-scale software-defined data center. Leveraging the latest merchant silicon, Arista's recently introduced 7500R Series switches are targeted at the spine network tier. Virtual output queueing and deep buffers eliminate head-of-line blocking and dropped packets during network congestion. Using EOS, the 7500R supports up to 128-way ECMP and MLAG, and Internet-size routing tables, making the switch ideal for spine, universal spine, and Internet peering applications. In hands-on testing, ESG Lab validated that Arista EOS and the 7500R were extremely resilient, interrupting traffic flow for less than half a second even in the worst-case scenario, where 25% of available links were cut in a large enterprise cloud 64-way ECMP leaf-spine network. EOS was even quicker when switching to failover links, with only one-third of a second of interrupted traffic flow when 50% of available links were cut in a 16-way ECMP network. EOS's response to cut links was extremely consistent and repeatable over time.
Our testing demonstrated that the 7500R was able to install and use over 2,400,000 BGP routes in 32 seconds, making it suitable for the largest scale applications: large-scale enterprise networks, hyper-scale cloud deployments, and Internet peering. We reviewed EOS Smart System Upgrade and hitless upgrade capabilities, and found that we could take spine nodes offline for maintenance without interrupting traffic. We were also able to upgrade leaf nodes, suffering less than 20 ms of traffic interruption, minimizing maintenance effects for applications and users.

Enterprises looking to take advantage of the latest innovations from the world of hyper-scale data centers to build highly reliable, resilient, maintainable, and programmable network infrastructures would be smart to take a close look at Arista EOS, the recently introduced 7500R Series, and the entire portfolio of Arista switches.

All trademark names are property of their respective companies. Information contained in this publication has been obtained by sources The Enterprise Strategy Group (ESG) considers to be reliable but is not warranted by ESG. This publication may contain opinions of ESG, which are subject to change. This publication is copyrighted by The Enterprise Strategy Group, Inc. Any reproduction or redistribution of this publication, in whole or in part, whether in hard-copy format, electronically, or otherwise to persons not authorized to receive it, without the express consent of The Enterprise Strategy Group, Inc., is in violation of U.S. copyright law and will be subject to an action for civil damages and, if applicable, criminal prosecution. Should you have any questions, please contact ESG Client Relations.

The goal of ESG Lab reports is to educate IT professionals about data center technology products for companies of all types and sizes.
ESG Lab reports are not meant to replace the evaluation process that should be conducted before making purchasing decisions, but rather to provide insight into these emerging technologies. Our objective is to explore some of the more valuable features and functions of products, show how they can be used to solve real customer problems, and identify any areas needing improvement. ESG Lab's expert third-party perspective is based on our own hands-on testing as well as on interviews with customers who use these products in production environments. Enterprise Strategy Group is an IT analyst, research, validation, and strategy firm that provides market intelligence and actionable insight to the global IT community. © 2016 by The Enterprise Strategy Group, Inc. All Rights Reserved.