Brocade VDX TM 6740 Top-of-Rack Switch Performance and Power Test
1 Brocade VDX 6740 Top-of-Rack Switch Performance and Power Test. A Report on the Brocade VDX 6740 Top-of-Rack Switch. December 2013. Lippis Enterprises, Inc. 2013
2 Foreword by Scott Bradner It is now almost twenty years since the Internet Engineering Task Force (IETF) initially chartered the Benchmarking Methodology Working Group (bmwg) with me as chair. The aim of the working group was to develop standardized terminology and testing methodology for various performance tests on network devices such as routers and switches. At the time that the bmwg was formed, it was almost impossible to compare products from different vendors without doing testing yourself because each vendor did its own testing and, too often, designed the tests to paint their products in the best light. The RFCs produced by the bmwg provided a set of standards that network equipment vendors and testing equipment vendors could use so that tests by different vendors or different test labs could be compared. Since its creation, the bmwg has produced 23 IETF RFCs that define performance testing terminology or methodology for specific protocols or situations and is currently working on a half dozen more. The bmwg has also had three different chairs since I resigned in 1993 to join the IETF's steering group. The performance tests in this report are the latest in a long series of similar tests produced by a number of testing labs. The testing methodology follows the same standards as I was using in my own test lab at Harvard in the early 1990s, thus the results are comparable. The comparisons would not be all that useful since I was dealing with far slower speed networks, but a latency measurement I did in 1993 used the same standard methodology as do the latency tests in this report. Considering the limits on what I was able to test way back then, the Ixia test setup used in these tests is very impressive indeed. It almost makes me want to get into the testing business again. Scott is the University Technology Security Officer at Harvard University. He writes a weekly column for Network World and serves as the Secretary to the Board of Trustees of the Internet Society (ISOC). In addition, he is a trustee of the American Registry for Internet Numbers (ARIN) and author of many of the RFC network performance standards used in this industry evaluation report. License Rights 2013 Lippis Enterprises, Inc. All rights reserved. This report is being sold solely to the purchaser set forth below. The report, including the written text, graphics, data, images, illustrations, marks, logos, sound or video clips, photographs and/or other works (singly or collectively, the Content), may be used only by such purchaser for informational purposes, and such purchaser may not (and may not authorize a third party to) copy, transmit, reproduce, cite, publicly display, host, post, perform, distribute, alter or create derivative works of any Content or any portion of or excerpts from the Content in any fashion (either externally or internally to other individuals within a corporate structure) unless specifically authorized in writing by Lippis Enterprises. The purchaser agrees to maintain all copyright, trademark and other notices on the Content. The Report and all of the Content are protected by U.S. and/or international copyright laws and conventions, and belong to Lippis Enterprises, its licensors or third parties. No right, title or interest in any Content is transferred to the purchaser. Purchaser Name: 2 Lippis Enterprises, Inc Evaluation conducted at Ixia s isimcity Santa Clara Lab on Ixia test equipment
3 Acknowledgements We thank the following people for their help and support in making possible this first industry evaluation of 10GbE private/public data center cloud Ethernet switch fabrics. Errol Ginsberg, CEO of Ixia for his continuing support of these industry initiatives throughout its execution since Leviton for the use of fiber optic cables equipped with optical SFP+ connectors to link Ixia test equipment to 10GbE switches under test. Siemon for the use of copper and fiber optic cables equipped with QSFP+ connectors to link Ixia test equipment to 40GbE switches under test. Michael Githens, Lab Program Manager at Ixia for his technical competence, extra effort and dedication to fairness as he executed test week and worked with participating vendors to answer their many questions. Jim Smith, VP of Marketing at Ixia for his support and contribution to creating a successful industry event. Mike Elias, Photography and Videography at Ixia for video podcast editing. All participating vendors and their technical plus marketing teams for their support of not only the test event and multiple test configuration file iterations but to each other, as many provided helping hands on many levels to competitors. Bill Nicholson for his graphic artist skills that make this report look amazing. Jeannette Tibbetts for her editing that makes this report read as smooth as a test report can. 3 Lippis Enterprises, Inc Evaluation conducted at Ixia s isimcity Santa Clara Lab on Ixia test equipment
4 Table of Contents Background...5 Participating Supplier Brocade VDX TM Data Center Switch...6 Cross Vendor Analysis Top-of-Rack Switches Latency...12 Throughput...17 Power...26 Test Methodology...27 Terms of Use...32 About Nick Lippis Lippis Enterprises, Inc Evaluation conducted at Ixia s isimcity Santa Clara Lab on Ixia test equipment
5 Background To assist IT business leaders with the design and procurement of their private or public data center cloud fabrics, the Lippis Report and Ixia have conducted an open industry evaluation of 10GbE and 40GbE data center switches. In this report, IT architects are provided the first comparative 10 and 40 Gigabit Ethernet (GbE) switch performance information to assist them in purchase decisions and product differentiation. It's our hope that this report will remove performance, power consumption, virtualization scale and latency concerns from the purchase decision, allowing IT architects and IT business leaders to focus on other vendor selection criteria, such as post-sales support, platform investment, vision, company financials, etc. The Lippis test reports, based on independent validation at Ixia's isimcity lab, communicate credibility, competence, openness and trust to potential buyers of 10GbE and 40GbE data center switching equipment, as the tests are open to all suppliers and are fair, thanks to RFC and custom-based tests that are repeatable. The private/public data center cloud 10GbE and 40GbE fabric test was free for vendors to participate in and open to all industry suppliers of 10GbE and 40GbE switching equipment, both modular and fixed configurations. The tests conducted were IETF RFC-based performance and latency tests, power consumption measurements and a custom cloud-computing simulation of large north-south plus east-west traffic flows. Virtualization scale was calculated and reported. Both Top-of-Rack (ToR) and Core switches were evaluated, but in this report we focus on ToRs with more than 24 ports of 10GbE. The ToR switches evaluated are the OmniSwitch X 10/40G Data Center Switch, the Brocade VDX 6740 Data Center Switch, the IBM RackSwitch, the Summit and the Dell Force10 S-Series. This report communicates the results of six sets of tests, which took place during the weeks of December 6 to 10, 2010; April 11 to 15, 2011; October 3 to 14, 2011; March 26 to 30, 2012; December 12, 2012; and October 2013 in the modern Ixia test lab, isimcity, located in Santa Clara, CA. Ixia supplied all test equipment needed to conduct the tests, while Leviton provided optical SFP+ connectors and optical cabling. Siemon provided copper and optical cables equipped with QSFP+ connectors for 40GbE connections. Each 10GbE and 40GbE supplier was allocated lab time to run the test with the assistance of an Ixia engineer. Each switch vendor configured its equipment while Ixia engineers ran the test and logged the resulting data. 5 Lippis Enterprises, Inc Evaluation conducted at Ixia s isimcity Santa Clara Lab on Ixia test equipment
6 Brocade VDX 6740 Data Center Switch For this Lippis/Ixia evaluation at isimcity, Brocade submitted its latest VDX Top-of-Rack (ToR) switch, the Brocade VDX 6740, which is built with Brocade's VCS technology, an Ethernet fabric innovation that addresses the unique requirements of highly virtualized, cloud data center environments. The line-rate, low latency Brocade VDX fixed configuration switch is available in two models: the Brocade VDX 6740 with up to 64 1/10GbE SFP+ ports, and the Brocade VDX 6740T with 48 1/10GBASE-T ports and four 40GbE QSFP+ ports. The Brocade VDX series switches are based on a single custom ASIC fabric switching technology and run the feature-rich Brocade Network Operating System (NOS) with VCS Fabric technology. [Test configuration: Device under test: Brocade VDX 6740. Software version: Brocade NOS 4.0b. Port density: 64 (40-10GbE plus 40GbE). Test equipment: Ixia XG12 High Performance Chassis, IxOS 6.50 EA SP1, IxNetwork 7.0 EA SP1, (1) Xcellon FlexFE40G4Q (4-port 40G) and (3) Xcellon FlexAP10G16S (16-port 10G) load modules. Cabling: Siemon Moray Low Power Active Optical Cable Assemblies, single mode, QSFP+ 40GbE optical cables.] Ethernet fabrics are driving new levels of efficiency and automation in modern data centers and cloud infrastructures. Ethernet fabrics built on Brocade VCS Fabric technology provide automation, efficiency and VM awareness compared to traditional network architectures and competitive fabric offerings. Brocade VCS Fabric technology increases flexibility and IT agility, enabling organizations to transition to elastic, mission-critical networks in highly virtualized cloud data centers. This innovative Brocade solution delivers: management simplicity, with a VCS domain operated as a single logical switch; optimization for east-west traffic; and ease of integration into cloud environments with native support for virtualization and multi-tenancy. 6 Lippis Enterprises, Inc Evaluation conducted at Ixia s isimcity Santa Clara Lab on Ixia test equipment
7 [Charts: Brocade VDX 6740 RFC 2544 Layer 2 and Layer 3 latency tests, showing maximum, average and minimum latency and average delay variation in ns per packet size; the Layer 3 test was run between far ports to measure across-chip performance/latency. Chart: Brocade VDX 6740 RFC 2544 L2 and L3 throughput test, % line rate, 100% at every packet size for both layers.] The VDX 6740 ToR switch was tested across 40 ports of 10GbE and four ports of 40GbE. Its average latency ranged from a low of 790 ns to a high of 1122 ns for Layer 2 traffic forwarding measured in cut-through mode. The VDX 6740 demonstrated very competitive latency results for a switch of its class. Its average delay variation ranged between 4 and 9 ns, providing competitive and consistent latency across all packet sizes at full line rate, a comforting result given the VDX's design focus on the high throughput and low latency critical to virtualized data centers. As in the RFC 2544 L2 latency test, the VDX 6740 ToR switch was tested across 40 ports of 10GbE and four ports of 40GbE. Its average latency ranged from a low of 776 ns to a high of 1120 ns for L3 traffic forwarding, and average delay variation ranged from 5.2 ns to 9.3 ns. These results are the second lowest the Lippis Report has tested for a ToR switch in the RFC 2544 L3 latency test. The Brocade VDX 6740 demonstrated 100% throughput as a percentage of line rate across all 40 10GbE and four 40GbE ports, at every packet size, for both Layer 2 and Layer 3 forwarding. In other words, not a single packet was dropped while the Brocade VDX 6740 was presented with enough traffic to populate its 40 10GbE and four 40GbE ports at line rate. 7 Lippis Enterprises, Inc Evaluation conducted at Ixia s isimcity Santa Clara Lab on Ixia test equipment
8 [Tables: Brocade VDX 6740 RFC 2889 congestion tests at 10GbE and at 40GbE, showing Layer 2 and Layer 3 aggregated forwarding rate as % of line rate, head-of-line blocking (none at any packet size), back pressure (present at all packet sizes) and aggregated flow control frames.] The Brocade VDX 6740 demonstrated 100% aggregated forwarding rate as a percentage of line rate during congestion conditions. The Brocade VDX 6740 was configured with two groups of four ports, one group of 10GbE and the other of 40GbE, to measure congestion performance at 10GbE and 40GbE line rate. For the 10GbE and 40GbE groups, a single port was flooded with 150% of line rate traffic while Ixia test gear measured packet loss, Head of Line (HOL) blocking and back pressure or pause frames. The Brocade VDX 6740 did not show HOL blocking behavior, which means that as the 10 and 40GbE ports became congested, the congestion did not impact the performance of other ports, assuring reliable operation during congestion conditions. As with most ToR switches, the Brocade VDX 6740 did use back pressure, as the Ixia test gear detected flow control frames. The Brocade VDX 6740 demonstrated zero packet loss across all packet sizes while Ixia test gear transmitted IP multicast traffic to the VDX. IP multicast latencies ranged from a low of 733 ns for 64-byte packets to a high of 8101 ns for 9216-byte packets. These test results are very competitive with other IP multicast forwarding measurements we have observed at the Ixia isimcity lab during the Lippis/Ixia tests. 8 Lippis Enterprises, Inc Evaluation conducted at Ixia s isimcity Santa Clara Lab on Ixia test equipment
9 [Charts: Brocade VDX 6740 RFC 3918 IP multicast test, aggregated throughput rate (%) and aggregated average latency (ns) per packet size. Chart: Brocade VDX 6740 IxCloud performance test, average latency (ns) per traffic type at 50% through 100% aggregate load, for HTTP at 10GbE and 40GbE, NS 10GbE, NS 40GbE, EW iSCSI, EW DB and EW MS Exchange traffic.] The Brocade VDX 6740 performed flawlessly over the six Lippis Cloud Performance iterations. Not a single packet was dropped as the mix of east-west and north-south traffic increased in load from 50% to 100% of link capacity. Average latency varied across protocol/traffic type, but within each protocol/traffic type it was stubbornly consistent as the aggregate traffic load was increased. The difference in latency measurements between 50% and 100% load across protocols was 2439 ns, 47 ns, 2992 ns, 68 ns, 36 ns, 4126 ns and 33 ns, respectively, for HTTP at 10GbE, HTTP at 40GbE, YouTube at 10GbE, YouTube at 40GbE, iSCSI, Database and Microsoft Exchange. The VDX 6740 was one of the only ToRs to perform at zero packet loss at 100% load! 9 Lippis Enterprises, Inc Evaluation conducted at Ixia s isimcity Santa Clara Lab on Ixia test equipment
10 [Table: Brocade VDX 6740 power consumption test. WattsATIS per 10GbE port: 2.6; 3-year cost/WattsATIS per 10GbE: $9.39; 3-year energy cost as a % of list price: 1.67%; TEER value: 368; cooling: front to back, reversible; power cost/10GbE/year: $3.13; list price: $35,999.] The Brocade VDX 6740 represents a new breed of cloud network ToR switches with power efficiency as a core value. Its WattsATIS/port is 2.6 Watts and its TEER value is 368; remember, high TEER values are better than low. The Brocade VDX 6740 was equipped with two power supplies per industry standard. Considering its Watts/port and Cost/Watts/year/10GbE, the Brocade VDX 6740 is one of the lowest power consuming ToR switches on the market. The Brocade VDX 6740 power cost per 10GbE is calculated at $3.13 per year, and the three-year cost to power it represents roughly 1.67% of its list price. Keeping with data center best practices, its cooling fans flow air front to back. 10 Lippis Enterprises, Inc Evaluation conducted at Ixia s isimcity Santa Clara Lab on Ixia test equipment
11 Discussion New Ethernet fabric based architectures are emerging to support highly virtualized data center environments and the building of cloud infrastructures, allowing IT organizations to be more agile and responsive to customers. Ethernet fabrics will transition data center networking from inefficient and cumbersome Spanning Tree Protocol (STP) networks to high performance, flat networks built for the east-west traffic demanded by server virtualization and highly clustered applications. Rather than STP, the Brocade VDX series utilizes TRILL (Transparent Interconnection of Lots of Links) and a hardware-based Inter-Switch Link (ISL) trunking algorithm, enabling all links to be active. This feature was not tested during the fall Lippis/Ixia test. The Brocade VDX 6740 with VCS technology offers up to 64 wire-speed 10GbE ports of L2 switching, or 48 wire-speed 10GbE ports plus four 40GbE ports, in a 1 RU footprint. Optimized for highly virtualized data center environments, the Brocade VDX 6740 provides line-rate, high-bandwidth switching, filtering and traffic queuing without delaying data, with large data-center-grade buffers to keep traffic moving. This claim was verified in the Lippis/Ixia RFC 2544 latency test, where average latency across all packet sizes was measured between 790 and 1122 ns for L2 and L3 forwarding. Further, there was minimal delay variation, between 4 and 9 ns, across all packet sizes, demonstrating the switch architecture's ability to forward packets consistently and validating that the VDX will perform well in highly dense virtualized environments and in shared storage and LAN deployments. In addition, the VDX demonstrated 100% throughput at all packet sizes in the latency test as well as across a wide range of protocols in the IxCloud test. Redundant power and fans, along with various high availability features, ensure that the VDX is always available and reliable for data center operations. Brocade delivers an excellent ToR switch in its new VDX 6740. 11 Lippis Enterprises, Inc Evaluation conducted at Ixia s isimcity Santa Clara Lab on Ixia test equipment
12 ToR Switches RFC 2544 Latency Test - Average Latency. All Switches Performed at Zero Frame Loss. [Chart and table: average latency in ns per packet size (64 to 9,216 bytes) for Layer 2 and Layer 3 forwarding, cut-through mode testing, across the Lippis/Ixia test rounds, including the Brocade VDX 6740 and IBM RackSwitch; smaller values are better.] The 7124SX, 7150S-24, Brocade VDX 6740, Summit and IBM RackSwitch G8124E switches were configured and tested via the cut-through test method, while the Dell/Force10 was configured and tested via the store-and-forward method. This is due to the fact that some switches are store-and-forward devices while others are cut-through devices. During store-and-forward testing, the latency of packet serialization delay (based on packet size) is removed from the reported latency number by the test equipment. Therefore, to compare cut-through to store-and-forward latency measurements, packet serialization delay needs to be added to the store-and-forward latency number. For example, for a store-and-forward latency number of 800 ns for a 1,518-byte packet, the additional latency of 1240 ns (the serialization delay of a 1518-byte packet at 10Gbps) is required to be added to the store-and-forward measurement. 12 Lippis Enterprises, Inc Evaluation conducted at Ixia s isimcity Santa Clara Lab on Ixia test equipment
13 This difference can be significant. Note that other device-specific factors can impact latency too. This makes comparisons between the two testing methodologies difficult. As with the 24-port ToR switches, we show average latency and average delay variation across all packet sizes for layer 2 and layer 3 forwarding. Measurements were taken from ports that were far away from each other to demonstrate the consistency of performance across the single-chip design. We separate cut-through and store-and-forward results. The IBM RackSwitch and the Extreme offer nearly identical low layer 2 latency, with the IBM RackSwitch offering lower latency across all packet sizes except 128 bytes and the Extreme adding approximately 140 ns at packet sizes from 1,024 to 9,216 bytes. But the newcomer to this group, the Brocade VDX 6740, offers the fastest forwarding in this category for both layer 2 and layer 3 across all packet sizes; the VDX 6740 is faster than all of the above cut-through ToR switches. For layer 3 forwarding, the difference between the IBM RackSwitch and 7050S-64 latency is measured in the tens of ns. At the lower packet sizes between 64 and 512 bytes, the Extreme, IBM RackSwitch and 7050S-64 latency figures are within tens of ns of each other, but at the higher packet sizes between 1,024 and 9,216 bytes the Extreme adds approximately 100 ns of latency relative to the IBM RackSwitch and 7050S-64 measurements. 13 Lippis Enterprises, Inc Evaluation conducted at Ixia s isimcity Santa Clara Lab on Ixia test equipment
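To make the adjustment concrete, the sketch below (not from the report) computes serialization delay from frame size and link speed and adds it back onto a store-and-forward measurement. Note that the bare frame-bit arithmetic yields roughly 1,214 ns for a 1,518-byte frame at 10 Gbps, slightly below the roughly 1,240 ns quoted above, which presumably reflects rounding or framing overhead.

# A minimal sketch, assuming a simple bits-divided-by-rate model of serialization delay.
def serialization_delay_ns(frame_bytes, link_gbps=10.0):
    """Time to clock a frame's bits onto the wire, in nanoseconds."""
    return frame_bytes * 8 / link_gbps  # bits divided by Gbit/s gives ns

def cut_through_equivalent_ns(store_and_forward_ns, frame_bytes, link_gbps=10.0):
    """Store-and-forward latency with serialization delay added back for comparison."""
    return store_and_forward_ns + serialization_delay_ns(frame_bytes, link_gbps)

print(serialization_delay_ns(1518))           # ~1214 ns for a 1,518-byte frame at 10 Gbps
print(cut_through_equivalent_ns(800, 1518))   # the 800 ns example above becomes ~2014 ns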
14 ToR Switches RFC 2544 Latency Test - Average Latency. All Switches Performed at Zero Frame Loss. [Chart and table: average latency in ns per packet size (64 to 9,216 bytes) for Layer 2 and Layer 3 forwarding, store-and-forward mode testing, across the Lippis/Ixia test rounds; smaller values are better.] New to this category is the Alcatel-Lucent, which delivered the lowest latency for layer 2 forwarding of all products in this category of store-and-forward ToR switches. The Dell/Force10 was approximately 10 ns faster than the X40 at layer 3 forwarding across nearly all packet sizes, which is within the average delay variation range, placing both switches essentially equal in layer 3 forwarding. 14 Lippis Enterprises, Inc Evaluation conducted at Ixia s isimcity Santa Clara Lab on Ixia test equipment
15 ToR Switches RFC 2544 Latency Test - Average Delay Variation. All Switches Performed at Zero Frame Loss. [Chart and table: average delay variation in ns per packet size (64 to 9,216 bytes) for Layer 2 and Layer 3 forwarding, cut-through mode testing, across the Lippis/Ixia test rounds, including the Brocade VDX 6740 and IBM RackSwitch; smaller values are better.] As with the 24-port ToR switches, the 48-port and above ToR devices show only slight differences in average delay variation between the Brocade VDX 6740, the IBM RackSwitch and the other cut-through switches, thus proving consistent latency under heavy load at zero packet loss for Layer 2 and Layer 3 forwarding. Differences do exist in average latency between suppliers, as detailed above. Just as with the 24-port ToR switches, the range of average delay variation is 5 ns to 10 ns. 15 Lippis Enterprises, Inc Evaluation conducted at Ixia s isimcity Santa Clara Lab on Ixia test equipment
16 ToR Switches RFC 2544 Latency Test - Average Delay Variation. All Switches Performed at Zero Frame Loss. [Chart and table: average delay variation in ns per packet size (64 to 9,216 bytes) for Layer 2 and Layer 3 forwarding, store-and-forward mode testing, across the Lippis/Ixia test rounds; smaller values are better.] The latency of the 48-port and above ToR devices tested in store-and-forward mode shows only a slight difference in average delay variation between the Alcatel-Lucent and the Dell/Force10. All switches demonstrate consistent latency under heavy load at zero packet loss for Layer 2 and Layer 3 forwarding, with a range of average delay variation between 5 ns and 10 ns. 16 Lippis Enterprises, Inc Evaluation conducted at Ixia s isimcity Santa Clara Lab on Ixia test equipment
17 ToR Switches RFC 2544 Throughput Test. [Chart and table: throughput as % of line rate for Layer 2 and Layer 3 forwarding per packet size (64 to 9,216 bytes) for the ToR switches evaluated, including Extreme Networks, Alcatel-Lucent, IBM RackSwitch and the Brocade VDX 6740, across the Lippis/Ixia test rounds; all entries are 100%.] As expected, all switches are able to forward L2 and L3 packets of all sizes at 100% of line rate with zero packet loss, proving that these switches are high-performance, wire-speed devices. 17 Lippis Enterprises, Inc Evaluation conducted at Ixia s isimcity Santa Clara Lab on Ixia test equipment
18 ToR Switches RFC 2889 Congestion Test, 150% of line rate into a single 10GbE port. [Chart and table: aggregated forwarding rate as % of line rate, head-of-line blocking and back pressure per packet size (64 to 9,216 bytes) for Layer 2 and Layer 3 forwarding; AFR% = aggregated forwarding rate (% line rate), HOL = head-of-line blocking, BP = back pressure. All switches shown forwarded at 100% with no HOL blocking and with back pressure present.] Information was recorded and is available for head-of-line blocking, back pressure and aggregated flow control frames. The Brocade VDX 6740, IBM RackSwitch G8164 and the other switches in this group performed as expected, that is, no HOL blocking during the L2 and L3 congestion tests, assuring that a congested port does not impact other ports, and thus the network; i.e., congestion is contained. In addition, these switches delivered 100% aggregated forwarding rate by implementing back pressure, or signaling the Ixia test equipment with control frames to slow down the rate of packets entering the congested port, a normal and best practice for ToR switches. 18 Lippis Enterprises, Inc Evaluation conducted at Ixia s isimcity Santa Clara Lab on Ixia test equipment
19 While not evident in the above graphic, one switch uniquely indicated to the Ixia test gear that it was using back pressure even though no flow control frames were detected; in fact there were none. Ixia and other test equipment calculate back pressure per RFC 2889, which states that if the total number of received frames on the congested port surpasses the number of transmitted frames at the MOL (Maximum Offered Load) rate, then back pressure is present. Thanks to that switch's generous and dynamic buffer allocation, it can absorb more packets than the MOL; therefore Ixia, or any test equipment, calculates and reports back pressure, but in reality this is an anomaly of the RFC testing method and not of the switch. The switch is designed with a dynamic buffer pool allocation such that during a microburst of traffic, as during the RFC 2889 congestion test when multiple traffic sources are destined to the same port, packets are buffered in packet memory. Unlike other architectures that have fixed per-port packet memory, it uses Dynamic Buffer Allocation (DBA) to allocate packet memory to ports that need it. Under congestion, packets are buffered in a shared packet memory of 9 MBytes, and DBA can allocate up to 5 MB of packet memory to a single port for lossless forwarding, as observed during this RFC 2889 congestion test. 19 Lippis Enterprises, Inc Evaluation conducted at Ixia s isimcity Santa Clara Lab on Ixia test equipment
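For clarity, the two decision rules the test equipment applies, as described above and in the Test Methodology section, can be written out as below. This is only a sketch of the criteria as stated in this report, not Ixia's implementation.

# A sketch of the two RFC 2889 decision rules described above.
def hol_blocking_present(frame_loss_on_uncongested_port):
    """Head-of-line blocking: the uncongested port loses frames."""
    return frame_loss_on_uncongested_port > 0

def back_pressure_present(frames_received_on_congested_port, frames_at_max_offered_load,
                          frame_loss_on_congested_port):
    """Back pressure: no loss on the congested port, yet it accepts more frames
    than the 100% maximum offered load (MOL) accounts for."""
    return (frame_loss_on_congested_port == 0 and
            frames_received_on_congested_port > frames_at_max_offered_load)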
20 ToR Switches RFC 2889 Congestion Test, 150% of line rate into a single 40GbE port. [Chart and table: aggregated forwarding rate as % of line rate, head-of-line blocking and back pressure per packet size (64 to 9,216 bytes) for Layer 2 and Layer 3 forwarding; AFR% = aggregated forwarding rate (% line rate), HOL = head-of-line blocking, BP = back pressure. All switches shown forwarded at 100% with no HOL blocking and with back pressure present.] Information was recorded and is available for head-of-line blocking, back pressure and aggregated flow control frames. In the fall of 2011, a 40GbE congestion test was added to the Lippis/Ixia industry test; the following are the 40GbE test results. The X40, Brocade VDX 6740, IBM RackSwitch G8164 and the other 40GbE-capable switches delivered 100% throughput as a percentage of line rate during congestion conditions thanks to back pressure or flow control. None of these switches exhibited HOL blocking. These switches offer 40GbE congestion performance in line with their 10GbE ports, a major plus. 20 Lippis Enterprises, Inc Evaluation conducted at Ixia s isimcity Santa Clara Lab on Ixia test equipment
21 ToR Switches RFC 3918 IP Multicast Test. [Chart and table: throughput as % of line rate and aggregated average latency in ns per packet size (64 to 9,216 bytes) for the IBM RackSwitch, Brocade VDX 6740 and the other switches tested; smaller latency values are better. **The Brocade VDX 6740's smallest frame size was 70 bytes, and its 9216-byte packet was 9018 bytes.] The Brocade VDX 6740, IBM RackSwitch and the other switches in this group performed as expected in the RFC 3918 IP multicast test, delivering 100% throughput with zero packet loss. In terms of IP multicast average latency, the Extreme offered the lowest multicast latency at packet sizes between 256 and 9,216 bytes. The IBM RackSwitch measured approximately 100 ns more than the Extreme, while another switch measured approximately 100 ns more than the IBM RackSwitch. The Brocade VDX 6740 showed greater latency than the other ToRs at the higher packet sizes, starting at 1,024 bytes. The Brocade VDX 6740, IBM and the other ToR switches in this group can be configured in a combination of 10GbE ports plus 2 or 4 40GbE uplink ports, or 64 10GbE ports. 21 Lippis Enterprises, Inc Evaluation conducted at Ixia s isimcity Santa Clara Lab on Ixia test equipment
22 ToR Switches RFC 3918 IP Multicast Test - Cut-Through Mode. [Chart and table: throughput as % of line rate and aggregated average latency in ns per packet size (64 to 9,216 bytes) for Layer 2 and Layer 3; smaller latency values are better.] The two switches in this group, including the Dell/Force10, were tested in cut-through mode and performed as expected in the RFC 3918 IP multicast test, delivering 100% throughput with zero packet loss. In terms of IP multicast average latency, one delivered lower latencies than the other at all packet sizes except 64 bytes, and at the higher packet range, which is the most important, it offers more than 300 ns faster forwarding. 22 Lippis Enterprises, Inc Evaluation conducted at Ixia s isimcity Santa Clara Lab on Ixia test equipment
23 Cloud Simulation ToR Switches - Cut-Through Mode. Zero Packet Loss: Latency Measured in ns. [Chart and table: average latency in ns per traffic type (EW Database-to-Server, EW Server-to-Database, EW HTTP, EW iSCSI Server-to-Storage, EW iSCSI Storage-to-Server, NS Client-to-Server, NS Server-to-Client) for the IBM RackSwitch and the other cut-through switches tested; smaller values are better.] These switches, including the IBM RackSwitch, delivered 100% throughput with zero packet loss and nanosecond latency during the cloud simulation test. Their cloud simulation latency signatures vary, with the IBM RackSwitch and one other switch measuring the lowest latency and being nearly identical. Others showed latency spikes for the EW Database-to-Server, EW HTTP and EW iSCSI Storage-to-Server cloud protocols. 23 Lippis Enterprises, Inc Evaluation conducted at Ixia s isimcity Santa Clara Lab on Ixia test equipment
24 Brocade VDX 6740 IxCloud Performance Test, Latency Measured in ns. [Chart and table: average latency in ns per traffic type (HTTP at 10GbE and 40GbE, NS 10GbE, NS 40GbE, EW iSCSI, EW DB, EW MS Exchange) at aggregate traffic loads of 50%, 60%, 70%, 80%, 90% and 100%.] In the fall of 2013 we changed the IxCloud test to demonstrate how a switch responds to an increasing load of both east-west and north-south traffic. The Brocade VDX 6740 was the first ToR switch to undergo this test, and it performed flawlessly over the six Lippis Cloud Performance iterations. Not a single packet was dropped as the mix of east-west and north-south traffic increased in load from 50% to 100% of link capacity. Average latency varied across protocol/traffic type, but within each protocol/traffic type it was stubbornly consistent as the aggregate traffic load was increased. The difference in latency measurements between 50% and 100% load across protocols was 2439 ns, 47 ns, 2992 ns, 68 ns, 36 ns, 4126 ns and 33 ns, respectively, for HTTP at 10GbE, HTTP at 40GbE, YouTube at 10GbE, YouTube at 40GbE, iSCSI, Database and Microsoft Exchange. Based upon our experience with this test in other settings, we found that the VDX 6740 was one of the only ToRs to perform at zero packet loss at 100% load! 24 Lippis Enterprises, Inc Evaluation conducted at Ixia s isimcity Santa Clara Lab on Ixia test equipment
25 Cloud Simulation ToR Switches - Store-and-Forward Mode. Zero Packet Loss: Latency Measured in ns. [Chart and table: average latency in ns per traffic type (EW Database-to-Server, EW Server-to-Database, EW HTTP, EW iSCSI Server-to-Storage, EW iSCSI Storage-to-Server, NS Client-to-Server, NS Server-to-Client) for the OmniSwitch; smaller values are better.] The OmniSwitch delivered 100% throughput with zero packet loss and nanosecond latency during the cloud simulation test. It offers the lowest overall latency in cloud simulation measured thus far in store-and-forward mode. 25 Lippis Enterprises, Inc Evaluation conducted at Ixia s isimcity Santa Clara Lab on Ixia test equipment
26 Power Consumption ToR Switches. [Charts: WattsATIS per 10GbE port, 3-year cost/WattsATIS per 10GbE, TEER (larger values are better) and 3-year energy cost as a % of list price for the ToR switches tested across the Lippis/Ixia test rounds, including Force10, IBM RackSwitch and the Brocade VDX 6740.] The 7050S-64 and Brocade VDX 6740 are the standouts in this group of ToR switches, with low WattsATIS of 2.3 and 2.6 and TEER values of 404 and 368, respectively. Remember, the higher the TEER value the better. These power consumption measurements are the lowest recorded, making the 7050S-64 and Brocade VDX 6740 the most power efficient 48-port-plus ToR switches measured to date. A close second is a near tie between the IBM RackSwitch and two other switches at 3.0, 3.2 and 3.4 WattsATIS and 314, 295 and 280 TEER values, respectively. All are highly energy efficient ToR switches with high TEER values and low WattsATIS. The IBM RackSwitch is available in a reversible airflow model providing either front-to-back or back-to-front cooling. While there are differences in the three-year energy cost as a percentage of list price, this is due to the fact that these products were configured differently during the Lippis/Ixia test: the Brocade VDX 6740, IBM RackSwitch and one other switch were configured with 48 10GbE ports plus four 40GbE ports, while others were configured with 48 10GbE ports only, with 48 10GbE plus four 40GbE ports, or with 40 10GbE and six 40GbE ports. These variations in configuration result in different pricing. [Table: Company/Product, WattsATIS/port, Cost/WattsATIS/port, 3-year energy cost per 10GbE port, 3-year energy cost as a % of list price, TEER and Cooling. Rows: 4.0, $4.91, Front-to-Back; 2.3, $2.85, Front-to-Back**; 3.2, $3.91, Front-to-Back; OmniSwitch: 3.4, $4.12, Front-to-Back; IBM RackSwitch: 3.0, $3.66, Front-to-Back**; Brocade VDX 6740: 2.6, $3.13, Front-to-Back**. ** Supports reversible airflow, front-to-back and back-to-front.] 26 Lippis Enterprises, Inc Evaluation conducted at Ixia s isimcity Santa Clara Lab on Ixia test equipment
27 The Lippis Report Test Methodology To test products, each supplier brought its engineers to configure its equipment for test. An Ixia test engineer was available to assist each supplier through the test methodologies and review the test data. After testing was concluded, each supplier's engineer signed off on the resulting test data. We call the following set of tests The Lippis Test. The test methodologies included: Throughput Performance: Throughput, packet loss and delay for L2 unicast, L3 unicast and L3 multicast traffic were measured for packet sizes of 64, 128, 256, 512, 1024, 1280, 1518, 2176 and 9216 bytes. In addition, a special cloud computing simulation throughput test consisting of a mix of north-south plus east-west traffic was conducted. Ixia's IxNetwork RFC 2544 Throughput/Latency quick test was used to perform all but the multicast tests. Ixia's IxAutomate RFC 3918 Throughput No Drop Rate test was used for the multicast test. Latency: Latency was measured for all the above packet sizes plus the special mix of north-south and east-west traffic. Two latency tests were conducted: 1) latency was measured as packets flow between two ports on different modules for modular switches, and 2) between far away ports (port pairing) for ToR switches to demonstrate latency consistency across the forwarding engine chip. Latency test port configuration was via port pairing across the entire device rather than side-by-side. This meant that for a switch with N ports, port 1 was paired with port (N/2)+1, port 2 with port (N/2)+2, and so on. Ixia's IxNetwork RFC 2544 Throughput/Latency quick test was used for validation. Jitter: Jitter statistics were measured during the above throughput and latency tests using Ixia's IxNetwork RFC 2544 Throughput/Latency quick test. Congestion Control Test: Ixia's IxNetwork RFC 2889 Congestion test was used to test both L2 and L3 packets. The objective of the Congestion Control Test is to determine how a Device Under Test (DUT) handles congestion. Does the device implement congestion control, and does congestion on one port affect an uncongested port? This procedure determines whether HOL blocking and/or back pressure are present. If there is frame loss at the uncongested port, HOL blocking is present: the DUT cannot forward the offered traffic to the congested port and, as a result, is also losing frames destined to the uncongested port. If there is no frame loss on the congested port and the port receives more packets than the maximum offered load of 100%, then back pressure is present. RFC 2544 Throughput/Latency Test. Test Objective: This test determines the processing overhead of the DUT required to forward frames and the maximum rate of receiving and forwarding frames without frame loss. Test Methodology: The test starts by sending frames at a specified rate, usually the maximum theoretical rate of the port, while frame loss is monitored. Frames are sent from and received at all ports on the DUT, and the transmission and reception rates are recorded. A binary, step or combo search algorithm is used to identify the maximum rate at which no frame loss is experienced. To determine latency, frames are transmitted for a fixed duration. Frames are tagged once each second during half of the transmission duration, and the tagged frames are transmitted. The receiving and transmitting timestamps on the tagged frames are compared. The difference between the two timestamps is the latency.
27 Lippis Enterprises, Inc Evaluation conducted at Ixia s isimcity Santa Clara Lab on Ixia test equipment
28 The test uses a one-to-one traffic mapping. For store-and-forward DUT switches, latency is defined in RFC 1242 as the time interval starting when the last bit of the input frame reaches the input port and ending when the first bit of the output frame is seen on the output port. Thus latency depends not only on link speed but on processing time as well. Results: This test captures the following data: total number of frames transmitted from all ports, total number of frames received on all ports, percentage of lost frames for each frame size, plus latency, jitter, sequence errors and data integrity errors. The following graphic depicts the RFC 2544 throughput performance and latency test conducted at the isimcity lab for each product. RFC 2889 Congestion Control Test. Test Objective: The objective of the Congestion Control Test is to determine how a DUT handles congestion. Does the device implement congestion control, and does congestion on one port affect an uncongested port? This procedure determines whether HOL blocking and/or back pressure are present. If there is frame loss at the uncongested port, HOL blocking is present: the DUT cannot forward the offered traffic to the congested port and, as a result, is also losing frames destined to the uncongested port. If there is no frame loss on the congested port and the port receives more packets than the maximum offered load of 100%, then back pressure is present. Test Methodology: If the ports are set to half duplex, collisions should be detected on the transmitting interfaces. If the ports are set to full duplex and flow control is enabled, flow control frames should be detected. This test uses a multiple of four ports with the same MOL. The custom port group mapping is formed of two ports, A and B, transmitting to a third port, C (the congested interface), while port A also transmits to port D (the uncongested interface). Test Results: This test captures the following data for each frame size of each trial: intended load, offered load, number of transmitted frames, number of received frames, frame loss, number of collisions and number of flow control frames. The following graphic depicts the RFC 2889 Congestion Control test (Ports A, B, C and D) as conducted at the isimcity lab for each product. RFC 3918 IP Multicast Throughput No Drop Rate Test. Test Objective: This test determines the maximum throughput the DUT can support while receiving and transmitting multicast traffic. The input includes protocol parameters (Internet Group Management Protocol, or IGMP, and Protocol Independent Multicast, or PIM), receiver parameters (group addressing), source parameters (emulated PIM routers), frame sizes, initial line rate and search type. Test Methodology: This test calculates the maximum DUT throughput for IP Multicast traffic using either a binary or a linear search, and collects Latency and Data Integrity statistics. 28 Lippis Enterprises, Inc Evaluation conducted at Ixia s isimcity Santa Clara Lab on Ixia test equipment
29 The test is patterned after the ATSS Throughput test; however, this test uses multicast traffic. A one-to-many traffic mapping is used, with a minimum of two ports required. If choosing OSPF (Open Shortest Path First) or IS-IS (Intermediate System to Intermediate System) as the IGP (Interior Gateway Protocol) routing protocol, the transmit port first establishes an IGP routing protocol session and a PIM session with the DUT. IGMP joins are then established for each group, on each receive port. Once protocol sessions are established, traffic begins to transmit into the DUT and a binary or linear search for maximum throughput begins. If choosing none as the IGP routing protocol, the transmit port does not emulate routers and does not export routes to virtual sources; the source addresses are the IP addresses configured on the Tx ports in the data frames. Once the routes are configured, traffic begins to transmit into the DUT and a binary or linear search for maximum throughput begins. Test Results: This test captures the following data: maximum throughput per port, frame loss per multicast group, minimum/maximum/average latency per multicast group and data errors per port. The following graphic depicts the RFC 3918 IP Multicast Throughput No Drop Rate test as conducted at the isimcity lab for each product. Power Consumption Test. Port Power Consumption: Ixia's IxGreen within the IxAutomate test suite was used to test power consumption at the port level under various loads or line rates. Test Objective: This test determines the Energy Consumption Ratio (ECR) and the ATIS (Alliance for Telecommunications Industry Solutions) TEER during L2/L3 forwarding. TEER is a measure of network-element efficiency quantifying a network component's ratio of work performed to energy consumed. Test Methodology: This test performs a calibration test to determine the no-loss throughput of the DUT. Once the maximum throughput is determined, the test runs in automatic or manual mode to determine the L2/L3 forwarding performance while concurrently taking power, current and voltage readings from the power device. Upon completion of the test, the data plane performance and Green (ECR and TEER) measurements are calculated. Engineers followed the methodology prescribed by two ATIS standards documents: Energy Efficiency for Telecommunication Equipment: Methodology for Measuring and Reporting for Router and Ethernet Switch Products, and Energy Efficiency for Telecommunication Equipment: Methodology for Measuring and Reporting - General Requirements. The power consumption of each product was measured at various load points: idle 0%, 30% and 100%. The final power consumption was reported as a weighted average calculated using the formula: WATIS = 0.1*(Power draw at 0% load) + 0.8*(Power draw at 30% load) + 0.1*(Power draw at 100% load). All measurements were taken over a period of 60 seconds at each load level and repeated three times to ensure result repeatability. The final WATIS results were reported as the weighted average divided by the total number of ports per switch, to arrive at a Watts-per-port measurement per the ATIS methodology, labeled here as WATTS ATIS. Test Results: The L2/L3 performance results include a measurement of WATIS and the DUT TEER value. Note that a larger TEER value is better, as it represents more work done at less energy consumption. In the graphics throughout this report, we use WATTS ATIS to identify the ATIS power consumption measurement on a per-port basis.
29 Lippis Enterprises, Inc Evaluation conducted at Ixia s isimcity Santa Clara Lab on Ixia test equipment
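The weighted-average calculation above can be sketched in a few lines. The chassis draws and port count below are illustrative values only, not measurements from this report.

# A small sketch of the ATIS weighted-average power calculation described above.
def watts_atis(draw_at_idle_w, draw_at_30pct_w, draw_at_100pct_w):
    """Weighted average: 10% idle + 80% at 30% load + 10% at 100% load."""
    return 0.1 * draw_at_idle_w + 0.8 * draw_at_30pct_w + 0.1 * draw_at_100pct_w

def watts_atis_per_port(draw_at_idle_w, draw_at_30pct_w, draw_at_100pct_w, port_count):
    """Divide the weighted chassis draw by the port count, per the methodology."""
    return watts_atis(draw_at_idle_w, draw_at_30pct_w, draw_at_100pct_w) / port_count

print(watts_atis_per_port(120.0, 135.0, 150.0, 52))  # ~2.6 Watts per port for these example numbers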
30 With the WATTS ATIS we calculate a three-year energy cost based upon the following formula: Cost/Watts ATIS /3-Year = (WATTS ATIS /1000)*(3*365*24)*(0.1046)*(1.33), where WATTS ATIS is the ATIS weighted average power in Watts, 3*365*24 is the number of hours in three years, 0.1046 is the U.S. average retail cost (in US$ per kWh) of commercial grade power as of June 2010 per the Dept. of Energy Electric Power Monthly, and 1.33 is a factor to account for cooling at 33% of power costs. The following graphic depicts the per-port power consumption test as conducted at the isimcity lab for each product. Public Cloud Simulation Test. Test Objective: This test determines the traffic delivery performance of the DUT in forwarding a variety of north-south and east-west traffic in cloud computing applications. The input parameters include traffic types, traffic rate, frame sizes, offered traffic behavior and traffic mesh. Test Methodology: This test measures the throughput, latency, jitter and loss on a per-application traffic type basis across M sets of 8-port topologies. M is an integer and is proportional to the number of ports with which the DUT is populated. This test includes a mix of north-south traffic and east-west traffic, and each traffic type is configured for the following parameters: frame rate, frame size distribution, offered traffic load and traffic mesh. The following traffic types are used: web (HTTP), database-server, server-database, iSCSI storage-server, iSCSI server-storage, and client-server plus server-client. The north-south client-server traffic simulates Internet browsing, the database traffic simulates server-server lookup and data retrieval, while the storage traffic simulates IP-based storage requests and retrieval. When all traffic is transmitted, the throughput, latency, jitter and loss performance are measured on a per-traffic-type basis. Test Results: This test captures the following data: maximum throughput per traffic type, frame loss per traffic type, minimum/maximum/average latency per traffic type, minimum/maximum/average jitter per traffic type, data integrity errors per port and CRC errors per port. For this report, we show average latency on a per-traffic basis at zero frame loss. The following graphic depicts the Cloud Simulation test as conducted at the isimcity lab for each product. 40GbE Testing: For the test plan above, 24 40GbE ports were available for those DUTs that support 40GbE uplinks or modules. During this Lippis/Ixia test, 40GbE testing included latency, throughput, congestion, IP multicast, cloud simulation and power consumption tests. ToR switches with four 40GbE uplinks are supported in this test. 30 Lippis Enterprises, Inc Evaluation conducted at Ixia s isimcity Santa Clara Lab on Ixia test equipment
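The three-year energy cost formula above can be checked with a few lines of arithmetic. The sketch below is illustrative; plugging in the Brocade VDX 6740's reported 2.6 WattsATIS per port lands close to the roughly $3.13 per year and $9.39 over three years reported earlier, with the small difference attributable to rounding of the per-port wattage.

# A worked sketch of the three-year cost formula above, using the report's constants.
KWH_PRICE_USD = 0.1046   # U.S. average commercial power cost, June 2010, per the report
COOLING_FACTOR = 1.33    # power cost plus 33% for cooling

def three_year_cost_per_port(watts_atis_per_port):
    hours_in_three_years = 3 * 365 * 24
    return (watts_atis_per_port / 1000.0) * hours_in_three_years * KWH_PRICE_USD * COOLING_FACTOR

def yearly_cost_per_port(watts_atis_per_port):
    return three_year_cost_per_port(watts_atis_per_port) / 3

print(yearly_cost_per_port(2.6))       # ~$3.17 per 10GbE port per year
print(three_year_cost_per_port(2.6))   # ~$9.50 per 10GbE port over three years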
31 Virtualization Scale Calculation We observe the technical specifications of MAC address table size, /32 IP host route table size and ARP table size to calculate the largest number of VMs supported in a L2 domain. Each VM will consume a switch's logical resources, especially on a modular/core switch: one MAC entry, one /32 IP host route and one ARP entry. Further, each physical server may use two MAC/IP/ARP entries: one for management and one for IP storage and/or other uses. Therefore, the lowest common denominator of the three (MAC/IP/ARP) table sizes will determine the total number of VMs and physical machines that can reside within a L2 domain. If a data center switch has a 128K MAC table size, 32K IP host routes and an 8K ARP table, then it could support 8,000 VMs (Virtual Machines) plus Physical (Phy) servers. From here we utilize a VM:Phy consolidation ratio to determine the approximate maximum number of physical servers. 30:1 is a typical consolidation ratio today, but denser 12-core processors entering the market may increase this ratio to 60:1. With 8K total IP endpoints at a 30:1 VM:Phy ratio, this calculates to approximately 250 Phy servers and 7,750 VMs. 250 physical servers can be networked into approximately 12 ToR switches, assuming each server connects to 2 ToR switches for redundancy. If each ToR has 8x10G links to each core switch, that's 96 10GbE ports consumed on the core switch. So we can begin to see how the table size scalability of the switch determines how many of its 10GbE ports, for example, can be used, and therefore how large a cluster can be built. This calculation assumes the core is the L2/3 boundary, providing a L2 infrastructure for the 7,750 VMs. More generally, for a L2 switch positioned at the ToR, for example, the specification that matters for virtualization scale is the MAC table size. So if an IT leader purchases a L2 ToR switch, say switch A, and its MAC table size is 32K, then switch A can be positioned into a L2 domain supporting up to ~32K VMs. For a L3 core switch (aggregating ToRs), the specifications that matter are MAC table size, ARP table size and IP host route table size. The smallest of the three tables is the important and limiting virtualization scale number. If an IT leader purchases a L3 core switch, say switch B, and its IP host route table size is 8K and MAC table size is 128K, then this switch can support a VM scale of ~8K. If an IT architect deploys the L2 ToR switch A supporting 32K MAC addresses and connects it to core switch B, then the entire L2 virtualization domain scales to the lowest common denominator in the network, the 8K from switch B. 31 Lippis Enterprises, Inc Evaluation conducted at Ixia s isimcity Santa Clara Lab on Ixia test equipment
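The arithmetic above can be sketched in a few lines. This is illustrative only; the report rounds the result to approximately 250 physical servers and 7,750 VMs, while the integer arithmetic below gives 258 and 7,740, and the helper names are not terminology from the report.

# A sketch of the virtualization-scale calculation described above, using the report's example table sizes.
def l2_domain_endpoints(mac_table, ip_host_routes, arp_table):
    """The smallest of the three tables caps the number of VM plus physical endpoints."""
    return min(mac_table, ip_host_routes, arp_table)

def split_vms_and_servers(total_endpoints, vms_per_server=30):
    """Approximate physical server and VM counts for a given consolidation ratio."""
    servers = total_endpoints // (vms_per_server + 1)
    return servers, servers * vms_per_server

endpoints = l2_domain_endpoints(mac_table=128000, ip_host_routes=32000, arp_table=8000)
servers, vms = split_vms_and_servers(endpoints, vms_per_server=30)
print(endpoints, servers, vms)  # 8000 endpoints -> about 258 servers and 7,740 VMs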
32 Terms of Use This document is provided to help you understand whether a given product, technology or service merits additional investigation for your particular needs. Any decision to purchase a product must be based on your own assessment of suitability based on your needs. The document should never be used as a substitute for advice from a qualified IT or business professional. This evaluation was focused on illustrating specific features and/or performance of the product(s) and was conducted under controlled, laboratory conditions. Certain tests may have been tailored to reflect performance under ideal conditions; performance may vary under real-world conditions. Users should run tests based on their own real-world scenarios to validate performance for their own networks. Reasonable efforts were made to ensure the accuracy of the data contained herein but errors and/or oversights can occur. The test/ audit documented herein may also rely on various test tools, the accuracy of which is beyond our control. Furthermore, the document relies on certain representations by the vendors that are beyond our control to verify. Among these is that the software/ hardware tested is production or production track and is, or will be, available in equivalent or better form to commercial customers. Accordingly, this document is provided as is, and Lippis Enterprises, Inc. (Lippis), gives no warranty, representation or undertaking, whether express or implied, and accepts no legal responsibility, whether direct or indirect, for the accuracy, completeness, usefulness or suitability of any information contained herein. By reviewing this document, you agree that your use of any information contained herein is at your own risk, and you accept all risks and responsibility for losses, damages, costs and other consequences resulting directly or indirectly from any information or material available on it. Lippis is not responsible for, and you agree to hold Lippis and its related affiliates harmless from any loss, harm, injury or damage resulting from or arising out of your use of or reliance on any of the information provided herein. Lippis makes no claim as to whether any product or company described herein is suitable for investment. You should obtain your own independent professional advice, whether legal, accounting or otherwise, before proceeding with any investment or project related to any information, products or companies described herein. When foreign translations exist, the English document is considered authoritative. To assure accuracy, only use documents downloaded directly from No part of any document may be reproduced, in whole or in part, without the specific written permission of Lippis. All trademarks used in the document are owned by their respective owners. You agree not to use any trademark in or as the whole or part of your own trademarks in connection with any activities, products or services which are not ours, or in a manner which may be confusing, misleading or deceptive or in a manner that disparages us or our information, projects or developments. 32 Lippis Enterprises, Inc Evaluation conducted at Ixia s isimcity Santa Clara Lab on Ixia test equipment
About Nick Lippis

Nicholas J. Lippis III is a world-renowned authority on advanced IP networks, communications and their benefits to business objectives. He is the publisher of the Lippis Report, a resource for network and IT business decision makers to which over 35,000 executive IT business leaders subscribe. Lippis Report podcasts have been downloaded over 200,000 times; iTunes reports that listeners also download the Wall Street Journal's Money Matters, Business Week's Climbing the Ladder, The Economist and The Harvard Business Review's IdeaCast. He is also the co-founder and conference chair of the Open Networking User Group, which sponsors a bi-annual meeting of over 200 IT business leaders of large enterprises.

Mr. Lippis is currently working with clients to design their private and public virtualized data center cloud computing network architectures with open networking technologies to reap maximum business value and outcome. He has advised numerous Global 2000 firms on network architecture, design, implementation, vendor selection and budgeting, with clients including Barclays Bank, Eastman Kodak Company, Federal Deposit Insurance Corporation (FDIC), Hughes Aerospace, Liberty Mutual, Schering-Plough, Camp Dresser McKee, the state of Alaska, Microsoft, Kaiser Permanente, Sprint, Worldcom, Cisco Systems, Hewlett-Packard, IBM, Avaya and many others. He works exclusively with CIOs and their direct reports. Mr. Lippis possesses a unique perspective of market forces and trends occurring within the computer networking industry, derived from his experience with both supply- and demand-side clients.

Mr. Lippis received the prestigious Boston University College of Engineering Alumni award for advancing the profession. He has been named one of the top 40 most powerful and influential people in the networking industry by Network World. TechTarget, an industry on-line publication, has named him a network design guru, while Network Computing Magazine has called him a star IT guru.

Mr. Lippis founded Strategic Networks Consulting, Inc., a well-respected and influential computer networking industry consulting concern, which was purchased by Softbank/Ziff-Davis in He is a frequent keynote speaker at industry events and is widely quoted in the business and industry press. He serves on the Dean of Boston University's College of Engineering Board of Advisors as well as the advisory boards of many start-up venture firms. He delivered the commencement speech to Boston University College of Engineering graduates in Mr. Lippis received his Bachelor of Science in Electrical Engineering and his Master of Science in Systems Engineering from Boston University. His Master's thesis work included selected technical courses and advisors from the Massachusetts Institute of Technology on optical communications and computing.