Cloud-Based Apps Drive the Need for Frequency-Flexible Clock Generators in Converged Data Center Networks




Introduction

By Phil Callahan, Senior Marketing Manager, Timing Products, Silicon Labs

Skyrocketing network bandwidth demands driven by consumer mobile devices and cloud-based streaming services, such as Netflix, Hulu, YouTube, Spotify, Pandora, online gaming and others, are pushing Internet infrastructure suppliers to develop data center systems that support dramatically higher data rates, such as 10G, 40G and 100G. In addition, the increasing popularity of commercial cloud computing, which offers network-based computing and storage as a service, is further accelerating the demand for application-flexible, high-bandwidth networks in today's data centers.

Figure 1 illustrates the impact of these popular cloud-based streaming services on the growth of Internet traffic bandwidth. Cisco's Visual Networking Index (VNI) Forecast (June 2014) projects the following market trends:

- Cloud applications and services, such as Netflix, YouTube, Pandora and Spotify, will account for 90 percent of total mobile data traffic by 2018.
- Global network traffic will be three times larger in 2018 than in 2013, equivalent to streaming 33 billion DVDs per month, or 46 million DVDs per hour.
- By 2018, consumer online gaming traffic will be four times higher than it was in 2013.

Figure 1. Cisco VNI June 2014

Silicon Labs | Rev 1.0

Cloud-Based Apps Drive Network Convergence in the Data Center

To reliably deliver a Netflix video or a Spotify high-quality audio stream, service providers must be equipped with data center hardware that supports three primary networks, as shown in Figure 2:

- LAN/WAN networks commonly comprise 1 Gb, 10 Gb and/or 100 Gb Ethernet switches connected in a mesh switch fabric for the data center, and OTN (Optical Transport Networking) interconnects to the WAN. These networks deliver the content from the data center to the cloud and, ultimately, to the user.
- Compute networks comprise many server and switch blades interconnected using copper cables, PCB backplanes or optical links. These interconnects use a combination of 1 Gb and 10 Gb Ethernet, PCIe and, in some cases, InfiniBand. Network interfaces in compute networks must support not only high data rates but also very low latency, which is critical for streaming video and audio service quality.
- Storage networks are primarily based on Fibre Channel, 1 Gb or 10 Gb Ethernet switches and direct connections to storage subsystems using PCIe. These networks store considerable amounts of content, requiring multi-gigabit-capable protocols.

[Figure 2 block diagram: content providers (YouTube, Netflix, Hulu, Spotify, Pandora) reach users through the cloud/WAN via 10 GbE and 100 GbE over OTN and Ethernet over DWDM; within the converged data center, server and switch blades interconnect over Ethernet and 10/40 GbE backplanes, with Fibre Channel/FCoE links to storage.]

Figure 2. Data Center Network Overview

To meet the rapidly expanding Internet bandwidth demands of content providers, compute and storage networks for data centers must become flatter and more horizontally interconnected. Known as the converged data center, this flatter architecture is required to improve server-to-server and server-to-storage communication within the data center, which directly impacts latency and the quality of streaming services.
In addition to delivering latency performance advantages, the converged data center architecture is highly scalable and lends itself to software virtualization of compute server and storage hardware resources, supporting rapid changes in service bandwidth demands. Some vendors refer to this architecture as Software Defined Networking (SDN).

Traditional Clock Tree Designs for Converged Data Centers Are Too Complex

As data center compute and storage networks become horizontally interconnected, with multi-gigabit Ethernet, Fibre Channel and PCIe links embedded into pluggable, high-density blades, they place new demands on system engineers, especially clock tree designers. Designers must find clock tree solutions that support both increasing functional densities and the multitude of high-bandwidth network protocols while reducing PCB footprint, power and cost.

Let's consider a traditional clock tree design approach for a data center switch blade, as shown in Figure 3. Whether this blade is implemented on a PCB using multiple ICs or based primarily on a single system-on-a-chip (SoC) solution, the switch blade's primary function is to support simultaneous, high-bandwidth, low-latency communications between the LAN/WAN, compute server blades and storage devices. Data center switch blades support the consolidation of multi-gigabit, multi-protocol storage traffic into highly scalable networks. However, the traditional clock tree used to support data center switch blades is complicated (see Figure 3), requiring eight clock tree components:

- Three crystal oscillators (XOs)
- Three buffer ICs
- Two clock generator ICs

[Figure 3 block diagram: a data center switch blade with CPU/NPU (75 to 150 MHz, 1.8 V CML), security processor, management controller, multi-core processor, fabric management and I/O control, Ethernet MACs and memory controller (166.66... MHz and 50 MHz clocks), an x8 PCIe slot (100 MHz, 2.5 V HCSL), octal 10GbE/FCoE/GbE ports, and quad 10G lanes to the 10/40 GbE uplinks and 10 GbE backplane (156.25 MHz, 125 MHz and 161.1328125 MHz, 3.3 V LVPECL), clocked by a 156.25 MHz XO, a 125 MHz XO, a dual 161.1328125/156.25 MHz XO, plus buffer and clock generator ICs.]

Figure 3. Data Center Switch Blade Using a Traditional Clock Tree
Multi-Lane SerDes and PHY Reference Clocks

A major reason for clock tree complexity is that high-speed communications links fundamentally rely on multi-lane, multi-gigabit serializer/deserializers (SerDes) and physical layer devices (PHYs) for each network interface type. SerDes chips and PHYs are critical building blocks for data center switch blades. Depending on the network type (LAN/WAN, compute, storage), protocol (Ethernet, Fibre Channel, PCIe) and transmission medium (fiber optic cabling, copper cables or PCB backplanes), each multi-gigabit SerDes or PHY device requires a low-jitter reference clock, and many operate at different frequencies.

Due to protocol and physical media standard differences, these reference clocks are seldom integer-related. For example, the 161.1328125 MHz clock is fractionally related (by 66/64) to the 156.25 MHz clock. This fractional relationship makes the simultaneous generation of low-jitter clocks much more challenging, as fractional dividers must be used. Fractional dividers used in traditional clock generators produce significantly higher jitter than the integer dividers used in integer-only PLL clock generators, forcing designers to use more expensive, dedicated XOs to generate each unique frequency.

CPU, Memory and PCIe Clocks

While the jitter requirements of some ICs (such as SerDes and PHY reference clocks) may be very strict, other switch blade functions have less stringent requirements (100 MHz PCIe, variable 75 to 150 MHz CPU, and 166.66 MHz memory clocks). However, given the limited flexibility and integration level of traditional clocking solutions, clock tree designers have been forced to use multiple clock generators, crystal oscillators (XOs) and buffers to complete the clock tree.

Therefore, to meet increasing demands for higher network port density and bandwidth in data center switch blades, clock tree designers need clock generators that offer:

- Multiple low-phase-jitter reference clocks that are fully compliant with the stringent jitter performance specifications required by the dominant networking (1/10/100G Ethernet), storage (Fibre Channel, PCIe) and computing (PCIe, InfiniBand) standards. Generally, the jitter specifications range from about 1 ps RMS down to less than 300 fs RMS (integrated from 12 kHz to 20 MHz).
- Frequency flexibility to enable simultaneous generation of a wide range of integer- and fractionally-related clock frequencies while adhering to the stringent network, compute and storage clock jitter specifications. The ability to change frequencies on the fly, without affecting other outputs, is also highly desirable; for example, this enables speed-grading CPUs to meet different product cost and market needs.
- The highest level of integration, to provide significant reductions in PCB area, cost and component count and to maximize system port densities and cost per bit.
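The 66/64 relationship cited above matches the 64b/66b line-coding overhead used by 10G Ethernet: each 64-bit payload block gains a 2-bit sync header, scaling the reference clock by exactly 66/64. The arithmetic can be verified with Python's exact rational arithmetic (an illustrative check, not vendor code):

```python
from fractions import Fraction

# 10 GbE reference clock (in MHz) and the 64b/66b line-coding ratio
base = Fraction("156.25")
ratio = Fraction(66, 64)          # reduces to 33/32 -- not an integer

derived = base * ratio
print(float(derived))                      # 161.1328125
print(derived / base == Fraction(33, 32))  # True: no integer divider fits
```

Because 33/32 is not an integer, no integer-only divider chain can produce both clocks from one VCO; this is exactly why traditional designs fall back on a dedicated XO per frequency.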
A New Approach to Clock Tree Design for Converged Data Centers

In contrast to traditional clock generators, next-generation clocking solutions, such as Silicon Labs' Si5341/40 clock generator family, leverage fractional- and integer-frequency synthesis flexibility and higher levels of integration. This architectural approach delivers an efficient, cost-effective, single-chip solution that integrates all discrete timing functions into a single IC without sacrificing jitter performance. Silicon Labs' proprietary MultiSynth fractional divider technology is key to enabling the Si5341 clock generator to simultaneously generate any integer or fractional frequency up to 800 MHz on any output, with typical jitter below 150 fs.

As shown in Figure 4, the Si5341 uses a single, low-power VCO to drive five independent fractional dividers, which are connected via a non-blocking crosspoint switch to an array of 10 clock outputs. In the first stage of this architecture, the high-speed fractional-N divider seamlessly switches between the two closest integer divider values to produce an exact output clock frequency with 0 ppm frequency synthesis error. To eliminate the phase errors generated by this process, the second stage calculates the relative phase difference between the clock produced by the fractional-N divider and the desired output clock and dynamically adjusts the phase to match the ideal clock waveform. This novel approach makes it possible to generate any output clock frequency from 1 kHz to 800 MHz with 0 ppm error. The result is better than 100 fs RMS phase jitter performance (12 kHz to 20 MHz) in integer mode and less than 150 fs in fractional synthesis mode, which simultaneously generates both fractional- and integer-related clocks.
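The first-stage behavior described above — toggling between the two nearest integer divide values so the long-run average hits the exact fractional target — can be sketched with a first-order accumulator model. This is a generic fractional-N illustration under assumed values (the 13.6 GHz VCO and 87 + 1/25 divide value are hypothetical), not Silicon Labs' proprietary divider design:

```python
def fractional_divider(n_int, num, den, cycles):
    """First-order fractional-N divider model: on each output cycle,
    divide the VCO clock by n_int or n_int + 1, carrying the fractional
    remainder forward so the average divide value converges to
    n_int + num/den with zero long-term frequency error."""
    acc = 0
    vco_cycles = 0
    for _ in range(cycles):
        acc += num
        if acc >= den:            # accumulator overflow: divide by n_int + 1
            acc -= den
            vco_cycles += n_int + 1
        else:                     # otherwise: divide by n_int
            vco_cycles += n_int
    return vco_cycles / cycles    # average divide value over the run

# Hypothetical example: divide a 13.6 GHz VCO down to 156.25 MHz.
# The ideal divide value is 13600 / 156.25 = 87.04 = 87 + 1/25.
avg = fractional_divider(87, 1, 25, cycles=25_000)
print(avg)  # 87.04 -> exact target frequency on average
```

The instantaneous period still jumps between the two divider values, which is the deterministic jitter the second-stage phase adjustment described above is there to cancel.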
[Figure 4 block diagram: crystal (XA/XB) and clock inputs (IN0-IN2, FB_IN) feed input dividers (P0-P2, Pfb) and a PLL; the PLL output drives N dividers (Nn1/Nd1 through Nn9/Nd9), including MultiSynth fractional dividers, which route through output dividers R0-R9 to outputs OUT0-OUT9; I2C/SPI control, NVM and status monitoring are integrated on chip.]

Figure 4. Si5341 Functional Diagram

To learn more, see the related white paper, "Innovative DSPLL and MultiSynth Architecture Enables High-Density 10/40/100G Line Card Designs."

Frequency Flexibility Transforms Clock Tree Designs for Data Center Switch Blades

By using the frequency-flexible, ultra-low-jitter, 10-output Si5341 clock generator, developers can reduce the clock tree for a data center switch blade from eight discrete components to just one high-performance clock IC. (See Figure 5.)

[Figure 5 block diagram: the traditional clock tree for a data center switch blade (multiple XOs, buffers and clock generators producing 50 MHz LVDS, 166.66... MHz LVDS, 75 to 150 MHz LVDS CPU, 100 MHz HCSL and 10G clocks) is replaced by a single Si5341 generating the same outputs.]

Figure 5. Traditional vs. Si5341-Based Clock Tree

Traditional clock tree challenges:
- Many clocks with diverse frequencies; SerDes and PHY clocks require very low jitter
- Many different signaling formats are required

Silicon Labs solution:
- Any frequency, any format, on any output
- < 100 fs jitter (integer mode); < 150 fs (fractional mode)
- Ten output clocks consolidate the clock tree

Summary

Cloud-based streaming services are driving growing demand for higher data rates. To meet this demand, high-speed networking and data center equipment requires frequency-flexible clock generator ICs. High-performance, frequency-flexible Si5341/40 clock generators can generate any frequency on any output with best-in-class jitter performance (< 100 fs RMS in integer mode and < 150 fs RMS in fractional synthesis mode). Data center clock tree designers can leverage these new clock products and Silicon Labs' ClockBuilder Pro software to minimize the timing component BOM count and complexity required to build highly flexible, high-bandwidth Internet infrastructure equipment for the converged data center.