
Whitepaper

NVGRE Overlay Networks: Enabling Network Scalability for a Cloud Infrastructure

Table of Contents

Executive Summary
Cloud Computing Growth
Cloud Computing Infrastructure Optimization
The Economics Driving Overlay Networks
Cloud Computing's Network Scalability Challenge
    Limitations to security (network isolation) using 802.1Q VLAN grouping
    Capacity expansion across server racks or server clusters (use cases)
    Limitations of Spanning Tree Protocol (STP)
    Infrastructure over-provisioning capital expenditure (CAPEX)
Addressing Network Scalability with NVGRE
    NVGRE: a solution to the network scalability challenge
    What is NVGRE?
    NVGRE Ethernet frame encapsulation
    VM-to-VM communications in an NVGRE environment
NVGRE: Enabling Network Scalability in the Cloud
Looking Ahead
NVGRE: Microsoft's Call to Action
Conclusion

Executive Summary

The paradigm shift from a virtualization-driven data center toward an infrastructure for cloud computing includes shifting the focus from application consolidation to tenant consolidation. While virtualization has reduced the cost and time required to deploy a new application from weeks to minutes, and the cost from thousands of dollars to a few hundred, reconfiguring the network for a new or migrated virtual workload can still take a week and cost thousands of dollars. Scaling existing networking technology for multi-tenant infrastructures requires solutions that enable virtual machine (VM) communication and migration across Layer 3 boundaries without impacting connectivity, while ensuring isolation for hundreds of thousands of logical network segments and maintaining existing VM IP and Media Access Control (MAC) addresses. NVGRE, an IETF proposed standard co-authored by Microsoft, Emulex and other leading organizations, addresses these challenges.

Cloud Computing Growth

Cloud computing adoption, both public and private, continues unabated, with server revenues forecast to grow to $12.5 billion in 2014 (IDC) and cloud services becoming a $35 billion per year market by 2013 (Cisco). Almost 80 percent of respondents to a recent study (ESG) indicated having three or more data centers worldwide, and 54 percent have started or completed data center consolidation projects. This trend toward data center consolidation is a key driver of new requirements for multi-tenant, cloud-based private infrastructures that are manageable and scalable.

Andrew McAfee of MIT's Sloan School describes cloud computing as "a deep and permanent change in how computing power is generated and consumed." Cloud computing, under any service model (public, private or hybrid), delivers multiple benefits, including:

- Extracting increased hardware efficiency from server consolidation via virtualization.
- Enabling IT agility through optimized placement or relocation of workloads.
- Optimizing IT capacity through distribution of workloads between private and public infrastructure.
- Ensuring business continuity and disaster recovery.

Server virtualization, with hypervisor offerings such as Microsoft Hyper-V, has laid the foundation for the growth of cloud computing.

Cloud Computing Infrastructure Optimization

Ensuring optimized growth of cloud computing infrastructure, and achieving a return on that investment, rests on optimizing three data center infrastructures:

Server: The cloud is a natural evolution of server virtualization. Advances including powerful multi-core processors, server clusters/pods and standardized low-cost servers together offer a plethora of options for designing an optimized core cloud infrastructure.

Storage: Storage virtualization, de-duplication, on-demand and simplified provisioning, disaster recovery and scalable file storage using the Hadoop Distributed File System (HDFS) provide the infrastructure fabric capabilities needed to design optimally for cloud-based storage.

Network: Forrester's research indicates that 50 percent of companies that did not invest in the network suffered degradation of services or noticeable increases in operational costs. Advances in NIC adapter and Ethernet switching technologies are simultaneously driving the optimization of data center networking for cloud infrastructure.
The growth of 10Gb Ethernet (10GbE) networking, accelerating with the launch of Intel's Xeon E5 ("Romley") processors, coupled with other adapter card advances including SR-IOV (Single Root I/O Virtualization) and Emulex's vEngine protocol offload technology, available with the XE4310R Ethernet controller, enables greater VM consolidation on a host server without impacting networking I/O throughput.

However, when customers deploy private or hybrid clouds, a good architecture will always include spare capacity or spare hosts (over-provisioning) for failover. A primary reason is that a VM can only communicate, or be seamlessly migrated, within a Layer 2 (L2) domain (an L2 subnet). Network optimization and network scalability will therefore be critical to achieving the full potential of cloud-based computing.

The Economics Driving Overlay Networks

Overlay Networks are driven by, simply put, the need to reduce the cost of deploying and managing a new server workload. An analysis presented at the Open Networking Summit in the spring of 2012 illustrates the point [1]:

Server virtualization has drastically reduced the cost and time to deploy a new server workload. Before virtualization, it would take ten weeks and cost $10,000, plus software costs, to deploy a new server workload. With virtualization, a new server workload can be deployed in a few minutes at a deployment cost of $300, plus software costs. But the network has not been virtualized, and it has changed very little over the last ten years. From a network management perspective, it still takes five days and costs $1,800 to reconfigure the network devices involved, including routers, IP and MAC addresses, firewalls, intrusion detection systems and so on. Overlay Networks are the first step toward Network Virtualization and promise to significantly reduce the cost of network management for virtualized workloads and hybrid cloud deployments.

Cloud Computing's Network Scalability Challenge

This paradigm shift from a virtualization-driven data center to an infrastructure architecture supporting cloud computing also shifts the focus from application consolidation to tenant consolidation. A tenant can be a department or a Line of Business (LoB) in a private cloud, or a distinct corporate entity in a public or hybrid cloud. For a cloud infrastructure, the importance of network scalability extends beyond the immediately obvious bandwidth-driven economics to increasing VM density on the physical host server and extending the reach of tenants.

Let us examine the challenges to achieving network scale in a cloud-based data center. Reviewing the OSI 7-layer architectural model in Figure 1 below, we will focus on the critical communication Layers 2 and 3 as they pertain to VM-to-VM communications and VM migration.

Figure 1: The OSI 7-layer model

Specifically, we will examine how to ensure intra-tenant VM-to-VM communications isolation (security) while enabling a tenant's resources to scale seamlessly at Layer 2 (L2) and, equally important, how to address the challenges of elastically adding resources at Layer 3 (L3).

[1] http://opennetsummit.org

Limitations to security (network isolation) using 802.1Q VLAN grouping

VMs belonging to a group (a department, LoB, corporation, etc.) each require Virtual Local Area Network (VLAN) tagging-based L2 segment isolation for their network traffic, since VMs communicate at L2. However, the IEEE 802.1Q standard, with its 12-bit namespace for the VLAN Identifier (VID), allows only 4094 VLANs. (Note: the maximum is 4094 rather than 4096 because the VID values 0x000 and 0xFFF are reserved.) A Top-of-Rack switch may connect to dozens of physical servers, each hosting a dozen or more VMs, and each VM can belong to at least one VLAN. A data center can easily contain enough such switches that the 4094 VLAN ID limit is quickly exceeded. Furthermore, VMs running a single application may reside in geographically disparate data centers. These VMs may have to communicate over an L2 data link, so VLAN identifiers must remain unique across geographies.

Capacity expansion across server racks or server clusters (use cases)

A single rack with multiple host servers, or multiple server racks residing in a common cluster or pod, effectively provides a small virtualized environment for a single tenant. Each server rack may constitute an L2 subnet, while multiple host server racks in a cluster will likely be assigned their own L3 network. Expanding (elastic) demand for additional computing resources by the tenant may require more servers or VMs on a different rack, within a different L2 subnet, or on a different host cluster, in a different L3 network. These additional VMs need to communicate with the application's VMs on the primary rack or cluster.

A second use case is the need to migrate one or more VMs to another server cluster, that is, inter-host workload mobility to optimize usage of the underlying server hardware resources. This may be driven by scenarios such as:

- Migrating workloads off underutilized hosts, shutting those hosts down, and consolidating on more fully utilized hosts to reduce energy costs.
- Decommissioning one or more older hosts and bringing up workloads on newly added infrastructure.

These use cases require stretching the tenant's L2 subnet to connect the servers or VMs, since VM-to-VM communication and VM migration between hosts currently require the hosts to be in the same L2 subnet.

Limitations of Spanning Tree Protocol (STP)

STP effectively limits the number of VMs in a VLAN or IP network segment to 200-300, resulting in:

- Wasted links, where links in the networking infrastructure are effectively turned off to avoid connectivity loops.
- Failures or changes in link status that cost several seconds to re-establish connectivity, an unacceptable delay in many current application scenarios.
- Limited ability to provide multipathing, and the resulting loss of networking resiliency, due to STP's requirement for a single active path between network nodes.

Infrastructure over-provisioning capital expenditure (CAPEX)

Emulex has learned from working with end users and talking with industry leaders that it is not uncommon to over-provision a data center by as much as 10 percent to provide spare capacity for server failover. In practice, data center managers allocate one spare physical host server for every 11, based on an approximate 23:1 VM consolidation ratio and an STP-limited L2 domain of roughly 254 devices (network nodes).
Over-provisioning spans the entire infrastructure and covers servers, racks, power and cooling, networking, administration and management, and some storage. NVGRE, on the other hand, breaks the STP boundary, allowing data center administrators to reduce their over-provisioning to a much lower level. Servers are fairly reliable; if the annual failure rate for servers is two percent, the cost of the over-provisioned infrastructure can be reduced by a factor of five. Optimizing the failover infrastructure is therefore another opportunity for NVGRE to significantly reduce IT costs.
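To make the spare-capacity arithmetic above concrete, here is a back-of-the-envelope sketch in Python using the figures quoted in this paper; the values are illustrative estimates, not measured data.

# Illustrative arithmetic only, using the estimates quoted in this paper.
L2_NODE_LIMIT = 254          # practical STP-bounded L2 domain size (network nodes)
CONSOLIDATION_RATIO = 23     # approximate VMs per physical host

# An STP-bounded failover pool: one spare host per L2 domain.
hosts_per_domain = L2_NODE_LIMIT // CONSOLIDATION_RATIO      # ~11 hosts
stp_spare_ratio = 1 / hosts_per_domain                       # roughly 9-10% over-provisioning

# With NVGRE the failover pool is no longer confined to a single L2 domain,
# so one spare host can back a much larger group (50 is the example used later).
nvgre_spare_ratio = 1 / 50                                   # 2% over-provisioning

print(f"STP-bounded spare capacity:   {stp_spare_ratio:.0%}")
print(f"NVGRE-enabled spare capacity: {nvgre_spare_ratio:.0%}")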

In summary, the current practices of host-centric segmentation based on VLANs/L2 subnets, with server clusters separated by physical L3 IP networks, limit today's multi-tenant cloud infrastructure from achieving meaningful network scaling for longer-term growth. Figure 2 depicts these network scaling challenges associated with the deployment of a cloud-based server infrastructure.

Figure 2: Network scaling challenges in a cloud-based server infrastructure

Addressing Network Scalability with NVGRE

NVGRE: a solution to the network scalability challenge

Microsoft and Emulex, along with other industry leaders, have outlined a vision for a new virtualized Overlay Network architecture that enables the efficient and fluid movement of virtual resources across cloud infrastructures, leading the way for large-scale and cloud-scale VM deployments. NVGRE (Network Virtualization using Generic Routing Encapsulation) is a proposed IETF standard co-authored by Microsoft, Emulex and HP, among a group of other leading companies. The NVGRE standard enables Overlay (virtual) Networks and focuses on eliminating the constraints described earlier, enabling virtualized workloads to seamlessly communicate or move across server racks, clusters and data centers.

What is NVGRE?

At its core, NVGRE is simply an encapsulation of an Ethernet L2 frame in IP that enables the creation of virtualized L2 subnets spanning physical L3 IP networks. Looking back at the use case described earlier, NVGRE connects two or more L3 networks and makes them appear to share the same L2 subnet. This allows inter-VM communication or VM migration across L3 networks as if the VMs were attached to the same L2 subnet. In short, NVGRE is an L2 overlay scheme over an L3 network.

NVGRE Ethernet frame encapsulation

A unique 24-bit ID called a Tenant Network Identifier (TNI) is added to the L2 Ethernet frame, carried in the GRE Key field. This 24-bit TNI enables more than 16 million logical L2 networks to operate within the same administrative domain, a scalability improvement of several orders of magnitude over the 4094 VLAN segment limit discussed above. The GRE-encapsulated L2 frame is then wrapped in an outer IP header and, finally, an outer MAC header. A simplified representation of the NVGRE frame format and encapsulation is shown in Figure 3 below.

Figure 3: Simplified NVGRE frame format and encapsulation

NVGRE is a tunneling scheme that uses Generic Routing Encapsulation (GRE) as defined by RFC 2784 and extended by RFC 2890. Each TNI is associated with an individual GRE tunnel and, as its name suggests, uniquely identifies a cloud tenant's virtual subnet. NVGRE thus does not introduce a new wire protocol as such, since it reuses the already established GRE protocol between hypervisors.

VM-to-VM communications in an NVGRE environment

The value of NVGRE technology lies in enabling inter-VM communication and VM migration for hosts in different L2 subnets, as discussed earlier. A simplified explanation of the process is described below and illustrated in Figures 4 and 5. The NVGRE endpoint, residing in the server (or in a switch or a physical 10GbE NIC), encapsulates the VM traffic, adding the 24-bit TNI, and sends it through a GRE tunnel. At the destination, the endpoint decapsulates the incoming packets and presents the destination VM with the original Ethernet L2 frame. (Note: the VM originating the communication is unaware of the TNI tag.) The current IETF draft specification explicitly avoids specifying the method of obtaining the destination VM address in the Ethernet frame, leaving that detail to a future version of the specification or to new network management techniques such as Software Defined Networking (SDN). The inner IP address is called the Customer Address (CA); the outer IP address is called the Provider Address (PA). When an NVGRE endpoint needs to send a packet to the destination VM, it needs to know the PA of the destination NVGRE endpoint. The mechanism for maintaining the mapping between PAs and CAs will be defined more fully in additional white papers.

Figure 4
Figure 5
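To make the encapsulation flow and the CA-to-PA lookup concrete, here is a minimal, illustrative sketch using the Scapy packet library (assumed to be installed). The addresses, the TNI value, the flow ID and the endpoint lookup table are hypothetical examples, not values defined by the NVGRE specification; the GRE Key layout shown (24-bit virtual subnet ID in the upper bits, 8-bit flow ID below it) follows the format later standardized in RFC 7637.

# Illustrative only: a minimal NVGRE encapsulation sketch using Scapy.
# All addresses, the TNI and the lookup table are hypothetical examples.
from scapy.all import Ether, IP, GRE, Raw

# Hypothetical control-plane state: (TNI, customer MAC) -> provider address (PA)
# of the NVGRE endpoint currently hosting that VM.
ENDPOINT_MAP = {
    (0x5151, "00:1d:00:00:00:02"): "10.0.0.2",   # destination VM lives behind host B
}

def nvgre_encapsulate(inner_frame, tni, local_pa, flow_id=0):
    """Wrap a customer L2 frame in outer Ethernet/IP/GRE (NVGRE) headers."""
    dest_pa = ENDPOINT_MAP[(tni, inner_frame[Ether].dst)]    # CA -> PA lookup
    # The GRE Key carries the 24-bit tenant ID; proto 0x6558 marks Transparent
    # Ethernet Bridging, i.e. an inner Ethernet frame follows the GRE header.
    return (Ether() /
            IP(src=local_pa, dst=dest_pa) /
            GRE(key_present=1, proto=0x6558, key=(tni << 8) | flow_id) /
            inner_frame)

# Original VM-to-VM frame, addressed purely with customer addresses (CAs).
inner = (Ether(src="00:1d:00:00:00:01", dst="00:1d:00:00:00:02") /
         IP(src="192.168.10.11", dst="192.168.10.12") /
         Raw(b"tenant payload"))

pkt = nvgre_encapsulate(inner, tni=0x5151, local_pa="10.0.0.1")
pkt.show()   # inspect the outer MAC / outer IP / GRE / inner Ethernet stack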

NVGRE: Enabling Network Scalability in the Cloud

We have seen how NVGRE technology addresses some of the key tactical challenges associated with networking within a cloud infrastructure. Figure 6, a simplified representation, illustrates the benefits of adopting NVGRE.

Figure 6: Benefits of adopting NVGRE (simplified representation)

In summary, with NVGRE:

- The 4094 VLAN limit on ensuring computing privacy is addressed by the NVGRE 24-bit TNI construct, which enables 16 million isolated tenant networks. VMs must be on the same TNI to communicate with each other, delivering the isolation demanded in a multi-tenant architecture by keeping associated VMs within the same TNI.
- A multi-tenant cloud infrastructure can now deliver elastic capacity by rapidly provisioning additional application VMs in a different L3 network that communicate as if they were on a common L2 subnet.
- Overlay Networking overcomes the limits of STP and creates very large network domains in which VMs can be moved anywhere. This also lets IT teams reduce over-provisioning to a much lower percentage, which can yield significant savings. For example, by deploying one spare server per 50 servers, over-provisioning is reduced to two percent (from the 10 percent estimated earlier). As a result, data centers can save as much as eight percent of their entire IT infrastructure budget with NVGRE Overlay Networking.
- Overlay Networking can make hybrid clouds simpler to deploy because it leverages the ubiquity of IP for data flows over the WAN.
- VMs are uniquely identified by the combination of their MAC address and TNI. It is therefore acceptable for VMs to have duplicate MAC addresses, as long as they are in different tenant networks (see the short sketch after this list). This simplifies administration of multi-tenant customer networks for the cloud service provider.
- Finally, NVGRE is an evolutionary solution, already supported by switches and next-generation hypervisors, that does not require forklift hardware upgrades, easing and hastening adoption of the technology.
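The point about duplicate MAC addresses can be seen in a tiny sketch: if forwarding state is keyed on the (TNI, MAC) pair rather than the MAC alone, two tenants can reuse the same MAC address without collision. The table entries below are hypothetical.

# Hypothetical endpoint lookup state keyed on (TNI, customer MAC) rather than MAC alone.
forwarding_table = {}

def learn(tni, customer_mac, provider_address):
    """Record which NVGRE endpoint (PA) currently hosts a given tenant VM."""
    forwarding_table[(tni, customer_mac)] = provider_address

# Two tenants happen to use the same MAC address; the 24-bit TNI keeps them distinct.
learn(0x000001, "00:1d:00:00:00:05", "10.0.1.20")   # tenant A
learn(0x000002, "00:1d:00:00:00:05", "10.0.7.31")   # tenant B, same MAC, different TNI

assert forwarding_table[(0x000001, "00:1d:00:00:00:05")] == "10.0.1.20"
assert forwarding_table[(0x000002, "00:1d:00:00:00:05")] == "10.0.7.31"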

Looking Ahead

Overlay Networking implemented in software, as currently defined, is incompatible with current vEngine (and similar) hardware-based protocol offload technologies, so network performance goes down, more CPU cycles are consumed, and the VM-per-host consolidation ratio is reduced. This fundamentally impacts the core economics of virtualization and can easily consume 10 to 20 percent of a host's CPU cycles.

While the current NVGRE draft specification leaves out many of the details for managing the control plane, it outlines the critical piece for interoperability: the NVGRE frame format. As for the control plane, several tools will be available to manage the networking piece. The fundamental philosophy is that when a VM moves, a VM management tool orchestrates its movement. Since this tool knows the complete lifecycle of a VM, it can provide all of the control plane information required by the network. This builds on the trend of the control plane migrating away from the hypervisor's embedded vSwitch to the physical 10GbE NIC, with the NIC becoming more intelligent and a critical part of the networking infrastructure. Therefore, for NVGRE to be leveraged without a notable performance impact, an intelligent NIC needs to be deployed that supports:

- NVGRE offload without disrupting current protocol offload capabilities.
- API integration to apply the correct control plane attributes to each flow for Overlay Networking.
- APIs to expose the presence and connectivity of Overlay Networks to network management tools.

NVGRE: Microsoft's Call to Action

- NVGRE is a pivotal technology that enables Microsoft's Hyper-V for hybrid cloud environments
  - Required for scalable virtual hybrid cloud environments
  - Solves the VM migration problem between separate L2 domains
- NVGRE will be released with Windows Server 2012 with support for hardware offloads
  - Microsoft will maintain a Hardware Qualified List for NICs and LOMs (including the servers that support them)
  - Microsoft's call to action: NVGRE offloads should be required for NIC hardware
  - Any encapsulation breaks stateless offloads, so NICs/LOMs must become GRE-aware
  - OEMs need to add NVGRE as a requirement to their NIC/LOM RFQs

Conclusion

By extending the routing infrastructure inside the server, Microsoft is enabling the creation of programmatic, dynamic Overlay Networks that can easily scale from small to large to cloud scale, enabling multi-tenant architectures. Network manageability can move to the edge, and the network management layer becomes virtualization-aware, able to adapt and manage the network as VMs start up, shut down and move around. NVGRE, a new standard from Microsoft, Emulex and other industry leaders for creating Overlay Networks, focuses on eliminating the last constraint preventing workloads from seamlessly moving across racks, host clusters and data centers. This is a very important step toward lowering the cost of data center infrastructure. Given the cost and complexity of deploying and managing virtual servers on traditional networks, Overlay Network protocols such as NVGRE can reduce the time and cost to deploy and manage a virtual workload throughout its lifecycle.

Copyright 2012 Emulex Corporation. The information contained herein is subject to change without notice. The only warranties for Emulex products and services are set forth in the express warranty statements accompanying such products and services. Emulex shall not be liable for technical or editorial errors or omissions contained herein. LightPulse and OneCommand are registered trademarks of Emulex Corporation. HP is a registered trademark in the U.S. and other countries. www.emulex.com World Headquarters 3333 Susan Street, Costa Mesa, California 92626 +1 714 662 5600 | Bangalore, India +91 80 40156789 | Beijing, China +86 10 68499547 | Dublin, Ireland +353 (0)1 652 1700 | Munich, Germany +49 (0) 89 97007 177 | Paris, France +33 (0) 158 580 022 | Tokyo, Japan +81 3 5325 3261 | Wokingham, United Kingdom +44 (0) 118 977 2929 13-0209 8/12