Validating Long-Distance VMware vMotion



Technical Brief

Validating Long-Distance VMware vMotion with NetApp FlexCache and F5 BIG-IP

F5 BIG-IP enables long-distance VMware vMotion live migration and optimizes NetApp FlexCache replication. Key benefits of using BIG-IP include increased performance, increased efficiency, and cost savings.

By Sebastián Senen Gonzalez, Principal Consultant, ESI Technologies
By Guillaume Paré, Principal Consultant, ESI Technologies
By Craig Hovey, Principal Solution Engineer, F5 Networks

September 2013

Contents

Introduction
Results
  Non-AAM-Optimized vMotion
  AAM-Optimized vMotion
  AAM-Optimized Storage vMotion
Business Benefits
Testing Methodology
  vMotion
  Storage vMotion
Environment Setup and Deployment
  Products and Versions Tested
  ESI Lab Environment
  Cisco UCS Setup
  VMware vSphere Setup
  NetApp FlexCache Setup
  VMware vMotion Setup
  F5 BIG-IP Setup
Conclusion

Introduction

Applications running across networks encounter a wide range of performance, security, and availability challenges. These problems cost organizations an enormous amount in lost productivity, missed opportunities, and damage to reputation.

F5 BIG-IP Application Acceleration Manager (AAM) enables VMware vMotion long distance live migration and optimizes NetApp FlexCache replication. BIG-IP AAM creates opportunities to architect virtual datacenter solutions and allows administrators to non-disruptively move entire pools of virtual machines from one datacenter to another. In this guide, we describe how to use BIG-IP AAM to optimize vMotion for vSphere, how to optimize Storage vMotion using NetApp FlexCache, and how to maintain user connections between datacenters while the virtual machines move.

Key benefits of using BIG-IP AAM for long distance live migration include:

Increased performance: Minimizes bandwidth usage and speeds up response time.
Increased efficiency: Ensures acceptable performance levels while transfers are under way.
Cost savings: Reduces WAN costs, maximizes datacenter usage, and ensures business continuity.

The technical benefits of deploying BIG-IP AAM for long distance live migration also translate into significant direct and indirect cost savings: it mitigates the need for expensive, dedicated, high-bandwidth connections between datacenters, maximizes the use of datacenter resources through application load sharing across sites, and ensures business application continuity during major site maintenance or an anticipated local disaster.

To validate and measure these benefits, ESI and F5 teamed up in a lab environment at ESI's headquarters in Montreal, Canada. In this brief, we provide an overview of how the BIG-IP system, VMware vSphere, and NetApp FlexCache devices were integrated for long distance live migration, and we present the performance improvements observed.

Results

This section quantifies the performance improvements gained with BIG-IP AAM. The business benefits are also discussed.

Non-AAM-Optimized vMotion

Baseline (non-AAM-optimized) virtual machine migrations were measured across a wide range of simulated WAN links, from 50 Mb/s down to 10 Mb/s, with latencies from 5 ms to 80 ms and 0.1% packet loss. The migration at 10 Mb/s with 80 ms latency and 0.1% packet loss failed because the memory change rate was too large for the constrained network bandwidth.

Figure 1: Non-AAM-optimized performance as virtual machines are vMotioned at varying link speeds, latency levels, and packet loss.

AAM-Optimized vMotion

BIG-IP AAM improved virtual machine migration performance over the baseline across the same range of simulated WAN links, from 50 Mb/s down to 10 Mb/s, with latencies from 5 ms to 80 ms and 0.1% packet loss.

Figure 2: AAM-optimized performance as virtual machines are vMotioned at varying link speeds, latency levels, and packet loss.

AAM-Optimized Storage vMotion

BIG-IP AAM improved Storage vMotion performance over the baseline across a wide range of simulated WAN links, from 50 Mb/s down to 10 Mb/s, with latencies from 5 ms to 80 ms and 0.1% packet loss. Some non-AAM-optimized Storage vMotion transfers were also attempted, but most failed because of the constrained network bandwidth. As a reference, it took 3:33:24 to complete a non-optimized Storage vMotion at 50 Mb/s with 5 ms latency.

Figure 3: AAM-optimized performance as the 40 GB storage is vMotioned at varying link speeds, latency levels, and packet loss.

Business Benefits

Today, organizations depend heavily on business applications to support operations and drive revenue. As a result, uptime is a top priority for IT managers and administrators. Whether planned or accidental, downtime results in productivity loss, manufacturing delays, missed opportunities, and eventually a loss of customers and market share. At the same time, users are demanding more and more from those applications, including faster response time and access across a wide variety of devices, forcing IT to keep their environment in prime condition with leading-edge technologies.

Achieving this requires transparent mobility of business applications, which virtualization provides by separating the logical layer (operating systems, applications, and data) from the physical layer (network, servers, and storage). Once an application is virtualized, local mobility is easily achieved, allowing for non-disruptive system maintenance and upgrades. However, because of limited bandwidth and high latencies, application mobility across datacenters, whether for major site maintenance or an anticipated local disaster, remains a serious challenge.

F5 BIG-IP Application Acceleration Manager (AAM) enables VMware vMotion long distance live migration and optimizes NetApp FlexCache replication. BIG-IP AAM creates opportunities to architect virtual datacenter solutions and allows administrators to non-disruptively move entire pools of virtual machines from one datacenter to another.

Increased performance: The strategic combination of datacenter, transport, and application optimizations in BIG-IP AAM overcomes WAN latency by minimizing bandwidth usage with symmetric adaptive compression and speeding up response time with adaptive TCP optimization, significantly improving vMotion, Storage vMotion, and FlexCache transfer times.

Increased efficiency: Reduced bandwidth usage and improved response time ensure that acceptable performance levels can be maintained for business applications temporarily running in a different location from their storage while live transfers are under way. Whether applications are delivered locally or from across the WAN, end users are able to access applications and associated data in a timely manner.

Cost savings: The technical benefits of deploying BIG-IP AAM for long distance live migration also translate into significant direct and indirect cost savings: it mitigates the need for expensive dedicated WAN connections between datacenters, maximizes the use of datacenter resources through application load sharing across sites, and ensures application uptime and business continuity during major site maintenance or an anticipated local disaster.

Testing Methodology

vMotion and Storage vMotion tests were performed with three Windows Server 2008 virtual machines, each with a 40 GB disk. Tests were run manually across a simulated WAN environment with BIG-IP AAM disabled (baseline) and enabled. The WAN speed, latency, and packet loss were varied to represent a range of environments.

vMotion

vMotion testing was executed back and forth between the two ESXi hosts, using three different virtual machines for each vMotion test. We start by moving the first virtual machine across the WAN under the chosen bandwidth and latency conditions. Once that VM is transferred to the other datacenter, we continue with the other two VMs. Once all VMs are on the secondary datacenter, we repeat the process in the other direction.
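The lab used a dedicated WAN emulator (Network Nightmare) between the two sites. As a purely illustrative sketch, and not the configuration used in this exercise, comparable link conditions can be approximated on a Linux machine bridging the two datacenter networks with tc; the interface name eth1 and the values shown are assumptions.

# Illustrative only: approximate a 10 Mb/s link with 80 ms added delay and 0.1% loss (eth1 is assumed)
tc qdisc add dev eth1 root handle 1: tbf rate 10mbit burst 32kbit latency 400ms
tc qdisc add dev eth1 parent 1:1 handle 10: netem delay 80ms loss 0.1%

# Inspect the shaping, then remove it between test runs
tc qdisc show dev eth1
tc qdisc del dev eth1 root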

Storage vMotion

Storage vMotion testing was executed back and forth between the two ESXi hosts, using three different virtual machines for each Storage vMotion test. We begin by moving the first virtual machine from the primary datacenter to the secondary using vMotion. Once the virtual machine is running in the secondary datacenter, we initiate the Storage vMotion across the WAN under the chosen bandwidth and latency conditions. Once the storage of that virtual machine is transferred to the secondary datacenter, we repeat the operation with the other two virtual machines. Once all three virtual machines and their storage are on the secondary datacenter, we repeat the process in the other direction. The order of execution for this process is mandatory: always move the virtual machine with vMotion before initiating its Storage vMotion.

Environment Setup and Deployment

In the configuration example for this document, we configured primary and secondary datacenters to participate in long distance live migration for both compute and storage. Our use case in this scenario is disaster avoidance: a preemptive move of VMware guests to mitigate an issue in the primary datacenter. The components of our architecture include:

vSphere ESXi hosts in the primary and secondary datacenters.
NetApp devices used for vSphere storage in both datacenters.
BIG-IP devices with the LTM and AAM modules in both datacenters.

Our BIG-IP devices serve two purposes: first, they handle application delivery for users on the Internet (e.g., load balancing); second, the BIG-IP Application Acceleration Manager (AAM) enables long distance live migration. BIG-IP Global Traffic Manager (GTM), a separate device in our example, directs traffic based on where the virtual servers are located.

Between the two datacenters we established two connections, handled by the BIG-IP devices and carried over transport technologies such as IPsec or MPLS. First, BIG-IP iSessions connect and accelerate the two datacenters to enable vMotion traffic. Second, a BIG-IP EtherIP tunnel connects the two datacenters to carry established connections until they complete. In summary, iSessions enable and accelerate vMotion traffic, while EtherIP provides an uninterrupted user experience during vMotion.

Figure 4: Logical configuration example.

Products and Versions Tested

We chose from a comprehensive range of application delivery services without adding the management complexity and disruptions that come from implementing single-purpose appliances.

ESI Lab Environment

Cisco Fabric Interconnects: Cisco UCS 6248UP
Cisco Switches: Catalyst 2960S
Cisco UCS Chassis: UCS 5108
Cisco UCS Blade: 2.66 GHz Xeon X5650 CPU (x2), 73 GB SAS drives, 48 GB RAM
Cisco UCSM Version: 2.1(1a)
VMware vSphere: ESXi 5.1, Enterprise Plus (vMotion and Storage vMotion)
NetApp FAS2240-2 HA: 24 x 600 GB 10K RPM SAS, onboard 4 x 1 GbE
NetApp Software: Data ONTAP 8.1RC2, FlexCache
F5 BIG-IP 3900: BIG-IP Local Traffic Manager (LTM) 11.3.0, BIG-IP Application Acceleration Manager (AAM) 11.3.0
F5 BIG-IP VE: BIG-IP Global Traffic Manager (GTM) 11.3.0
Network Nightmare (WAN emulator): 1.0.9b

Cisco UCS Setup

In the secondary datacenter, the UCS setup consisted of a B200 M2 blade in a UCS 5108 chassis connected to redundant 6248UP Fabric Interconnects running UCSM 2.1(1a). The blade server used a Cisco VIC M81KR interface, which allows several vNICs to be created. A total of six vNICs were used. Each vNIC has an associated VLAN set as native, so there is no need to use tagging at the ESXi level.

The first vNIC corresponds to vmnic0 on the host (esxi3). The other vNICs configured on host esxi3 were:

vmnic1 (VLAN 999)
vmnic4 and vmnic5 (VLAN 1065)
vmnic6 and vmnic7 (VLAN 1066)
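Because each UCS vNIC is presented to ESXi as its own physical uplink with the VLAN set as native in UCSM, the mapping can be checked from the ESXi shell. The commands below are a generic verification step, not output captured from the lab host.

# List the physical uplinks (vmnics) presented to ESXi by the VIC adapter
esxcli network nic list

# Inspect a single uplink in detail (driver, link state, speed)
esxcli network nic get -n vmnic0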

VMware vSphere Setup

Primary and secondary datacenters were created with similar (but not identical) configurations. This shows that, even though it is recommended to keep both configurations as similar as possible, it is not a requirement. The configuration details for both follow.

Primary datacenter

Hostname: esxi1
VMware ESXi 5.1.0, build 914609
Generic Intel server

Storage configuration: The volume vol_m1_only_f5 is local (and visible) only to the primary datacenter and is used for troubleshooting purposes.

Network configuration: vSwitch0 and vSwitch1 are vSphere standard switches. vSwitch0 is used for management (vmk0) and has vMotion disabled.

vSwitch1 is used for the rest of the traffic. All the virtual ports are tagged. A VLAN is used to connect the storage with the host esxi1 in the primary datacenter. The gateway of the vMotion port was modified to point to the BIG-IP self IP.
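One way to build an equivalent vMotion VMkernel interface and point its traffic at the BIG-IP from the ESXi 5.1 shell is sketched below. The port group name, VLAN ID, and IP addresses are illustrative assumptions, and the remote vMotion subnet is reached here through a static route toward the local BIG-IP self IP rather than the host's default gateway.

# Create a vMotion port group on vSwitch1 and a VMkernel interface for it (illustrative values)
esxcli network vswitch standard portgroup add -p vMotion -v vSwitch1
esxcli network vswitch standard portgroup set -p vMotion --vlan-id 1066
esxcli network ip interface add -i vmk1 -p vMotion
esxcli network ip interface ipv4 set -i vmk1 -I 10.133.64.50 -N 255.255.255.0 -t static

# Enable vMotion on the new VMkernel interface
vim-cmd hostsvc/vmotion/vnic_set vmk1

# Route traffic for the remote vMotion subnet through the local BIG-IP AAM self IP
esxcli network ip route ipv4 add -n 192.168.64.0/24 -g 10.133.64.245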

Secondary datacenter

Hostname: esxi3
VMware ESXi 5.1.0, build 799733
Cisco UCS - UCSM 2.1(1a)

Storage configuration: The volume vol_m2_only_f5 is local (and visible) only to the secondary datacenter and is used for troubleshooting purposes.

Network configuration: The blade server uses a Cisco VIC M81KR interface, which allows several vNICs to be created. Each vNIC has an associated VLAN set as native, so there is no need to use tagging at the ESXi level. vSwitch0, vSwitch1, vSwitch2, and vSwitch3 are vSphere standard switches:

vSwitch0 is used for management (vmk0) and has vMotion disabled.
vSwitch1 is used to connect to the storage and has vMotion disabled.
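The resulting layout on esxi3 can be confirmed from the ESXi shell; this is a generic check rather than output captured in the lab.

# List vSwitch0 through vSwitch3 with their uplinks and port groups
esxcli network vswitch standard list

# List the VMkernel interfaces and their IPv4 settings (management, storage, vMotion)
esxcli network ip interface ipv4 get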

vSwitch2 is used for vMotion; its default gateway is modified to point to the BIG-IP self IP.
vSwitch3 holds the pool of VMs.

NetApp FlexCache Setup

NetApp FlexCache software, included with the base Data ONTAP operating system, lets you scale out storage performance and accelerate remote data access. An origin volume is created in the local datacenter and a caching copy (a FlexCache volume) is created in the remote datacenter with the vol create command, using the -S option to point at the origin. In this example both datacenters act as both local and remote, so two volumes of each type are created.

Primary datacenter

fas2240-7m1> vol create DC2_Ori aggr0 -S 192.168.65.10:DC2_Ori
Creation of volume 'DC2_Ori' with size 1457536k on containing aggregate 'aggr0' has completed.

fas2240-7m1> vol status
Volume     State   Status         Options
DC11_Ori   online  raid_dp, flex  create_ucode=on, convert_ucode=on, 64-bit
DC2_Ori    online  raid_dp, flex  create_ucode=on, convert_ucode=on, no_i2p=on, flexcache, 64-bit

Notes:
The origin volume (FlexVol DC2_Ori) must be created first in the secondary datacenter.
It is important to use the same name for the volumes in both datacenters.
192.168.65.10 is the IP address of the secondary datacenter reached through the AAM; a static route should be added on the NetApp controller.
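The static route called out in the notes can be added from the controller console. A minimal sketch is shown below; the gateway 10.133.65.1 stands in for the local BIG-IP Internal-LAN self IP and is an assumption, and the exact route syntax can vary by Data ONTAP 7-Mode release. Appending the same line to /etc/rc (for example with wrfile -a) keeps the route across reboots.

fas2240-7m1> route add net 192.168.65.0/24 10.133.65.1 1
fas2240-7m1> wrfile -a /etc/rc route add net 192.168.65.0/24 10.133.65.1 1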

Secondary datacenter

fas2240-7m2> vol create DC11_Ori aggr0 -S 10.133.65.10:DC11_Ori
Creation of volume 'DC11_Ori' with size 1457536k on containing aggregate 'aggr0' has completed.

fas2240-7m2> vol status
Volume     State   Status         Options
DC2_Ori    online  raid_dp, flex  create_ucode=on, convert_ucode=on, 64-bit
DC11_Ori   online  raid_dp, flex  create_ucode=on, convert_ucode=on, no_i2p=on, flexcache, 64-bit

Notes:
The origin volume (FlexVol DC11_Ori) must be created first in the primary datacenter.
It is important to use the same name for the volumes in both datacenters.
10.133.65.10 is the IP address of the primary datacenter reached through the AAM; a static route should be added on the NetApp controller.

VMware vMotion Setup

The vMotion setup is standard, like any vMotion network; the main difference is the gateway used by vMotion. The gateway for the vMotion network is the BIG-IP AAM self IP, which ensures that all vMotion traffic goes through the iSession tunnel.
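Before the first migration, it is worth confirming that the vMotion VMkernel interface actually reaches the remote host by way of the BIG-IP. The addresses below are illustrative.

# From esxi1, ping the remote vMotion VMkernel address; the VMkernel routing table decides the path
vmkping 192.168.64.50

# Confirm that the route to the remote vMotion subnet points at the BIG-IP self IP
esxcli network ip route ipv4 list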

F5 BIG-IP Setup

The following is a description of the networks configured on both BIG-IP devices.

Private-WAN: This is the network that enables BIG-IP iSessions and EtherIP. iSessions are used for deduplication, encryption, and compression. The iSession network transfers Storage vMotion as well as active state (memory) vMotion. This network also enables the EtherIP tunnel, which likewise runs between the two BIG-IPs in the primary and secondary datacenters. EtherIP and iSessions are two distinct and separate technologies, but in our example they use the same Private-WAN network. The Private-WAN network runs only between the two BIG-IP devices in the primary and secondary datacenters and must be routed properly in each direction.

Internal-LAN: This is the network on the LAN side that terminates the vMotion traffic on either side. This self IP is the default gateway for both the VMware vMotion VMkernel and the NetApp FlexCache e0b interface.

Servers: This network is where the VM servers are hosted by ESXi. Note that this network has to use the same IP address space in both the primary and secondary datacenters in order to pass VMware validation and to work with EtherIP (see the FAQ section for additional details of VMware networking during vMotion).

Client: This is the incoming client request traffic network. In this diagram, we are using private address space because upstream Network Address Translation (NAT) converts public IPs to our private space. In your scenario, the client traffic network may be publicly routable IP space.
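A minimal tmsh sketch of the EtherIP portion of this design is shown below. The VLAN name, object names, and addresses are illustrative assumptions; the mirror-image configuration (with local and remote addresses swapped) would be applied on the secondary BIG-IP, and the iSession/AAM portion is not reproduced here.

# EtherIP tunnel between the two BIG-IPs over their Private-WAN self IPs (illustrative addresses)
tmsh create net tunnels tunnel etherip-dc { profile etherip local-address 10.133.63.245 remote-address 192.168.63.245 }

# Bridge the tunnel with the local Servers VLAN so existing client connections follow a migrated VM
tmsh create net vlan-group vg-servers { members { servers etherip-dc } }

# Self IP for the BIG-IP on the bridged Servers network
tmsh create net self selfip-servers { address 10.133.66.245/24 vlan vg-servers allow-service default }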

Primary datacenter

This network map shows the virtual servers and associated pool members. Note: one pool member is marked down by the BIG-IP monitor because it has been vMotioned to the secondary datacenter. Two tunnels are used: an EtherIP tunnel to preserve established connections to migrated VMs and an iSession tunnel to carry vMotion and FlexCache traffic.
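The virtual servers and pools in this map are standard LTM objects. A generic tmsh sketch follows; the object names, monitor, and member addresses are illustrative rather than taken from the lab configuration.

# Pool of application servers on the stretched Servers network; the monitor marks a member
# down on this unit once that VM has been vMotioned to the other datacenter
tmsh create ltm pool pool_app { monitor http members add { 10.133.66.11:80 10.133.66.12:80 10.133.66.13:80 } }

# Client-facing virtual server in this datacenter
tmsh create ltm virtual vs_app { destination 10.133.60.100:80 ip-protocol tcp pool pool_app profiles add { tcp http } source-address-translation { type automap } }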

BIG-IP AAM is configured to secure, optimize, and accelerate vMotion and FlexCache traffic specifically.

Secondary datacenter

This network map shows the virtual servers and associated pool members. Note: two pool members are marked down by the BIG-IP monitor because they have been vMotioned to the primary datacenter. Two tunnels are used: an EtherIP tunnel to preserve established connections to migrated VMs and an iSession tunnel to carry vMotion and FlexCache traffic.

BIG-IP AAM is configured to secure, optimize, and accelerate vMotion and FlexCache traffic specifically.

Conclusion

vMotion and FlexCache traffic running across networks encounter a wide range of performance and availability challenges. These problems cost organizations an enormous amount in lost productivity, missed opportunities, and damage to reputation. BIG-IP delivers a comprehensive range of application delivery services that eliminate these costs, increase performance, and increase efficiency, protecting the investments made in VMware and NetApp environments.

About ESI Technologies

ESI Technologies, a Canadian company headquartered in Montreal with offices in Toronto and Quebec City, has become the leader in the architecture, design, deployment, and support of technological solutions that ensure the availability, security, compliance, and performance of information. In addition, ESI designs and deploys an on-demand IT service management solution (Octopus) based on ITIL standards. Multiforce Consulting, an ESI Group company, provides strategic business and IT consulting. Over the years, ESI has developed strategic partnerships with the most important players in the IT world, such as NetApp, HDS, IBM, EMC, Oracle, VMware, Symantec, Cisco, F5 Networks, Brocade, Quantum, Riverbed, Check Point, Microsoft, and Websense. ESI's rapid growth is based on carefully planned strategic investments and acquisitions. Over the past twenty years, ESI has become one of Canada's leading private suppliers of technological solutions and professional services. Visit ESI at www.esitechnologies.com.

About F5 Networks

F5 Networks (NASDAQ: FFIV) makes the connected world run better. F5 helps organizations meet the demands and embrace the opportunities that come with the relentless growth of voice, data, and video traffic, mobile workers, and applications in the datacenter, the network, and the cloud. The world's largest businesses, service providers, government entities, and consumer brands rely on F5's intelligent services framework to deliver and protect their applications and services while ensuring people stay connected. Learn more at www.f5.com. You can also follow @f5networks on Twitter or visit us on Facebook for more information about F5, its partners, and technology. For a complete listing of F5 community sites, please visit www.f5.com/news-press-events/web-media/community.html.