Optimizing Hybrid Networks for SaaS


Ashton, Metzler & Associates. Leverage Technology For Success. Whitepaper, July 1, 2012: Optimizing Hybrid Networks for SaaS

TABLE OF CONTENTS

INTRODUCTION
THE CHANGING NATURE OF APPLICATIONS
APPLICATION DELIVERY CHALLENGES FOR SaaS
   Network Latency
   Packet Loss
   Bandwidth Constraints
   Availability
   Chatty Protocols and Applications
   Characteristics of TCP
   Network Complexity
NETWORK AND APPLICATION OPTIMIZATION
   Network Latency
   Packet Loss
   Bandwidth Constraints
   Chatty Protocols and Applications
   Characteristics of TCP
   Network Complexity
THE OPTIMIZED HYBRID WAN
QUANTIFYING THE PERFORMANCE IMPROVEMENTS
SUMMARY

INTRODUCTION

As recently as a few years ago, the primary consumers of SaaS-based applications were small and medium sized businesses (SMBs). However, as Gartner 1 pointed out, in 2010 large enterprises began to shift away from on-premise software solutions and to adopt SaaS in significant numbers. This trend is likely to continue, as Gartner has also predicted that SaaS will account for some 15 percent of enterprise application purchases by 2015, up from 10 percent today. 2

SaaS provides many benefits to enterprise IT organizations, including the potential for lower cost and a reduced time to implementation. There are, however, some significant challenges associated with SaaS. One of the most notable is that, because of factors such as the delay and packet loss associated with both the Internet and private WANs, SaaS-based applications tend to exhibit response times that are both long and highly variable. In many cases this erratic performance has become a barrier to the further adoption of SaaS.

The primary goal of this white paper is to describe a new class of optimized hybrid WAN that combines the best practices for optimizing both the Internet and private WANs. This new class of WAN optimization improves the end-to-end performance of the network, which in turn significantly improves the performance and consistency of SaaS-based applications and reduces these barriers to the ongoing adoption of SaaS.

THE CHANGING NATURE OF APPLICATIONS

The client/server model of computing became popular several years ago as an alternative to the traditional mainframe-based model of computing. In the client/server model, workloads are partitioned between two types of devices: servers and clients. A server is a host that runs one or more applications which share their resources with clients. Servers often feature more powerful central processors, more memory and larger disk drives than clients do. In contrast, a client does not share its resources; it makes requests, usually over a wide area network, to a server for content or for a service function. In the initial deployments of client/server computing, client devices were typically desktop PCs. In the current environment, mobile devices also function as clients. An unfortunate aspect of client/server computing is that an upgrade to the server-side code of an application typically also requires an upgrade to the code installed on each client, a task that can be quite burdensome.

Web-based applications are another mainstream model of computing, in which an application is accessed over the Internet or an intranet and the user interface is a browser. Web-based applications are popular in part because of the ubiquity of web browsers. Another reason for their popularity is that, in contrast to traditional client/server applications, an upgrade to the server-side code does not require that changes be made to each client. Browser functionality is a key enabler that allows businesses to adopt BYOD, and hence avoid the capital investment required to refresh end user devices, while still giving users the functionality they need to successfully access applications.

Software-as-a-Service (SaaS) is a computing model that began to be popular a few years ago. In this model, an independent software vendor (ISV) hosts its software in a data center, either one that it owns and manages or one that it acquires from a third party.
As previously noted, reduced cost and reduced time to implementation are two of the factors driving the growing acceptance of SaaS. Figure 1 contains a listing of some representative SaaS providers.

Figure 1: Representative SaaS Providers

Recent research from Gartner (Figure 2) indicates that several primary classes of applications constitute the SaaS market. As previously mentioned, one of the challenges with SaaS-based applications is performance, as users often access these applications over both the Internet and a private WAN. Further complicating the situation is the fact that it is generally not possible for IT organizations to place WAN Optimization Controller (WOC) functionality at the SaaS provider's facilities.

Figure 2: Growing Interest in SaaS (Total Software Revenue Forecast for SaaS Delivery Within the Enterprise Application Software Markets, 2007-2015 4; Source: Gartner, November 2011)

APPLICATION DELIVERY CHALLENGES FOR SaaS

As explained in the 2011 Application and Service Delivery Handbook 5, there are a number of challenges that impact the delivery of any form of application. Those challenges include:

Network Latency

Network latency refers to the time it takes for data to go from the sender to the receiver and back. Since the speed of data flow is essentially constant 6, WAN latency is directly proportional to the distance between the sender and the receiver. Figure 3 contains representative values for network latency, both for a LAN and for a private WAN.

Network Type                                    Typical Latency
LAN                                             1 - 5 ms
East Coast of the US to West Coast of the US    80 - 100 ms
International WAN Link                          100 - 450 ms
Satellite Link                                  Over 500 ms

Figure 3: Network Latency Values

As described by Moore's Law of Internet Latency 7, Internet latency is typically greater than the latency in a private WAN. That law references the business model used by the Internet and states: "As long as Internet users do not pay for the absolute (integrated over time) amount of data bandwidth which they consume (bytes per month), Internet service quality (latency) will continue to be variable and often poor."

Packet Loss

Packet loss can occur in either a private WAN or the Internet, but it is more likely to occur in the Internet. Part of the reason is that, as pointed out by Wikipedia 8, the Internet is a network of networks that consists of millions of private and public, academic, business, and government networks of local to global scope. Another part of the reason there is more packet loss in the Internet than in a private WAN is the previously mentioned Internet business model. One of the effects of that business model is that there tend to be availability and performance bottlenecks at the peering points. It is normal to observe high single-digit to double-digit percentages of packet loss over the public Internet. When there is packet loss, TCP (Transmission Control Protocol) retransmits packets. In addition, the TCP slow start algorithm (see below) assumes that the loss is due to congestion and takes steps to reduce the offered load on the network. Both of these actions have the effect of reducing throughput on the WAN.

Bandwidth Constraints

Unlike the situation within a LAN, within a WAN there are monthly recurring charges that are proportional to the amount of bandwidth that is provisioned. For example, the cost of T1/E1 access to an MPLS network varies from roughly $450/Mbps/month to roughly $1,000/Mbps/month. Similarly, the cost of T1/E1 access to a Tier 1 ISP varies from roughly $300/Mbps/month to roughly $600/Mbps/month. The variation in cost is largely a function of geography: WAN costs tend to be lowest in the United States and highest in the Asia-Pacific region. To exemplify how the monthly recurring cost of a WAN leads to bandwidth constraints, consider a hypothetical company that has fifty offices, each of which has on average a 2 Mbps WAN connection that costs on average $1,000/month. Over three years, the cost of WAN connectivity would be $1,800,000. Assume that, in order to support an increase in traffic, the company wanted to double the WAN bandwidth at each of its offices. In most cases there would not be any technical impediments to doubling the bandwidth. There would, however, be financial impediments.

On the assumption that doubling the bandwidth would double the monthly cost of the bandwidth, it would cost the company an additional $1,800,000 over a three-year time frame to double the bandwidth. Because of these high costs, very few, if any, companies provision either their private WAN or their Internet access to support peak loads. As such, virtually all WANs, both private WANs and the Internet, exhibit bandwidth constraints.

Availability

Despite the Internet's original intent to provide communication even when faced with a disaster, application availability over the Internet is somewhat problematic. As previously noted, the Internet is not a single network, but rather millions of networks interconnected to appear as a single network. The individual networks that compose the Internet exchange information with each other that describes which IP address ranges (aka routes) they contain. Within a single network, called a routing domain, a specialized routing protocol is used to communicate IP address ranges to all the routers within that network. When properly designed, routing protocols within a network can detect a link failure and update the routing tables on all routers within a few seconds. For the exchange of information between networks, called inter-domain routing, a special routing protocol called Border Gateway Protocol (BGP) is used. The size and complexity of the Internet, as well as the inherent characteristics of BGP, mean that a failed network link and the resulting routing path change may take several minutes before all routing tables are updated. In contrast, traditional voice circuits take milliseconds to reroute voice calls when a network link fails.

The impact of a network link failure, and of the time it takes for the Internet to update its routing tables and find an alternative path, varies according to the type of application involved. For a simple web application, a brief outage may go unnoticed if users are not loading the web page during the outage. For real-time applications like Voice-over-IP (VoIP) or IP video, an outage of several seconds may cause interrupted calls and video sessions. In addition, there are two primary types of communication over the Internet: TCP and UDP (User Datagram Protocol). With TCP communication, lost packets are retransmitted until the connection times out. With UDP communication, there is no built-in mechanism to retransmit lost data, and UDP applications tend to fail rather than recover from brief outages.

Chatty Protocols and Applications

As illustrated in Figure 4, a chatty application or protocol requires hundreds of round trips to complete a transaction. To exemplify the impact of latency and chatty protocols, assume that a given transaction requires 200 application turns. Further assume that the round trip delay over the LAN that some users utilize to access the application is 5 ms, and that the round trip delay over the WAN that is used by other users is 100 ms. Over the LAN, the delay attributable just to application turns is one second. Over the WAN, however, the combination of a chatty protocol and network latency results in a delay attributable just to application turns of 20 seconds.

Figure 4: Application Challenges. (The figure contrasts a 1000 Mbps LAN with a 1.5 Mbps WAN subject to bandwidth contention, 200 ms RTT combined with application chattiness, and 2 - 5% packet loss, resulting in slow application response times, poor user experience, lack of adoption and lost productivity.)
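The arithmetic behind the bandwidth and chattiness examples above is easy to reproduce. The following Python sketch is illustrative only; the figures (fifty offices, $1,000 per month per connection, 36 months, 200 application turns, 5 ms and 100 ms round trip delays) are taken from the text, while the function names are invented for the example.

# Worked examples from the Bandwidth Constraints and Chatty Protocols
# discussions above (illustrative sketch only).

def three_year_wan_cost(offices=50, monthly_cost_per_office=1000, months=36):
    # Recurring WAN cost: 50 offices x $1,000/month x 36 months = $1,800,000
    return offices * monthly_cost_per_office * months

def turn_delay_seconds(app_turns=200, round_trip_ms=100):
    # Delay attributable solely to application turns: turns x round trip time
    return app_turns * round_trip_ms / 1000.0

if __name__ == "__main__":
    base = three_year_wan_cost()
    print(f"Three-year WAN cost: ${base:,}")                      # $1,800,000
    print(f"Additional cost to double the bandwidth: ${base:,}")  # another $1,800,000
    print(f"Turn delay over the LAN (5 ms RTT): {turn_delay_seconds(200, 5):.0f} s")     # 1 s
    print(f"Turn delay over the WAN (100 ms RTT): {turn_delay_seconds(200, 100):.0f} s") # 20 s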

Characteristics of TCP

TCP is the most commonly used transport protocol, and it causes missing packets to be retransmitted based on TCP's retransmission time-out parameter. This parameter controls how long the transmitting device waits for an acknowledgement from the receiving device before assuming that the packets were lost and need to be retransmitted. If this parameter is set too high, it introduces needless delay as the transmitting device sits idle waiting for the time-out to occur. Conversely, if the parameter is set too low, the premature retransmissions can add to the congestion that was the likely cause of the time-out in the first place.

Another TCP mechanism that impacts performance is the slow start algorithm. Slow start is part of TCP's congestion control strategy: it severely constrains the initial data transfer rate between two communicating devices and then allows that rate to increase as long as the communications remain problem-free. In addition to governing the initial communications between two devices, the slow start algorithm is also applied in those situations in which a packet is dropped. (A toy simulation of this behavior appears at the end of this section.)

Network Complexity

The overall complexity of both private WANs and the Internet tends to increase the impact of the previously described application delivery challenges. For example, as the number of links that the data has to transit between origin and destination increases, so do the delay and the packet loss. It is not, however, just the number of links and the complex topologies that complicate application delivery; it is also complex protocols such as TCP and BGP. As previously mentioned, the Internet uses BGP to determine the routes from one subtending network to another. When choosing a route, BGP strives to minimize the number of hops between the origin and the destination. BGP does not, however, strive to choose a route with the optimal performance characteristics, i.e., the lowest delay and the lowest packet loss. Given the complex, dynamic nature of the Internet, a given network or a particular peering point router can go through periods in which it exhibits severe delay and/or packet loss. As a result, the route that has the fewest hops is not necessarily the route that has the best performance.

In addition to the previously discussed challenges that impact the performance of any type of application, SaaS-based applications introduce some new challenges. One of these is that, as discussed in a subsequent section of this white paper, the vast majority of IT organizations route the majority of their Internet traffic over their private WAN to a central site prior to handing that traffic over to the Internet. As a result, when a company adopts a new SaaS-based application, it adds traffic to both its private WAN and its Internet connections. This is referred to as backhauling of network traffic (depicted in Figure 5).

Figure 5: Backhauling Internet Traffic. (Branch office traffic is carried over the private WAN to the corporate data center and handed off to the Internet there before reaching the SaaS data center.)
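To illustrate the slow start behavior referenced in the Characteristics of TCP discussion above, the short Python sketch below simulates a congestion window that doubles once per round trip until a loss event, then collapses and starts over. It is a deliberately simplified, assumed model, not a description of any particular TCP implementation, but it shows why a lossy, high-latency path spends much of its time well below the capacity of the link.

# Toy illustration of TCP slow start and its response to packet loss
# (a simplified, assumed model; real TCP stacks include many refinements).

def simulate_slow_start(rtts=20, loss_at=(8, 14), initial_cwnd=1, ssthresh=64):
    # Returns the congestion window (in segments) after each round trip.
    cwnd, history = initial_cwnd, []
    for rtt in range(rtts):
        if rtt in loss_at:                   # a loss is interpreted as congestion
            ssthresh = max(cwnd // 2, 2)     # halve the slow start threshold
            cwnd = initial_cwnd              # restart slow start
        elif cwnd < ssthresh:
            cwnd *= 2                        # exponential growth during slow start
        else:
            cwnd += 1                        # linear growth (congestion avoidance)
        history.append(cwnd)
    return history

# Each loss collapses the window, so sustained throughput stays well below
# what the link could otherwise carry.
print(simulate_slow_start())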

However, it is not just the adoption of new applications that increases WAN traffic. Figure 6 contrasts how an on-premise email solution and a SaaS-based email solution impact a company's network links. In the case of the on-premise email solution, if a 10 MB file is sent to five users, the traffic stays within the company's private WAN. If the company decided to move away from a premise-based solution and instead adopt a SaaS-based email solution, the email traffic would traverse both the private WAN and the Internet access link. When a 10 MB file is sent to five users, the email is sent to the email server and then comes back across the Internet access link once for each user, in this case adding an additional 50 MB of traffic over the Internet access link. As companies replace premise-based solutions with SaaS-based solutions, they typically keep all of the same traffic on their private WAN but add an equivalent amount of traffic to their use of the Internet, and they may need to consider link upgrades.

Figure 6: On-Premise vs. SaaS-based Email

Another challenge that impacts the delivery of SaaS-based applications is that accessing Web-based applications is a serial process. Referring again to the SaaS-based email example in Figure 6, if it takes several minutes to deliver the email messages, the user cannot perform any other tasks (e.g., send another email, open another window) until the email has been delivered.

As mentioned, the factors described above impact the performance of SaaS-based applications whether those applications are accessed over a private WAN or over the Internet. However, as described in a subsequent section of this white paper, it is very common for users to access a SaaS-based application using both a private WAN and the Internet. While the use of multiple networks to access SaaS-based applications is quite common, it does exacerbate the performance challenges that are associated with SaaS. That follows because there are performance challenges (e.g., delay, variability) associated with each of the networks. The delay characteristics of the end-to-end network between the users and the SaaS-based application are the sum of the delay characteristics of each network individually. For example, if the private WAN is exhibiting an end-to-end delay of 60 ms and the Internet is exhibiting an end-to-end delay of 70 ms, then the overall delay will be roughly 130 ms. The fact that the WAN latency of the combined network is higher than that of either network individually means that chatty protocols and chatty applications will perform worse over the combined network than they would over either of the individual networks. The increase in WAN latency also increases the probability that some parameter, such as TCP's retransmission time-out parameter, will be triggered and further degrade performance.
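The traffic multiplication in the email example above is straightforward to quantify. The snippet below is a back-of-the-envelope sketch using the 10 MB attachment and five recipients from the text; the function name is invented for the example.

# Back-of-the-envelope calculation for the SaaS email example above.

def additional_internet_traffic_mb(attachment_mb=10, recipients=5):
    # With a SaaS email service, the attachment comes back down the Internet
    # access link once for each recipient: 10 MB x 5 recipients = 50 MB.
    return attachment_mb * recipients

print(additional_internet_traffic_mb())  # 50 MB added to the Internet access link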

NETWORK AND APPLICATION OPTIMIZATION

In order to mitigate the impact that the factors discussed in the preceding section have on the performance of applications accessed over a private WAN, many IT organizations have implemented a WAN Optimization Controller (WOC) both at each branch office and at the central data center. WOCs operate at the endpoints of a WAN link on the assumption that little can be done to improve the performance characteristics or the condition of the link itself. This assumption is reasonable in the case of private WAN services. It does not, however, hold for enterprise application traffic that transits the Internet, because there are significant opportunities to optimize performance within the Internet itself through the use of an Application Delivery Network (ADN).

Figure 7: Application Delivery Network Functionality. (A high-performance global overlay network places a point of presence near the customer origin/Web server and a point of presence near the end users, secures traffic at the edge, and dynamically optimizes routes to reduce round trips across the global Internet.)

As shown in Figure 7, an ADN leverages service provider resources that are distributed throughout the Internet in order to optimize the performance, security, reliability and visibility of the enterprise's Internet traffic. All client requests to the application's origin server in the data center are redirected via DNS to an ADN server in a point of presence (PoP) close to the application users, typically within a single network hop. This edge server then optimizes the traffic flow to the ADN server that is closest to the data center's origin server. Below is a discussion of some of the techniques that are used by WOCs and/or ADNs to overcome the application delivery challenges that were discussed in the preceding section.

Network Latency

There is little, if anything, that can be done to reduce the latency of a private WAN, in part because, as previously noted, the performance of a private WAN is determined by service parameters controlled by the WAN service provider. That is not true for the Internet. For example, an ADN can eliminate the extra latency within the Internet that comes from the inefficiencies of BGP by implementing route optimization functionality. Route optimization dynamically chooses the optimum route between each end user and the application server. The choice of route is based on factors such as the degree of congestion, the traffic load and the availability of each potential path. As a result, the selected path provides the lowest possible latency and packet loss for each user session.
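As a rough sketch of the route optimization idea described above, the Python example below scores a set of candidate overlay paths by measured latency and packet loss and selects the best one. The path data, the scoring weights and the names are invented for illustration; they are not drawn from any vendor's actual algorithm.

# Illustrative route-optimization selection (hypothetical data and weights;
# real ADNs use continuous measurements and far more sophisticated policies).

from dataclasses import dataclass

@dataclass
class Path:
    name: str
    latency_ms: float   # measured round-trip latency via this path
    loss_pct: float     # measured packet loss on this path
    available: bool     # whether the path is currently usable

def score(path: Path, loss_penalty_ms: float = 50.0) -> float:
    # Lower is better: latency plus a penalty per percent of packet loss.
    return path.latency_ms + path.loss_pct * loss_penalty_ms

def choose_route(paths):
    usable = [p for p in paths if p.available]
    return min(usable, key=score) if usable else None

candidates = [
    Path("BGP default (fewest AS hops)", latency_ms=180, loss_pct=2.0, available=True),
    Path("Overlay via PoP A",            latency_ms=120, loss_pct=0.2, available=True),
    Path("Overlay via PoP B",            latency_ms=140, loss_pct=0.1, available=True),
]
print(choose_route(candidates).name)   # picks an overlay path, not the BGP default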

Packet Loss

One way to reduce the impact of packet loss over the Internet was discussed above: route optimization. As noted, the use of route optimization results in selecting a path that provides the lowest possible latency and packet loss for each user session. A technique that reduces the impact of packet loss over both the Internet and a private WAN is Forward Error Correction (FEC). FEC is typically used at the physical layer (Layer 1) of the OSI stack. FEC can, however, also be applied at the network layer (Layer 3), whereby an extra packet is transmitted for every n packets sent. This extra packet is used to recover from an error and hence avoid having to retransmit packets. (A minimal parity sketch appears at the end of this section.)

Bandwidth Constraints

There are many techniques that can be used to mitigate the bandwidth constraints of a WAN:

Compression: The size of a file is reduced prior to transmission and the file is de-compressed on the receiving end.

De-duplication: The only data transmitted across the WAN is the data that has changed since the file was last transmitted.

Caching: A copy of the file is kept at the branch office with the goal of either eliminating or minimizing the number of times that the file has to be re-transmitted from the central site.

Chatty Protocols and Applications

The two most common techniques to mitigate the impact of a chatty protocol or application are request prediction and request spoofing. Request prediction refers to leveraging an understanding of the semantics of specific protocols or applications in order to anticipate a request that a user will make in the near future. Making this request in advance of its being needed eliminates virtually all of the delay when the user actually makes the request. Request spoofing refers to situations in which a client makes a request of a distant server, but the request is responded to locally.

Characteristics of TCP

While the TCP retransmission time-out parameter and the TCP slow start algorithm can negatively impact application performance, both are part of TCP because they provide value. As a result, they cannot simply be ignored. However, TCP performance can be significantly improved if an ADN can dynamically set these parameters based on the characteristics of the network, such as the speed of the links and the distance between the transmitting and receiving devices. There is a strong synergy between route optimization and minimizing the impact of TCP's retransmission timeout and the slow start algorithm. For example, because route optimization chooses the optimum path through the Internet, it is more likely than BGP to choose a path that has minimal congestion and hence avoids the problems associated with the TCP slow start algorithm and TCP's retransmission timeout. In addition, because the path is optimized, it is possible to be more aggressive with both the TCP slow start algorithm and TCP's retransmission timeout without incurring additional congestion.

Network Complexity

There is not much that can be done to reduce the topological complexity that is associated with private WANs. However, as noted, it is possible to minimize the impact of TCP's retransmission timeout and the slow start algorithm by implementing TCP optimization. It is also possible to eliminate the complexity and the negative impact associated with BGP by implementing route optimization.
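The parity sketch referenced in the Packet Loss discussion above is shown below. It uses a single XOR parity packet per group of n data packets, which can rebuild any one lost packet in the group without a retransmission. This is a minimal, assumed illustration of the Layer 3 FEC idea; production FEC schemes are considerably more sophisticated.

# Minimal XOR-parity illustration of packet-level FEC (one parity packet per
# n data packets); real FEC schemes are considerably more sophisticated.

from functools import reduce

def xor_packets(packets):
    # XOR equal-length packets byte by byte to form a parity packet.
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def recover(received, parity):
    # Recover a single missing packet from the survivors plus the parity.
    return xor_packets(received + [parity])

data = [b"pkt1....", b"pkt2....", b"pkt3....", b"pkt4...."]  # n = 4 data packets
parity = xor_packets(data)                                    # 1 extra packet on the wire

lost = data[2]
survivors = [p for p in data if p is not lost]
assert recover(survivors, parity) == lost   # lost packet rebuilt, no retransmission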

THE OPTIMIZED HYBRID WAN

In almost all instances, when a user accesses a SaaS-based application, they do so over the Internet and not over a private WAN service such as MPLS. That follows in large part because, from the perspective of the SaaS provider, one or two high-speed Internet connections are much simpler and more economical to provision and manage than connections to the varying private WAN services offered by multiple network service providers. In addition, the high fixed costs of these private WAN services can detract significantly from the overall cost-effectiveness of providing SaaS-based applications.

The traditional approach to providing Internet access to branch office employees has been to carry their Internet traffic on the organization's enterprise network (e.g., its MPLS network) to a central site where the traffic is handed off to the Internet. The advantage of this approach is that it enables IT organizations to exert more control over their Internet traffic, and it simplifies management, in part because it centralizes the complexity of implementing and managing security policy. One disadvantage of this approach is that it results in extra traffic transiting the enterprise's WAN, which adds to the cost of the WAN. Another disadvantage is that it adds additional delay to the Internet traffic. Optimization of these applications needs to take this backhauling into account.

The 2011 Cloud Networking Report 9 contained the results of a survey in which the respondents were asked to indicate the percentage of their Internet traffic that they carry to a central site over their enterprise WAN prior to handing the traffic off to the Internet. The results of that survey are shown in Figure 8.

Percentage of Internet Traffic Routed to a Central Site    Percentage of Respondents
100%                                                       39.7%
76% - 99%                                                  24.1%
51% - 75%                                                  8.5%
26% - 50%                                                  14.2%
1% - 25%                                                   7.1%
0%                                                         6.4%

Figure 8: Percentage of Centralized Internet Traffic

As shown in Figure 8, the vast majority of IT organizations (72.3%) route the majority of their traffic to a central site prior to handing that traffic over to the Internet. As previously described, a major challenge associated with using multiple networks to access a SaaS-based application is that the end-to-end delay is roughly the sum of the delay of each network. In addition, as was also noted, many IT organizations have implemented WOCs in order to overcome the performance challenges associated with private WAN services. This means that existing WOCs can utilize technology to overcome performance challenges such as TCP's retransmission timeout and slow start algorithm over the private WAN that connects a branch office to a central site. However, in the traditional scenario, once that SaaS traffic is handed off to the Internet, the performance of the application is negatively impacted by those parameters.

Overcoming the performance impairments that branch office employees experience when using multiple networks to access a SaaS-based application requires an end-to-end approach to network and application optimization. This approach is depicted in Figure 9 and will be referred to in this white paper as an Optimized Hybrid WAN. A key component of creating an optimized hybrid WAN is integrating the optimization that is in place for private WANs with the performance gains that are provided by an ADN. As part of this integration, key functionality that is part of the ADN must be integrated into the WOC that sits in the enterprise data center. In addition, WOCs have to be distributed to the PoPs that support the ADN. The integration needs to ensure a seamless handoff of functionality such as TCP optimization between the WOC in the data center and the ADN.

Figure 9: Optimized Hybrid WAN. (WOC optimization over the private WAN between the data center and the branch office is combined with ADN optimization over the Internet to provide end-to-end optimization.)

QUANTIFYING THE PERFORMANCE IMPROVEMENTS

This suite of tests demonstrates the performance improvements that result from the Riverbed Steelhead Cloud Accelerator, a joint solution resulting from the integration of the Steelhead appliances from Riverbed Technology with the ADN provided by Akamai. As shown in Figure 10, the tests involved users in Gothenburg, Sweden; Singapore; and Bangalore, India accessing services in Dublin, Ireland. Some of the users accessed the services in Ireland over a back-hauled optimized hybrid WAN, while others accessed the services using a direct Internet connection to the optimized Akamai ADN.

Figure 10: The Test Environment. (Users in Sweden, Singapore and Bangalore access services in Dublin via an Akamai edge server and a Riverbed Steelhead appliance, over a back-hauled private WAN and optimized Internet connections of 10, 12 and 50 Mbps.)

One of the three sets of tests that one of Akamai's customers ran was downloading an 18 MB file from Dublin to Gothenburg. Prior to implementing optimization, this activity took 10 seconds. Performing the same task using the Steelhead Cloud Accelerator reduced the time to 2.5 seconds, a factor of 4 improvement.

The second set of tests involved uploading and downloading a 22.4 MB PowerPoint file between Dublin and Singapore. Without using any optimization techniques, the round trip delay between Dublin and Singapore over the Internet was 330 ms. After implementing route optimization, the round trip delay was reduced to 209 ms. This reduction in the round trip delay was one of the factors that contributed to the improved performance using the Steelhead Cloud Accelerator. The time it took to download the file from Dublin to Singapore, for example, went from 210 seconds to 4 seconds, a factor of 52.5 improvement, while the time it took to upload the file from Singapore to Dublin went from 240 seconds to 5 seconds, a factor of 48 improvement.

The third set of tests involved a number of tasks that were performed between Dublin and Bangalore, both using the Steelhead Cloud Accelerator solution and directly over the Internet. The use cases that were tested and the comparison in performance are shown in Figure 11.

Use Case                              Internet         Optimized Hybrid WAN   Performance Improvement
Download a 22.4 MB PowerPoint file    38 min, 20 sec   11 seconds             209x
Upload a 22.4 MB PowerPoint file      300 seconds      10 seconds             30x
View a document in a browser          20 seconds       5 seconds              5x
Edit a document in a browser          30 seconds       5 seconds              6x
Change and save a document            10 seconds       3 seconds              3.3x

Figure 11: Test Results Between Bangalore and Dublin

The types of performance improvements described above were confirmed by Joel Smith, CTO of AppRiver. AppRiver is a ten-year-old hosted services provider that focuses on email and security. The company has more than 50,000 customers and manages over eight million mailboxes.

Smith stated that, as a provider of hosted Exchange services, AppRiver used to get high volumes of support calls and that, in the vast majority of instances, the calls were related to issues with the Internet that were beyond their control. He added that after they added Akamai services, there was a 95% reduction in the volume of calls for non-solvable Internet-related issues. According to Smith, the savings in support costs covered the cost of the Akamai services, and the improved performance has given them entree to additional accounts. He added that AppRiver loves the Akamai service.

Smith also stated that when AppRiver was contemplating offering Office 365, they knew that they had to be able to provide the same performance that the customer would get if they hosted SharePoint and Exchange internally. As a step towards achieving that goal, AppRiver decided to conduct tests in order to quantify the performance improvements that result from implementing the combined Akamai Riverbed solution. The parameters for the tests were:

Number of users: 162
Average mailbox size: 1.x GB
Average inbound messages per day per user: 40 - 45
Average outbound messages per day per user: 15 - 20

According to Smith, the test results showed a 60% reduction both in traffic and in the time it takes to complete key tasks.

SUMMARY

The enterprise adoption of SaaS-based applications has been increasing significantly, in large part because the use of these applications can result in both a lower overall cost and a reduced time to implementation. However, because of factors such as the delay and packet loss associated with both the Internet and private WANs, SaaS-based applications tend to exhibit erratic performance that can become a barrier to the further adoption of SaaS.

Any application that is accessed over a WAN is subject to erratic performance due to factors such as network latency, packet loss, bandwidth constraints, chatty protocols, TCP characteristics and overall network complexity. SaaS-based applications, however, are particularly vulnerable because in virtually all instances they are accessed using the Internet and, in a majority of instances, they are accessed using a combination of a private WAN and the Internet. Accessing an application over the Internet creates performance challenges due to the high levels of impairments, such as delay and packet loss, associated with the Internet, as well as the sub-optimal routing that typically occurs within the Internet. Accessing an application using both the Internet and a private WAN creates performance challenges because the sub-optimal routing within the Internet is still a factor and the transmission impairments are additive; e.g., the end-to-end delay is the sum of the delay in the private WAN and the delay in the Internet. One of the negative side effects of increasing the end-to-end delay is that it can trigger TCP's retransmission timeout parameter.

Solutions that improve application performance have been available for years. WOCs, for example, can be placed on each end of a WAN link to improve the performance of a private WAN. ADNs can be used to improve the performance of applications that are accessed over the Internet. The key thing that has been missing from an optimization perspective is the ability to improve the end-to-end performance of applications that are accessed using both a private WAN and the Internet, which in the case of SaaS-based applications is the primary way these applications are accessed. As previously mentioned, further complicating this situation is the fact that it will generally not be possible for an IT organization to place a WOC at the SaaS provider's facilities.

The Riverbed Steelhead Cloud Accelerator is designed to optimize the end-to-end performance of SaaS-based applications. The solution accomplishes this goal by combining WOC functionality from Riverbed with ADN functionality from Akamai. As indicated by the test results, this combination of functionality yields dramatic performance improvements, making performance much less of a barrier to the ongoing adoption of SaaS.

1 Gartner Forecast: Software as a Service, Worldwide, 2010-2015, 2H11 Update, by Sharon A. Mertz, Chad Eschinger, Laurie F. Wurster, Tom Eid, Chris Pang, Yanna Dharmasthira, Hai Hong Swinehart and Fabrizio Biscotti, November 11, 2011
2 http://www.gartner.com/it/page.jsp?id=1735214
3 The phrase private WAN refers to services such as Frame Relay and MPLS that are intended primarily to interconnect the sites within a given enterprise.
4 Gartner Forecast: Software as a Service, Worldwide, 2010-2015, 2H11 Update, by Sharon A. Mertz, Chad Eschinger, Laurie F. Wurster, Tom Eid, Chris Pang, Yanna Dharmasthira, Hai Hong Swinehart and Fabrizio Biscotti, November 11, 2011
5 http://www.webtorials.com/content/2011/07/2011-application-service-delivery-handbook.html
6 There are slight variations between the speed of data flow in copper and the speed of data flow in fiber optics.
7 http://www.tinyvital.com/misc/latency.htm
8 http://en.wikipedia.org/wiki/internet
9 http://www.webtorials.com/content/2011/11/2011-cloud-networking-report.html