Per-Flow Queuing: Allot's Approach to Bandwidth Management




White Paper
Per-Flow Queuing: Allot's Approach to Bandwidth Management
Allot Communications, July 2006. All Rights Reserved.

Table of Contents

- Executive Overview
- Understanding TCP/IP
- What is Bandwidth Management?
- Per-Flow Queuing
- Comparing PFQ with Other QoS Methods
- Summary
- About Allot NetEnforcer and NetXplorer
- About Allot Communications

Executive Overview

The Allot NetEnforcer family of IP service optimization devices offers an innovative and intelligent approach to policy-powered bandwidth management on broadband networks. Allot's patented Per-Flow Queuing (PFQ) is a direct traffic control approach that leverages TCP's inherent flow control mechanisms to achieve the most efficient use of available bandwidth. By applying PFQ, NetEnforcer devices provide accurate per-application control that dynamically shapes both incoming and outgoing traffic flows in real time and at Gigabit speeds.

This paper provides a brief overview of the TCP/IP protocol suite, which governs traffic flows on most IP networks, and explains how Per-Flow Queuing works with the facilities of TCP/IP to allocate sufficient bandwidth for all levels of IP-based applications and services running on broadband networks.

Understanding TCP/IP

The TCP/IP protocol suite includes two main transport protocols, TCP and UDP. The majority of the traffic on today's networks uses TCP at the transport layer, since it provides a reliable flow of data between two end-points. TCP provides a connection-oriented byte-stream service in which the two end-points must establish a connection with each other before they can exchange data. UDP, on the other hand, provides a simpler but unreliable transport layer and is used for streaming applications such as VoIP (voice over IP) and video. Applications that use UDP as the transport layer usually implement some of TCP's abilities, such as rate control, in the application layer to compensate for the lack of these important features in the transport layer.

TCP provides the following transport facilities:

- Reliability - TCP assigns a sequence number to each byte transmitted, and expects a positive acknowledgment (ACK) from the receiving end. If the ACK is not received within a certain interval (the "timeout interval"), the data is retransmitted. The receiving TCP end uses the sequence numbers to rearrange segments that arrive out of order and to eliminate duplicate segments.
- Rate Control - The receiving TCP end-point, when sending an ACK back to the sender, also indicates the number of bytes it can receive beyond the last received TCP segment without overflowing its internal buffers.
- Slow Start and Congestion Avoidance - These two methods are used by TCP to adapt the sending rate of the transmitting end-point to the available bandwidth on the link between the two end-points. This is especially important when there are bottlenecks in the traffic flow.
- Logical Connections - The reliability and rate control mechanisms described above require that TCP initialize and maintain certain status information for each datastream. The combination of this status, including sockets, sequence numbers and window sizes, is called a logical connection.
- Full Duplex - TCP provides for concurrent datastreams in both directions.

In short, TCP is a connection-oriented transport protocol that uses sequence numbers and acknowledgment messages to give the sending node delivery information about packets transmitted to the destination node. If the sending computer is transmitting too fast for the receiving computer, TCP employs rate control mechanisms to slow the data transfer.

What is Bandwidth Management?

Bandwidth management, IP service control, and quality of service (QoS) are general terms for a broad range of techniques designed to control and shape traffic flows on IP networks. Bandwidth management ensures that the maximum amount of traffic flows over the broadband network, end to end, in the most efficient manner possible, so that packets are not dropped or retransmitted. It also provides a method for moving high-priority traffic more quickly through the network, causing business-critical and service-critical applications to respond more quickly.

Most broadband consumers have experienced, at some time or another, the effects of network latency (slow network response). Any subscriber who has tried to use interactive Web applications over a low-speed connection has seen the effect that a large file transfer has on the interactive traffic over the connection. The file transfer easily consumes most of the link's bandwidth and delays the interactive data. The result is a poor user experience. Extrapolate that experience to many customers using the same broadband network and there is a multiplicative effect on operational performance.
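TCP's rate control, described in the overview above, can be quantified: a sender may keep at most one window of unacknowledged data in flight per round trip, so the smaller of the congestion window and the receiver-advertised window caps its throughput. A minimal sketch (our illustration, not Allot code):

```python
# Illustrative only: a TCP sender's throughput is bounded by
#   rate <= min(cwnd, rwnd) / RTT
def max_tcp_rate_bps(cwnd_bytes: int, rwnd_bytes: int, rtt_s: float) -> float:
    """Upper bound on TCP throughput given the congestion window (cwnd),
    the receiver-advertised window (rwnd), and the round-trip time."""
    window = min(cwnd_bytes, rwnd_bytes)
    return window * 8 / rtt_s  # bits per second

# Example: a 64 KB advertised window over a 50 ms RTT path caps the
# sender at roughly 10.5 Mbps, no matter how fast the link is.
rate = max_tcp_rate_bps(cwnd_bytes=1_000_000, rwnd_bytes=65_535, rtt_s=0.05)
print(round(rate / 1e6, 1))  # 10.5 (Mbps)
```

This simple bound is also why, as discussed later in this paper, manipulating the window size is an indirect lever on a connection's rate.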

The source of the problem is that the datagrams (packets) carrying the file transfer data are given the same priority on the link as those of the interactive applications. No consideration is given to the type of data contained within a datagram when deciding which datagram will be transmitted. All datagrams are scheduled for transmission on a "First Come, First Served" basis: when a new datagram arrives, it is added to the end of the transmit queue; when link bandwidth becomes available, the datagram at the head of the queue is transmitted. This is often referred to as "best-effort" transmission.

Traffic shaping through policies allows you to implement a series of network actions that alter the way in which data is queued for transmission. Ultimately, it takes the same amount of time to transmit the entire set of datagrams across a network link regardless of the order in which they are transmitted. However, sacrificing the response time of P2P file transfers by prioritizing interactive VoIP traffic can significantly speed up the response time of those interactive sessions.

Policies define how bandwidth management is to be achieved. QoS policies translate service and SLA requirements into traffic management priorities and actions. Each policy defines both the conditions for matching traffic to the policy and the network actions to apply when the conditions are met.
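The head-of-line delay that "First Come, First Served" imposes on interactive traffic can be seen in a toy single-link simulation (our illustration; link speed and packet sizes are assumptions, not figures from the paper):

```python
# Toy FIFO ("best-effort") link: every packet waits behind all earlier
# arrivals, regardless of what kind of traffic it carries.
from collections import deque

LINK_BPS = 1_000_000  # assumed 1 Mbps access link

def fifo_finish_times(packets):
    """packets: list of (flow, size_bytes) in arrival order.
    Returns {flow: time the flow's last packet finishes transmitting}."""
    queue = deque(packets)
    clock = 0.0
    finish = {}
    while queue:
        flow, size = queue.popleft()
        clock += size * 8 / LINK_BPS   # serialization delay at the head
        finish[flow] = clock
    return finish

# Ten 12.5 KB file-transfer packets arrive just before one small
# interactive packet: the interactive packet waits ~1 s behind them.
times = fifo_finish_times([("file", 12_500)] * 10 + [("voip", 125)])
print(round(times["voip"], 3))  # ~1.001 seconds for a 1 ms packet
```

A shaper that serves the interactive packet first would deliver it in about a millisecond while barely affecting the bulk transfer's completion time, which is the trade-off described above.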
In addition to prioritizing traffic, today's advanced traffic shapers should also provide the following capabilities:

- Setting a minimum amount of bandwidth for an application/user ("guaranteeing")
- Setting a maximum amount of bandwidth for an application/user ("limiting")
- Enforcing a specific CBR (Constant Bit Rate) level for specific connections
- Allowing bursts of traffic on certain connections that exceed maximum defined limits
- Enabling hierarchical policies that ease policy creation and maintenance

The Allot NetEnforcer policy enforcement device offers all of these important traffic shaping features, as well as a customizable Policy Editor, a real-time Traffic Monitor, and IP accounting. The NetEnforcer supports three hierarchical levels for shaping traffic: the connection level; the policy or Virtual Channel (VC) level, which aggregates connections that match a user-defined rule; and the Pipe level, which aggregates several VCs associated with a specific user or IP address.

Per-Flow Queuing

The NetEnforcer uses a unique approach to queuing called Per-Flow Queuing (PFQ). With PFQ, each flow gets its own queue and is treated individually by the NetEnforcer. This enables the NetEnforcer to offer very accurate traffic shaping. The PFQ method is a direct approach to QoS enforcement. Unlike indirect approaches that try to manage the available bandwidth by changing parameters in the packets or flows (such as the TCP window size), Per-Flow Queuing uses TCP's inherent flow control to achieve the most efficient use of available bandwidth. Per-Flow Queuing exploits two important internal mechanisms of TCP, Slow Start and Congestion Avoidance. These mechanisms gradually increase the rate of a data flow until they detect that the link between the two end-points is saturated.
PFQ takes advantage of these mechanisms by artificially (and dynamically) enforcing the proper transmission rate (bandwidth) per flow, in a way that meets the policy requirements and avoids collisions. The transmitting TCP then synchronizes to the rate dictated by the NetEnforcer. The NetEnforcer thus forces each flow to transmit packets at the rate that meets the user-defined policy, including the minimum, maximum and priority definitions.
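The per-flow mechanism described above can be sketched as a set of independently paced queues. This is a minimal illustration of the idea, not Allot's implementation; the class, its fields, and the pacing rule are our assumptions:

```python
# Sketch: one queue per flow, each released at its own policy rate, so the
# sending TCP gradually synchronizes to the enforced rate.
from collections import deque

class PerFlowShaper:
    def __init__(self):
        self.queues = {}     # flow id -> deque of packet sizes (bytes)
        self.rate_bps = {}   # flow id -> policy rate for that flow
        self.next_send = {}  # flow id -> earliest time its next packet may go

    def enqueue(self, flow, size_bytes, rate_bps):
        # A queue is created lazily per flow, mirroring the paper's
        # dynamically generated per-flow queues.
        if flow not in self.queues:
            self.queues[flow] = deque()
            self.rate_bps[flow] = rate_bps
            self.next_send[flow] = 0.0
        self.queues[flow].append(size_bytes)

    def dequeue(self, flow, now):
        """Release one packet if the flow's pacing interval has elapsed."""
        q = self.queues.get(flow)
        if not q or now < self.next_send[flow]:
            return None
        size = q.popleft()
        # Space packets so the flow's long-run rate matches its policy.
        self.next_send[flow] = now + size * 8 / self.rate_bps[flow]
        return size

shaper = PerFlowShaper()
shaper.enqueue("flow-1", 125_000, rate_bps=1_000_000)
shaper.enqueue("flow-1", 125_000, rate_bps=1_000_000)
print(shaper.dequeue("flow-1", now=0.0))  # 125000 (sent immediately)
print(shaper.dequeue("flow-1", now=0.5))  # None (next slot is at t=1.0)
print(shaper.dequeue("flow-1", now=1.0))  # 125000
```

Because the second packet is held until the pacing interval elapses, the flow's observed rate converges to the 1 Mbps policy rate, which is what the sender's TCP then adapts to.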

How PFQ Works

Allot's Per-Flow Queuing is implemented by the QoS enforcement module in the NetEnforcer. Each packet that enters the NetEnforcer's QoS enforcement module is matched to the proper flow by the Flow Identifier and inserted into that flow's queue. If the packet does not match any of the existing flows, the New Flow Generator examines the conditions and characteristics of the flow and matches it to the proper policy (policies are represented by "Virtual Channels" in the NetEnforcer). The new flow's queue is then added to the system.

When a packet arrives at the QoS enforcement module, the module checks whether the flow's guaranteed bandwidth has been exhausted and whether its maximum limit has been reached. If the guaranteed bandwidth has not been exhausted, the packet is transmitted immediately (without any delay). If the maximum limit for the flow has been reached, the packet is placed in a buffer. Otherwise, the packet is placed in its flow queue and transmitted based on the priority of the flow and the available bandwidth.

The queues are created and grown dynamically: a queue is created per flow and closed once the flow ends, so system resources are used optimally. The NetEnforcer does not assign a predefined buffer size per queue; it manages a large buffer bank and dynamically assigns each queue only the buffer space its flow requires at any given time. Thus even large temporary queues for peaks or bursts of a flow can be accommodated.

The QoS enforcement module uses a very accurate scheduler, which decides which flow may send a packet at any given moment. After a packet is sent, the system decides which flow will send the next packet, based on the defined policy and the number of packets already sent by each flow.

Figure 1: Schematic drawing of the NetEnforcer QoS enforcement module
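The Flow Identifier step above amounts to classifying each packet by its connection identity. A common way to do this, sketched below under our own assumptions (the field names and dict layout are illustrative, not Allot's data structures), is to key on the classic 5-tuple:

```python
# Sketch: match a packet to its flow by (src IP, dst IP, src port,
# dst port, protocol); unknown 5-tuples get a new queue, echoing the
# "New Flow Generator" described above.
def flow_key(pkt: dict) -> tuple:
    """The 5-tuple that identifies a flow."""
    return (pkt["src_ip"], pkt["dst_ip"],
            pkt["src_port"], pkt["dst_port"], pkt["proto"])

flows = {}  # flow key -> that flow's queue of packets

def classify(pkt: dict) -> tuple:
    key = flow_key(pkt)
    if key not in flows:
        # New flow: create its queue. A real device would also match the
        # flow to its policy (Virtual Channel) at this point.
        flows[key] = []
    flows[key].append(pkt)
    return key

pkt = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
       "src_port": 40000, "dst_port": 80, "proto": "TCP"}
classify(pkt)
classify(pkt)  # same 5-tuple -> same queue
print(len(flows), len(flows[flow_key(pkt)]))  # 1 2
```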

Per-Flow Queuing: Two Examples

Example #1: Synchronizing Flow Rates with the NetEnforcer

A flow is established between Client A and Server B. The NetEnforcer enforces a specific rate on the flow based on the minimum/maximum bandwidth and priority definitions of the flow. At the start of the connection, a larger buffer may be allocated, but as the sender synchronizes to the enforced rate, almost no buffering is required. This enables the network to operate at maximum efficiency.

Figure 2: TCP connection synchronizes to the rate dictated by the NetEnforcer

If additional connections are added to the system, the original flow may have to transmit at a lower rate, in accordance with the defined policies. This may occur because the new connections have a higher priority, or because they are of the same traffic type/policy (e.g., the same priority level) and are treated with fairness in accessing the link: flows with the same priority occupy the same percentage of the link's bandwidth. Either way, the NetEnforcer reduces the rate and the TCP adapts to the new rate.

Figure 3: The TCP synchronizes to the new rate dictated by the NetEnforcer

Example #2: Ensuring Fairness Between Connections with PFQ

Two flows, one red and one blue, pass through a bottlenecked broadband link. The blue flow tries to transmit at a higher rate than the red one. If the policy defines that the red and blue flows should be given the same priority, PFQ provides fairness while shaping the traffic. Without the NetEnforcer, there is no fairness between the flows, and the blue flow consumes most of the available bandwidth.

When the blue connection increases its rate (Figure 4), both the blue and red connections start randomly dropping packets at the congestion point where the wide link meets the narrow link. This usually occurs at the access router, where one side has a relatively narrow link (e.g., 1 Mbps) and the other a relatively wide link (10-34 Mbps).

Figure 4: Increasing the sending rate without the NetEnforcer

Eventually, the senders' TCP will synchronize to the bandwidth available at the bottlenecked link. Fairness between the connections has not been accomplished: both the red and the blue connection had to reduce their sending rates. Packets are no longer dropped, because the bottlenecked link can now pass all the arriving traffic, but the packets that were dropped had to be retransmitted, wasting additional bandwidth.

Figure 5: Multiple connections reduce the sending rate without the NetEnforcer

When using the NetEnforcer, both connections traverse the bottleneck link at the same rate. The red connection now sends more traffic at the expense of the blue connection: the NetEnforcer delays the packets of the blue connection in the proper queue, and the internal scheduler determines the time at which each packet should be transmitted.

Figure 6: Reducing the sending rate (without dropping packets) with the NetEnforcer

After a while, the TCP flow control of the blue flow synchronizes to the rate dictated by the NetEnforcer. Once the sending rate has synchronized, there are no packets left in the queue. This is the normal state of the NetEnforcer: most of the queues are empty (or nearly empty, with only one packet) most of the time. Packets remain in the queue only until the sending TCP adapts to the rate dictated by the NetEnforcer.
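The fairness outcome in this example can be reproduced with a toy simulation (ours, not Allot code) in which per-flow queues are served round robin. Even a plain round-robin scheduler, a much simpler stand-in for the NetEnforcer's policy-aware scheduler, already equalizes the shares of two equal-priority flows:

```python
# Toy simulation: round-robin service over per-flow queues gives two
# equal-priority flows equal shares of a bottleneck link, even when one
# flow offers far more traffic than the other.
from collections import deque

def round_robin_bytes(queues, budget_bytes):
    """Serve flows one packet at a time until the link budget is spent."""
    sent = {f: 0 for f in queues}
    while budget_bytes > 0 and any(queues.values()):
        for f, q in queues.items():
            if q and budget_bytes > 0:
                size = q.popleft()
                sent[f] += size
                budget_bytes -= size
    return sent

# "blue" offers 3x the traffic of "red", yet each gets the same share
# of a link that can carry 200 packets in this interval.
queues = {"blue": deque([1500] * 300), "red": deque([1500] * 100)}
shares = round_robin_bytes(queues, budget_bytes=200 * 1500)
print(shares["blue"], shares["red"])  # 150000 150000
```

With a FIFO queue instead, the blue flow's extra packets would crowd out red's; serving each flow from its own queue is what removes that advantage.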

Figure 7: No packets in the queue

The Benefits of Per-Flow Queuing

Per-Flow Queuing offers a method for direct implementation of QoS and uses TCP's inherent flow control to achieve the most efficient use of available bandwidth. Additional abilities and characteristics of PFQ include:

- Maximal use of the available bandwidth - The scheduling mechanism transmits packets as long as there is available bandwidth. This ensures maximal link utilization, which results in maximum application performance.
- Very accurate policy enforcement - The scheduling mechanism enforces the policy definitions at extremely high resolution: it provides bandwidth management at the resolution of a single packet.
- Traffic smoothing - As part of its accurate scheduling, the NetEnforcer smooths the bandwidth, providing a more stable, constant rate of consumption that helps avoid collisions and packet drops.
- Fairness between connections - One of the important benefits of the PFQ method is fairness among all connections. Two connections with the same priority get the same bandwidth even if one of them tries to transmit at a higher rate. This is one of the basic requirements of a traffic shaper.
- Independence from endpoint flow control (the TCP/IP stack at the endpoints) - Unlike other traffic shaping implementations, the NetEnforcer is independent of the flow control mechanisms at the endpoints. This enables the NetEnforcer to use the same algorithms for both TCP application traffic and UDP application traffic, in which rate control is implemented independently of the transport layer.
- Per-connection CBR enforcement - By accurately controlling the rate at which packets are transmitted per flow, the NetEnforcer reduces jitter and enhances the end-user experience. Reducing jitter is critical for achieving acceptable performance levels for streaming applications such as VoIP and video.

Comparing PFQ with Other QoS Methods

Many products today use queuing approaches such as WFQ (Weighted Fair Queuing) or CBQ (Class-Based Queuing). These queuing algorithms provide fairness between different classes or priorities of traffic. However, flows within the same priority class have no consistent fairness policy. If a connection arrives with a given priority or guaranteed bandwidth, it is put on a certain queue. As traffic on the router begins to queue up and more connections arrive in that priority class, the new connections always go to the back of the queue and wait until all previously queued packets are sent. The end result is inconsistent and unpredictable delivery of traffic.

Class-Based Queuing (CBQ)

The main differences between PFQ (Per-Flow Queuing) and CBQ (Class-Based Queuing) are:

- CBQ does not provide fairness between connections. All connections that match a certain class share the bandwidth of the class without fairness among the connections.
- CBQ cannot provide CBR (Constant Bit Rate) per connection because it does not treat connections individually.
- CBQ is usually used at only a single hierarchical level, with a limited or fixed number of classes.

Weighted Fair Queuing (WFQ)

Weighted Fair Queuing (WFQ) has limitations similar to those of CBQ. WFQ and CBQ are only prioritization methods and do not enforce minimum and maximum levels of bandwidth per connection or per class. Per-flow WFQ is the prioritization method most similar to Allot's PFQ mechanism among the known scheduling mechanisms, and is considered one of the most accurate scheduling methods. The main difference between per-flow WFQ and Allot's PFQ mechanism is that Allot's method offers better performance with nearly the same accuracy.

TCP Rate Control

TCP Rate Control uses two main mechanisms to achieve bandwidth management: (a) changing the window size field in the TCP header and (b) generating an intentional delay.
This indirect approach to bandwidth management has several important drawbacks:

- Inaccurate QoS enforcement - TCP Rate Control tries to enforce the rate per connection by changing the window size instead of enforcing the rate directly on the traffic passing through the bandwidth management device. This results in relatively inaccurate enforcement.
- Real-world window sizes are not static - Real networks are very dynamic in nature. The window size to set for a connection depends on the policy defined by the user and on the actual rate of all other connections (not on their window sizes). No one can accurately predict the actual transmission rate of all other connections, so the window size setting is no more than a good guess. Studies have shown that this method provides inaccurate traffic shaping.
- Inaccurate CBR enforcement - With TCP Rate Control, you cannot accurately enforce CBR. The PFQ approach can easily provide CBR (Constant Bit Rate) enforcement by using the flow's queue to control the rate at which packets are transmitted from the NetEnforcer. Since TCP Rate Control does not queue packets, it cannot provide this ability.
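The inaccuracy of window-based control follows directly from the window/rate relationship. The window a shaper must advertise for a target rate depends on the round-trip time, so a wrong RTT estimate shifts the achieved rate proportionally. A small worked sketch (our illustration; the numbers are assumptions):

```python
# Window-based rate control: to hit a target rate, the advertised window
# must satisfy  window_bytes = target_bps * RTT / 8.  If the real RTT
# differs from the estimate, the achieved rate is off by the same factor.
def window_for_rate(target_bps: float, rtt_s: float) -> float:
    """Window (bytes) needed for target_bps at the assumed RTT."""
    return target_bps * rtt_s / 8

def achieved_rate(window_bytes: float, rtt_s: float) -> float:
    """Rate (bps) actually achieved with that window at the real RTT."""
    return window_bytes * 8 / rtt_s

w = window_for_rate(1_000_000, rtt_s=0.040)  # window sized for a 40 ms RTT
print(achieved_rate(w, rtt_s=0.080) / 1e6)   # RTT doubles -> only 0.5 Mbps
```

Queuing the packets directly, as PFQ does, sidesteps this estimate entirely: the shaper controls departure times itself rather than inferring them from the endpoints' windows.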

- Slow recovery for slowed connections - TCP Rate Control causes slow recovery for connections that were previously slowed down. When extra available bandwidth is detected, the rate control updates the window size of the connection, but only at the next transmission does the connection increase its data rate. In the meantime, expensive bandwidth and time are wasted.
- Dependence on both TCP stacks - TCP Rate Control depends strongly on the TCP stacks of both end points. TCP stacks are not uniform and can change over the years. PFQ does not rely on a specific implementation of flow control (rate control) and only requires rate control at one of the end points.
- Poor performance for short connections - TCP Rate Control performs poorly on short connections (connections that transfer only a few packets per session). In current networks, a majority of the traffic consists of short HTTP (web) connections. These connections never get the chance to enlarge or shrink their window size before they finish transferring data. For these connections, changing the window size is irrelevant, and a direct approach must be used to manage them.
- No broadcast and multicast support - The rate control method does not support broadcast or multicast traffic.
- No support for UDP traffic - With TCP Rate Control, you cannot enforce QoS on UDP traffic; it works only with TCP traffic. PFQ is protocol-independent, requiring only that some form of flow and rate control exist at the endpoints. Applications that use UDP implement rate control in the application layer (for example, RTP uses RTCP for this purpose).

One of the main claims in favor of TCP Rate Control is that packets are never dropped. Unfortunately, this is not possible while fully utilizing the broadband link. The reason is simple: if you set the window size conservatively (smaller than you think is necessary), each window will have unused space and you will not fully utilize the link. Otherwise, you will eventually have dropped packets and retransmissions.

Summary

The Per-Flow Queuing method treats each individual flow separately and enforces the defined policy on your network in the most efficient way. PFQ directly controls traffic speed, using TCP's built-in mechanisms to synchronize the transmission speed between the two end points. Other methods control the rate (the speed of the data flow) only indirectly, by changing parameters (e.g., window size) in the TCP protocol.

About Allot NetEnforcer and NetXplorer

NetEnforcer

Allot NetEnforcer traffic management and service optimization devices provide the granular visibility and dynamic control that network operators need to guarantee the delivery, performance and profitability of broadband services. Allot NetEnforcer devices thoroughly inspect, monitor, and control network traffic, per application and per user. They are available in a variety of models designed to suit the requirements of any broadband network:

- NetEnforcer AC-1000 carrier-class series
- NetEnforcer AC-2500 five-gigabit carrier-class series

NetXplorer

Allot NetXplorer management software provides a consolidated picture of all traffic on the network. This centralized management system works in harmony with the NetEnforcer devices in the network to provide the network business intelligence needed to manage and differentiate broadband services. Its intuitive interface, rich functionality, and wide array of on-demand reports help network operators:

- Track bandwidth usage, per application and per user
- Analyze traffic patterns and usage trends
- Identify malicious traffic and neutralize attacks
- Differentiate service offerings
- Translate SLAs into service control policies, and much more

About Allot Communications

Allot Communications is the leading provider of intelligent IP service optimization solutions based on deep packet inspection (DPI) to the world's leading service providers and enterprises. Allot's unparalleled DPI technology provides the granular visibility that enables broadband service providers to monitor and analyze network usage; to control IP service delivery; to assure quality of experience; to maximize ROI on infrastructure investments; and to increase average revenue per user.