Application Note

Dynamic Load Balancing with Director Pro

This application note explains the dynamic load balancing functionality of Net Optics Director Pro network controller switches, including its advanced high availability features.

Static and Dynamic Load Balancing

The purpose of load balancing in a network traffic monitoring context is to distribute traffic evenly among a number of monitoring tools in order to process more traffic than a single tool can support. Dynamic load balancing differs from the static load balancing techniques offered in other products (and also available in Director Pro) in that a dynamic load balancer constantly adjusts the way it distributes traffic flows based on how loaded each tool actually is, whereas static load balancing algorithms depend strictly on a fixed set of rules relating to characteristics of the input traffic. In static load balancing algorithms, the specific packet characteristics used to distribute the traffic are chosen to generate a random distribution of flows in order to produce a balance that is even on average, while a dynamic load balancer actually measures how even the balance is and adjusts to maintain an even balance.

Figure 1: Dynamic and static load balancing compared (static: tool assigned based on packet header information; dynamic: tool assigned based on actual tool loading)

Thus a dynamic load balancer should produce an even balance regardless of the characteristics of the input traffic, while a static load balancer may produce uneven loads for certain traffic distributions. For example, a static load balancer using the simple rule "send traffic with odd-numbered source IP addresses to tool #1 and traffic with even-numbered source IP addresses to tool #2" will produce an even balance only if the traffic has a random (which is to say, even) distribution of even and odd source IP addresses. Some load balancers, such as Net Optics xBalancer, mitigate this weakness of static load balancing by hashing the header fields that are selected as balancing criteria, effectively randomizing them. Each application should evaluate the chosen balancing method against the actual input traffic to validate how even a balance is achieved. Static and dynamic load balancing are compared further at the end of this paper.
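To make the contrast concrete, the short Python sketch below (illustrative only, not Net Optics product code) applies a naive odd/even static rule and a hashed static rule to a deliberately skewed set of source addresses; the naive rule splits the packets 90/10, while the hashed rule typically lands much closer to an even split.

import hashlib
from collections import Counter

NUM_TOOLS = 2

def last_octet(ip):
    return int(ip.rsplit(".", 1)[1])

def naive_static(ip):
    # "odd source addresses to one tool, even source addresses to the other"
    return last_octet(ip) % NUM_TOOLS

def hashed_static(ip):
    # Hash the source address first, effectively randomizing the choice.
    return hashlib.sha1(ip.encode()).digest()[0] % NUM_TOOLS

# Skewed input: 90 packets from even-numbered addresses, 10 from odd ones.
traffic = ["192.168.1.%d" % (2 * i) for i in range(90)]
traffic += ["192.168.1.%d" % (2 * i + 1) for i in range(10)]

print("naive rule :", Counter(naive_static(ip) for ip in traffic))
print("hashed rule:", Counter(hashed_static(ip) for ip in traffic))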
Director Pro's Dynamic Load Balancing Engine

The Director Pro network controller switch includes a dedicated hardware accelerator called the Pro engine, which provides true dynamic load balancing and 10 Gbps line-speed Deep Packet Inspection (DPI). The dynamic load balancer actively monitors the amount of traffic going to each tool and allocates each new traffic flow to the least utilized tool. This method is dynamic because the traffic is allocated to the tools based on the changing loads the tools experience. The Pro engine dynamic load balancer has the following characteristics:

It takes a single input traffic stream and distributes it to 2 to 32 output channels (tools, or output ports).

Its bandwidth is 10 Gbps, so it is ideal for distributing traffic from a 10 Gigabit network link to multiple 1 Gigabit tools. The high input bandwidth also enables traffic from many 1 Gigabit links to be aggregated and then distributed to multiple tools.

It supports an unlimited number of flows.

It can be used in conjunction with aggregation, filtering, and DPI; load balancing is applied after these operations have processed the traffic.

It has three flow-based balancing modes plus a packet-based round-robin mode.

A spare port can be allocated to provide N+1 redundancy.

An overflow mode is supported, allocating additional tools when existing tools reach capacity.

In a multi-unit system, the inputs and outputs of the flow balancer can cross chassis boundaries freely. For example, if one Director Pro is daisy-chained with one Director, the dynamic load balancer in the Director Pro can take in traffic from a network port on the Director and load balance it to monitor ports on the Director.

Flow-Based Load Balancing

When traffic is distributed to multiple tools, it is important for each tool to receive traffic that it can analyze successfully. To make sense of IP traffic, tools usually need to see a complete conversation between two endpoints. For example, in a call center VoIP call recording application, it is desirable to keep the entire telephone call, including both directions of traffic, confined to a single traffic recorder so that it is not necessary to piece together segments from several recorders in order to play back a call. A set of related packets, such as the packets that make up an IP conversation, is called a flow. Director Pro's dynamic load balancer has three balancing modes that maintain flow coherency (they ensure that all of the packets in a given flow are sent to the same tool). Director Pro's three flow-based load balancing modes differ only in the way in which flows are defined:

Flows are identified by conversation, where a conversation means an IP address pair (packets moving in both directions between two IP addresses)

Flows are identified by the IP source address (all packets coming from a given endpoint)

Flows are identified by the IP destination address (all packets going to a given endpoint)

No matter which way a flow is identified, dynamic load balancing works by assigning each new flow received by the load balancer to the output channel that has passed the smallest number of packets since load balancing began. By continuously sending new traffic to the least-used output channel, the channel loads become balanced over time.
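The following Python sketch is a minimal model of the flow-based behavior just described; it is illustrative only and does not represent the Pro engine's internal implementation. It shows the three ways a flow key can be formed and how each new flow is assigned to the output channel that has passed the fewest packets so far.

def flow_key(src_ip, dst_ip, mode):
    if mode == "conversation":        # IP address pair, both directions
        return tuple(sorted((src_ip, dst_ip)))
    if mode == "source":              # all packets from a given endpoint
        return src_ip
    if mode == "destination":         # all packets to a given endpoint
        return dst_ip
    raise ValueError("unknown mode")

class FlowBalancer:
    def __init__(self, channels, mode="conversation"):
        self.mode = mode
        self.packets = [0] * channels   # packets sent per output channel
        self.flows = {}                 # flow key -> assigned channel

    def route(self, src_ip, dst_ip):
        key = flow_key(src_ip, dst_ip, self.mode)
        if key not in self.flows:
            # New flow: assign it to the channel with the fewest packets so far.
            self.flows[key] = self.packets.index(min(self.packets))
        channel = self.flows[key]
        self.packets[channel] += 1
        return channel

# Flow coherency: both directions of a conversation reach the same channel.
lb = FlowBalancer(channels=4)
assert lb.route("10.0.0.1", "10.0.0.2") == lb.route("10.0.0.2", "10.0.0.1")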
The preceding description is a simplified explanation of how the dynamic load balancer works. The detailed algorithms are designed to maintain an even load balance over time, even if the characteristics of the traffic, such as packet size and address distribution, change over time. Two points to keep in mind when using the dynamic load balancer:

The dynamic load balancer passes only IP traffic in the flow-based balancing modes. Non-IP packets are dropped from the traffic stream.

The dynamic load balancer maintains an even loading across the output channels when the traffic contains a large number of data flows. If the number of data flows relative to the number of output channels is low, the dynamic load balancing algorithm may not be as effective. For example, if you specify eight output channels and send traffic containing only eight flows, the load balancing algorithm does not guarantee that each channel will receive one flow. For traffic containing a low number of flows, static load balancing may be more effective.

Packet round-robin load balancing

The Pro dynamic load balancer supports a packet-based round-robin mode for applications where it is not important to keep flows together. In packet round-robin mode, the load balancer sends the first packet to the first output channel, the second packet to the second output channel, and so on, starting over again at the first output channel after reaching the last channel. Thus every output channel receives the same number of packets, regardless of the traffic characteristics. In packet round-robin mode, the dynamic load balancer passes all traffic, including both IP and non-IP packets.

Because packet round-robin mode ensures an equal number of packets in each channel, it is possible for the number of bytes in each channel to differ. Although this effect is not likely to be significant in a real-world application, it is possible to construct a traffic profile that results in very unbalanced loads as measured by total bytes or bytes per second. For example, configure the load balancer with two output channels and send in a traffic stream that consists of a 64-byte packet followed by a 1,518-byte packet, repeating. One output channel will receive roughly 24 times as many bytes as the other (1,518 / 64 is about 23.7).

Figure 2: Packet round-robin load balancing (each packet assigned to the next tool in rotation)
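The byte-imbalance example above is easy to verify with a few lines of Python (illustrative only):

# Two output channels, traffic alternating 64-byte and 1,518-byte packets.
packet_sizes = [64, 1518] * 1000
channel_bytes = [0, 0]

for i, size in enumerate(packet_sizes):
    channel_bytes[i % 2] += size            # strict per-packet round-robin

print(channel_bytes)                        # [64000, 1518000]
print(channel_bytes[1] / channel_bytes[0])  # about 23.7x more bytes on one channel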
High-Availability Load Balancing
Link State Awareness

Some applications require that monitoring should persist even if a monitoring tool fails. For example, a typical use of dynamic load balancing is to apply multiple forensic recorders to record all of the traffic on a network link when a single recorder does not have sufficient bandwidth to handle the job alone. The Pro engine dynamic load balancer has a link state awareness feature that causes traffic to be redistributed among all active tools when a link goes down or comes back up. This scenario is illustrated in the following figure.

Figure 3: Dynamic load balancing with link-state awareness (initial state with all tools operating; one tool fails and traffic is reallocated to the three remaining tools; a second tool fails and traffic is reallocated to the two remaining tools; a failed tool recovers and traffic is reallocated across three tools)

When link-state awareness is enabled, a link up or down transition on any load balancer output channel causes the traffic to be redistributed (as if the load balancer were reset and restarted) to the channels with up links. When link-state awareness is disabled, traffic is always distributed as if all the links are up, and if a link goes down, the traffic being sent to that channel is lost. Note that when link-state awareness is enabled and a link goes up or down, traffic redistribution may break and reassign flows in progress on any channel.
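A minimal sketch of link-state-aware behavior, assuming the redistribute-on-transition semantics described above (illustrative only, not the Pro engine implementation):

class LinkAwareBalancer:
    def __init__(self, channels):
        self.link_up = [True] * channels
        self.packets = [0] * channels
        self.flows = {}                 # flow key -> assigned channel

    def set_link(self, channel, up):
        self.link_up[channel] = up
        # Redistribute as if the balancer were reset and restarted:
        # existing flow assignments may be broken and reassigned.
        self.flows.clear()
        self.packets = [0] * len(self.packets)

    def route(self, flow_key):
        if flow_key not in self.flows:
            # Only channels whose links are up are candidates (assumes at least one is up).
            up = [c for c, ok in enumerate(self.link_up) if ok]
            self.flows[flow_key] = min(up, key=lambda c: self.packets[c])
        channel = self.flows[flow_key]
        self.packets[channel] += 1
        return channel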
High-Availability Load Balancing
N+1 Redundancy

Link-state awareness is not always a good solution for fault-tolerant load balancing. For example, the application may not be able to tolerate the flow reallocation that takes place when a link goes up or down in link-state aware load balancing. As another example, if the load is being balanced to just two tools and one tool fails, directing all the traffic to the remaining tool may oversubscribe it, resulting in lost traffic. In such a case, the organization may be willing to dedicate a third tool as a hot standby in case one of the two active tools fails. Director Pro enables the third tool to be attached to a spare output port, and if either of the two active tools fails, the traffic that was being sent to that tool is automatically switched to the hot standby tool. This scenario is illustrated in the following figure.

Figure 4: Dynamic load balancing hot spare replacing a failed tool

Director Pro's dynamic load balancer supports a single spare port for attaching a hot spare tool, providing N+1 redundancy when N active tools are deployed. Traffic is switched to the spare port when link is lost on an active tool port. When the link comes back up, traffic is switched back from the spare tool to the original tool. The load balancer's spare port feature is designed to maintain monitoring continuity in spite of a single tool failure. If more than one tool fails, the traffic from all of the down tools is sent to the hot spare tool. This condition could result in overrunning the capacity of the spare tool. Note that N+1 redundancy can be generalized to N+M redundancy, with N active tools and M spares. For example, Net Optics xBalancer supports N+M redundancy, while Director Pro supports only N+1.
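The spare-port behavior can be summarized in a few illustrative lines of Python (the channel and port numbering here is hypothetical):

def effective_port(channel, link_up, spare):
    # A channel whose link is down falls back to the single spare port; if
    # several channels are down at once, they all share that one spare.
    return channel if link_up[channel] else spare

link_up = [True, False, True, True]       # channel 1 has lost its link
print([effective_port(c, link_up, spare=4) for c in range(4)])   # [0, 4, 2, 3]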
Overflow mode

In some applications, it may be helpful to send all of the traffic to a single tool. Then, when the traffic bandwidth begins to near the tool's capacity, add a second tool and load balance the traffic between the two tools. A third tool could be added when the first two tools reach saturation, and so on with a fourth, fifth, and more tools. Director Pro supports this type of operation with the dynamic load balancer's overflow mode. Overflow mode is a convenient way to keep the traffic confined to the minimum number of tools necessary. For example, if the tools are forensic recorders, all of the traffic can be captured by a single forensic recorder as long as it has the capacity available, and then by two recorders, and so on.

Overflow mode is also a good solution for dealing with network traffic growth. For example, a new 10G link may need to carry only a few hundred Mbps of traffic today, and a single forensic recorder could be deployed to capture all of the traffic of interest. If the load balancer is set up ahead of time in overflow mode with four output channels allocated, the administrator can monitor the traffic load week by week or day by day. When the traffic grows to the point of overrunning the traffic recorder, a second recorder can be deployed. Without touching the load balancer, Director Pro will automatically start sending traffic to the second recorder when the traffic load passes a pre-configured threshold, such as 80 percent of the first recorder's capacity. Thus expensive traffic recorders can be added to the load balancing set one by one as the link utilization grows, enabling the CAPEX for the tools to be spread out over the growth period.

In overflow mode, the Pro engine dynamic load balancer can be configured for 2 to 32 output channels, the same as in non-overflow mode. You can specify the minimum number of channels to use, in other words, how many channels should be used initially. For example, you may want to start off using two tools instead of one. Finally, you set a utilization threshold, and the remaining channels are automatically switched into the load balancing set one at a time, adding the next channel when the currently active channels exceed the programmed utilization level. To be more precise, the next output channel is added to the load balancing set when the aggregate traffic bandwidth exceeds the programmed threshold level times the aggregate capacity of the active output ports. Overflow mode can be used together with link-state awareness and N+1 redundancy.

Figure 5: Dynamic load balancing in overflow mode (all traffic is sent to Forensic 1 initially; as the active recorders reach the utilization threshold, the next recorder is added, until traffic is balanced to all four tools; a recorder remains in the load balancing output set even if its utilization later drops below the threshold)
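The following Python sketch models the overflow rule stated above, with the next channel joining the set when the aggregate input exceeds the threshold times the aggregate capacity of the active channels. It is illustrative only; in particular, it ignores the fact that the product keeps a channel in the set once it has been added.

def active_channels(input_gbps, capacities_gbps, threshold=0.80, min_ports=1):
    """Channels needed for a given load under the overflow rule (stateless sketch)."""
    active = min_ports
    while active < len(capacities_gbps):
        if input_gbps <= threshold * sum(capacities_gbps[:active]):
            break
        active += 1                       # next channel joins the set
    return active

# Four 1 Gbps recorders, 80% threshold, starting with a single recorder:
recorders = [1.0, 1.0, 1.0, 1.0]
for load in (0.5, 0.9, 1.7, 2.5):         # Gbps of input traffic
    print(load, "Gbps ->", active_channels(load, recorders), "recorder(s)")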
Weighted Outputs

While the goal of load balancing is to balance the load evenly, sometimes an uneven balance is preferred. The use case that benefits from a deliberately uneven balance is one where the capacities of the attached tools are not all the same. For example, perhaps we have three tools, each of which can process 1 Gbps of throughput, and a fourth tool that can process 3 Gbps of throughput. In this case, it is desirable to send the 3 Gbps tool three times as much traffic as is sent to each of the 1 Gbps tools. Some load balancers support this requirement by allowing the outputs to be weighted. In this case, the outputs would be weighted 1-1-1-3, the fourth tool being allocated three packets (or flows) for each one packet (or flow) the other tools receive. In practice, the load balancer operates as if it had six evenly weighted outputs, three of which are actually sent to the fourth tool, as shown in the following figure.

Figure 6: Weighted outputs for tools with different capacities (tool 4 has three times the capacity of the other tools; outputs are weighted 1-1-1-3 so tool 4 receives three times as many flows as the other tools)
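A short illustrative Python sketch of the 1-1-1-3 weighting described above, expanding the weights into six evenly weighted slots, three of which map to tool 4:

import itertools

def weighted_slots(weights):
    """Expand per-tool weights into a flat rotation of slots."""
    slots = []
    for tool, weight in enumerate(weights, start=1):
        slots.extend([tool] * weight)
    return slots

rotation = itertools.cycle(weighted_slots([1, 1, 1, 3]))   # 1, 2, 3, 4, 4, 4, ...
print([next(rotation) for _ in range(12)])   # tool 4 gets three of every six assignments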
Trade-offs of Static and Dynamic Load Balancing

The following table highlights the relative advantages and disadvantages of load balancing with a filter-based static load balancer, a hash-based load balancer, and a dynamic load balancer.

Table 1: Load Balancing Types Compared

Characteristic                          Dynamic    Hash-based    Static, filter-based
Balance evenness                        Best       Good          Good
Flow coherent                           Yes        Yes           Yes
Configuration complexity                Simple     Simple        Complex
Dependence on traffic characteristics   Least      Little        Most
Flexibility                             Least      Average       Best

While the previous table compares the fundamental differences between static and dynamic load balancing, the following table compares features available in different load balancing products. We compare dynamic load balancing as implemented in Director Pro and Director xStream Pro, hash-based load balancing as implemented in xBalancer, filter-based static load balancing as implemented in Director and Director xStream, and a generic implementation of load balancing that might be found in other products.

Table 2: Load Balancing Features Compared

Feature                       Director Pro                   xBalancer                       Director xStream        Generic Load Balancer
Type of load balancing*       Dynamic                        Hash-based                      Static, filter-based    Hash-based
Flow coherent                 Yes                            Yes                             Yes                     Yes
Balancing throughput          10 Gbps                        480 Gbps                        480 Gbps
Balance criteria              SIP+DIP, SIP only, DIP only    Any mix of nine header fields   Totally flexible**      Fixed selection
Packet round-robin mode       Yes                            No                              No                      No
Link-state awareness          Yes                            Yes                             No                      No
N+M redundancy                N+1                            N+M                             No                      No
Overflow mode                 Yes                            No                              No                      No
Weighted outputs              Yes                            No                              No                      No
Inline load balancing***      Yes                            Yes                             No                      No
Heartbeat health checking     No                             Yes                             No                      No
Link fault mirroring          No                             Yes                             No                      No
Information link              Director Pro                   xBalancer                       Director xStream

* Filter-based static load balancing can also be implemented on any filtering platform.
** Director xStream can use any header fields, including user-definable fields, as criteria for static load balancing.
*** Applies to load balancing to inline tools such as IPSs; this topic is discussed in a separate application note.
Configuring the Pro Engine's Dynamic Load Balancer

This section walks through configuring dynamic load balancers using the Director Pro Command Line Interface (CLI). The dynamic load balancer is set up in two steps:

Configure the load balancer using the pro-engine lb_set command

Define a filter to send traffic into the load balancer

Some examples are presented next. See the Director Pro CLI Command Reference manual for all of the details of the pro-engine command.

To balance input traffic to four tools:

1. Type pro-engine lb_set mode=clear.

2. Type commit. The Pro engine configuration is cleared.

3. Type pro-engine lb_set mode=flow_con. The load balancer is configured to balance by flows identified by conversations (IP address pairs). The keywords for the other modes are flow_dst, flow_src, and packet_rr.

4. Type pro-engine lb_set ports=m.1-m.3,m.10. The four output ports are assigned to the load balancer.

5. Type pro-engine lb_set weight=1-4,1. All four output channels are assigned the same weight. If you want channels 3 and 4 to receive twice as much traffic as the others, type pro-engine lb_set weight=1-4,1 followed by pro-engine lb_set weight=3-4,2.

Note: It is important to set the weights of the output channels. The default value of the weights is 0, which results in no traffic going to any output.

The load balancer is now configured. Next, a filter must be set up to send traffic to the load balancer.

6. Type filter add in_ports=t.1 action=lb. A filter has been defined to direct traffic from a 10G port to the dynamic load balancer.

7. Type commit. The Pro engine configuration and the filter are activated; load balancing begins.

8. Type pro-engine show content=stats. The packet and byte counts for each of the load balancer outputs are displayed. Verify that the traffic is being evenly distributed to the first four load balancer outputs.
In the previous procedure, Steps 3, 4, and 5 could have been combined into a single CLI command. The next example does this and adds a hot spare tool to the deployment.

To balance input traffic to four tools and include a hot spare tool:

1. Type pro-engine lb_set mode=clear.

2. Type commit. The Pro engine configuration is cleared.

3. Type pro-engine lb_set mode=flow_con ports=m.1-m.3,m.10 weight=1-4,1 spare=m.5. The load balancer is configured with the hot spare tool assigned to port m.5.

4. Type filter add in_ports=t.1 action=lb. A filter has been defined to direct traffic from a 10G port to the dynamic load balancer.

5. Type commit. The Pro engine configuration and the filter are activated and load balancing begins.

If the link goes down on any of the active output ports m.1, m.2, m.3, or m.10, the traffic going to that output will be diverted to port m.5. When the link comes back up, the traffic is returned to the original port and m.5 becomes idle again.

In the next example, the same load balancing configuration is operated in overflow mode. The spare port is still allocated and will take over if any of the active ports loses link.

To balance input traffic to two tools, but add a third and fourth tool as the load grows:

1. Type pro-engine lb_set mode=clear.

2. Type commit. The Pro engine configuration is cleared.

3. Type pro-engine lb_set mode=flow_con ports=m.1-m.3,m.10 weight=1-4,1 spare=m.5 threshold=80 min_ports=2. Since the threshold is set to a non-zero value (it is set to 80 percent), the load balancer will operate in overflow mode.

Note: Once a port has been added to the load balancing set, it stays active even if the utilization drops below the threshold. This behavior is necessary to ensure that flows stay intact.

4. Type filter add in_ports=t.1 action=lb. A filter has been defined to direct traffic from a 10G port to the dynamic load balancer.

5. Type commit. The Pro engine configuration and the filter are activated and load balancing begins.

Initially the first two output ports, m.1 and m.2, receive all of the traffic. When the aggregated traffic on m.1 and m.2 exceeds 1.6 Gbps [80% x (1 Gbps + 1 Gbps)], traffic begins being directed to m.3 as well. When the aggregated traffic on m.1, m.2, and m.3 exceeds 2.4 Gbps [80% x (1 Gbps + 1 Gbps + 1 Gbps)], traffic is directed to all four ports.
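For reference, the two trigger points quoted above can be reproduced with a trivial calculation (assuming, as in this example, that the monitor ports are 1 Gbps ports):

threshold = 0.80
port_capacity_gbps = [1.0, 1.0, 1.0]            # m.1, m.2, m.3 assumed 1 Gbps each
print(threshold * sum(port_capacity_gbps[:2]))  # 1.6 Gbps: m.3 joins above this load
print(threshold * sum(port_capacity_gbps[:3]))  # 2.4 Gbps: m.10 joins above this load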
In this final example, the same overflow load balancing configuration is used, but with different inputs and outputs. A multi-unit daisy-chain system is assumed, with a Director Pro unit on each end of the daisy-chain (UIDs 1 and 5) and three Director units in between (UIDs 2, 3, and 4). The load balancer's input traffic is aggregated from several ports in different units. The traffic is filtered for the TCP protocol, and DPI is used to select traffic that contains the word "Free" twice, with at least 100 bytes between the two instances of the word. The traffic is load balanced to four outputs in different units of the daisy-chain. Three of the outputs are 1G ports, and one is a 10G port. The 10G output port is sent four times as much traffic as the 1G ports.

To configure the application described in the previous paragraph:

1. Type pro-engine lb_set mode=clear.

2. Type commit. The Pro engine configuration is cleared.

3. Type pro-engine lb_set mode=flow_con ports=u1.m.1,u2.m.10,u3.n.5,u4.t.1 weight=1-4,1 spare=m.5 threshold=80 min_ports=2 uid=5. The dynamic load balancer in the Director Pro at the tail of the daisy-chain (uid=5) is configured as in the previous example, but with different input and output ports.

4. Type pro-engine lb_set weight=4-4,4. The fourth load balancer output port, which is the 10G port u4.t.1, is weighted to receive four times as much traffic as each of the other ports.

5. Type filter add in_ports=u1.t.1,u2.t.1,u3.t.1 action=lb ip_protocol=6 pro_pat_string=free pro_pat_skip=0 pro_pat_anchor=off pro_pat_ignore_case=on pro_pat_string=free pro_pat_skip=100 pro_pat_anchor=off pro_pat_ignore_case=on. The necessary filter has been defined.

6. Type commit. The Pro engine configuration and the filter are activated and load balancing begins.
In the previous example, the 10G output port is the last port to be added to the load balancing set as the bandwidth grows. It is important to list all of the 1G ports before the 10G port, because once the 10G port is in the active set, its capacity can cause the 1G ports to be overrun. To illustrate this point, suppose the 10G port were listed second instead of last in the Pro engine ports list. With weights of 1-1-1-4, each 1G port receives one seventh of the traffic and saturates at 1 Gbps, so the maximum aggregated, load-balanced bandwidth that can be carried by the full output port set without loss is 7G. The load balancer starts off using one 1G and one 10G port. The 1G port saturates at 1G, at which point the 10G port is carrying 4G, and the aggregate load is 5G. But 5G does not pass the threshold (80% x the bandwidth of the two active ports = 80% x 11G = 8.8G), so the remaining two 1G ports will not be switched in to carry some of the load from the saturated 1G port. Packets will be dropped, and the full 7G capacity of the output port set can never be used.

Summary

This paper has explained what dynamic load balancing for monitoring applications is and how it compares to static load balancing techniques. It describes important features provided by Director Pro's dynamic load balancing engine, including link-state awareness and N+1 redundancy for high availability. A key take-away is that a complete monitoring load balancing solution must take into account the characteristics of the traffic, the capacities of the tools, and the failure modes the deployment must survive. There is much more to an effective real-world solution than simply checking a box that says "load balance." Net Optics engineers are happy to analyze your load balancing application and help you develop the best solution to meet your requirements and keep your monitoring infrastructure from becoming a bottleneck as traffic volumes increase.

Distributed by:
Network Performance Channel GmbH
Ohmstr., 63225 Langen, Germany
T: +49 603 906 7
netoptics@np-channel.com
www.network-taps.eu

For further information about monitoring load balancing solutions:
Net Optics, Inc.
5303 Betsy Ross Drive
Santa Clara, CA 95054
(408) 737-7777
info@netoptics.com