HP-UX 11i TCP/IP Performance White Paper


Contents

1 Introduction
  1.1 Intended Audience
  1.2 Organization of the document
  1.3 Related Documents
  1.4 Acknowledgements
2 Out of the Box TCP/IP Performance Features for HP-UX Servers
  2.1 TCP Window Size and Window Scale Option (RFC 1323)
  2.2 Selective Acknowledgement (RFC 2018)
  2.3 Limited Transmit (RFC 3042)
  2.4 Large Initial Congestion Window (RFC 3390)
  2.5 TCP Segmentation Offload (TSO)
  2.6 Packet Trains for IP fragments
3 Advanced Out of the Box Scalability and Performance Features
  3.1 TOPS
    3.1.1 Configuration Scenario for TOPS
    3.1.2 socket_enable_tops Tunable
  3.2 STREAMS NOSYNC Level Synchronization
    3.2.1 IP NOSYNC synchronization
  3.3 Protection from Packet Storms
    3.3.1 Detect and Strobe Solution
    3.3.2 HP-UX Networking Responsiveness Features
    3.3.3 Responsiveness Tuning
  3.4 Interrupt Binding and Migration
    3.4.1 Configuration Scenario for Interrupt Migration
    3.4.2 Cache Affinity Improvement
4 Improving HP-UX Server Performance
  4.1 Tuning Application and Database Servers
    4.1.1 Tuning Application Servers
    4.1.2 Tuning Database Servers
  4.2 Tuning Web Servers
    4.2.1 Network Server Accelerator HTTP
    4.2.2 Socket Caching for TCP Connections
    4.2.3 Tuning tcphashsz
    4.2.4 Tuning the listen queue limit
    4.2.5 Using MSG_EOF flag for TCP Applications
  4.3 Tuning Servers in Wireless Networks
    4.3.1 Smoothed RTO Algorithm
    4.3.2 Forward-Retransmission Timeout (F-RTO)
5 Tuning Applications Using Programmatic Interfaces
  5.1 sendfile()
  5.2 Polling Events
  5.3 send() and recv()
  5.4 Socket Buffers
    5.4.1 Data Buffering in Sockets
    5.4.2 Controlling Socket Buffer Limits
    5.4.3 System Socket Buffer Tunables
  5.5 Effective use of the listen backlog value
6 Monitoring Network Performance
  6.1 Monitoring network statistics
    6.1.1 Monitoring TCP connections with netstat -an
    6.1.2 Monitoring protocol statistics with netstat -p
    6.1.3 Monitoring link level statistics with lanadmin
  6.2 Monitoring System Resource Utilization
    6.2.1 Monitoring CPU Utilization Using Glance
    6.2.2 Monitoring CPU statistics using Caliper
    6.2.3 Monitoring Memory Utilization using Glance
    6.2.4 Monitoring Memory utilization using vmstat
    6.2.5 Monitoring Cache Miss Latency
    6.2.6 Monitoring other resources
  6.3 Measuring Network Throughput
    6.3.1 Measuring Throughput with Netperf Bulk Data transfer
    6.3.2 Measuring Transaction Rate with Netperf request/response
    6.3.3 Key issues for throughput with Netperf traffic
  6.4 Additional Monitoring Tools
Appendix A: Annotated output of netstat -s (TCP, UDP, IP, ICMP)
Appendix B: Annotated output of ndd -h and discussions of the TCP/IP tunables
Table 1: Summary of TCP/IP Tunables
Table 2: Operating System Support for TCP/IP Tunables
Revision History

1 Introduction

This white paper is intended as a guide to tuning networking performance at the network and transport layers. This includes IPv4, IPv6, TCP, UDP, and related protocols. Some topics touch on other areas, including socket interfaces, network interface drivers, and application protocols; however, those are not the focus of this paper. Other information is available for these subsystems as referenced below.

1.1 Intended Audience

This white paper is intended for the following:

- Administrators responsible for supporting or tuning the internal workings of the HP-UX networking stack
- Network programmers, for example those who write directly to the TCP or UDP protocols using socket system calls
- HP-UX network and system administrators who want to supplement their knowledge of HP-UX configuration options

NOTE: This white paper is specific to performance tuning, and is not a general guide to HP-UX network administration.

1.2 Organization of the document

This document is organized as follows:

Chapter 1: Introduction.
Chapter 2: Provides information on out of the box TCP/IP performance features.
Chapter 3: Provides information on advanced out of the box scalability and performance features.
Chapter 4: Provides recommendations on tuning HP-UX servers.
Chapter 5: Provides information on tuning applications using programmatic interfaces.
Chapter 6: Provides information on how to monitor and troubleshoot network performance on HP-UX.
Appendix A: Provides a detailed description of netstat statistics and related tuning.
Appendix B: Provides a detailed description of the TCP/IP ndd tunables.

1.3 Related Documents

The following documentation supplements the information in this document:

- HP-UX 11i v3 performance further increases your productivity for improved IT business value
- Performance and Recommended Use of AB287A 10 Gigabit Ethernet Cards
- HP Auto Port Aggregation Performance and Scalability White Paper
- Network Server Accelerator White Paper

RFCs related to TCP/IP performance:

- RFC 1323: TCP Extensions for High Performance
- RFC 2001: TCP Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery Algorithms

- RFC 2018: TCP Selective Acknowledgement Options
- RFC 2861: TCP Congestion Window Validation
- RFC 3042: Enhancing TCP's Loss Recovery Using Limited Transmit
- RFC 3390: Increasing TCP's Initial Window
- RFC 3782: The NewReno Modification to TCP's Fast Recovery Algorithm
- RFC 4138: Forward RTO-Recovery (F-RTO): An Algorithm for Detecting Spurious Retransmission Timeouts

1.4 Acknowledgements

Most of the information in Appendix A and Appendix B has been derived from the "Annotated Output of 'ndd -h'" and "Annotated Output of 'netstat -s'" documents written by Rick Jones. You can find these documents at:

ftp://ftp.cup.hp.com/dist/networking/briefs/annotated_ndd.txt
ftp://ftp.cup.hp.com/dist/networking/briefs/annotated_netstat.txt

2 Out of the Box TCP/IP Performance Features for HP-UX Servers

The HP-UX networking stack is engineered and tested for optimum performance in an enterprise mission-critical environment. HP-UX 11i v3 exhibits excellent NFS server performance and excellent results in the TPC-C benchmark, a measurement of intensive online transaction processing (OLTP) in a database environment. Typically, OLTP includes a mixture of read-only or update, short or long, and interactive or deferred database transactions. As the benchmark results demonstrate, 11i v3 contains significant networking performance improvements and optimizations for database applications.

There are many out of the box performance features introduced in HP-UX 11i. Users do not need to configure or tune any attributes in order to see the performance improvement from these features. The networking stack gracefully adapts to different networking needs in an enterprise, from noisy low-bandwidth wireless environments to high-bandwidth high-throughput datacenter environments.

The TCP/IP performance features described in this chapter improve the performance of HP-UX servers, including database servers, application servers, NFS servers, web servers, mail servers, DNS servers, ftp servers, DHCP servers, gateways, and firewall systems.

2.1 TCP Window Size and Window Scale Option (RFC 1323)

TCP performance depends not only on the transfer rate itself, but also on the product of the link bit rate and the round-trip delay, or latency. This "bandwidth-delay product" measures the amount of data that would "fill the pipe"; it is the buffer space required on the sender and receiver systems to obtain maximum throughput on the TCP connection over the path, i.e., the amount of unacknowledged data that TCP must handle in order to keep the pipeline full. TCP performance problems arise when the bandwidth-delay product is large. We refer to an Internet path operating in this region as a "long, fat pipe".

In order to improve the performance of a network with a large bandwidth-delay product, the TCP window size needs to be sufficiently large. HP-UX supports the TCP window scale option (RFC 1323), which increases the maximum TCP window size up to approximately 1 gigabyte, or 1,073,725,440 bytes (65,535 * 2^14). When HP-UX initiates a TCP connection (an active open), it always sends a SYN segment with the window scale option. Even when the real window size is less than 65,536, the window scale option is used with the scale factor set to 0, because advertising a 64K window with a window scale option of 0 is better than advertising a 64K window without one: it tells the peer that the window scale option is supported. When HP-UX responds to a connection request (a passive open), it accepts SYN segments with the window scale option.

For the receiving TCP, the default receive window size is set by the ndd tunable tcp_recv_hiwater_def. Applications can change the receive window size with the SO_RCVBUF socket option. To fully utilize the bandwidth, the receive window needs to be sufficiently large for a given bandwidth-delay product. For the sending TCP, the default send buffer size is set by the ndd tunable tcp_xmit_hiwater_def, and applications can change the size with the SO_SNDBUF setsockopt() option. By setting the send socket buffer sufficiently large for a given bandwidth-delay product, the transport is better positioned to take full advantage of the remote TCP's advertised window.
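To make the sizing rule concrete, here is a minimal C sketch that applies an example bandwidth-delay product to a socket with SO_SNDBUF and SO_RCVBUF. The 1 Gbit/s link rate and 50 ms round-trip time are arbitrary example figures, not recommendations; the resulting sizes remain subject to the system socket buffer limits discussed in chapter 5, and the options must be set before the connection is established for the window scale factor to be negotiated.

    #include <stdio.h>
    #include <sys/socket.h>

    /* Size socket buffers to (at least) the path's bandwidth-delay
     * product.  Example path: 1 Gbit/s link with a 50 ms RTT. */
    int size_buffers(int sock)
    {
        double bits_per_sec = 1e9;    /* example link rate  */
        double rtt_sec      = 0.050;  /* example round trip */
        int bdp = (int)(bits_per_sec / 8.0 * rtt_sec);  /* ~6.25 MB */

        if (setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &bdp, sizeof(bdp)) < 0 ||
            setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &bdp, sizeof(bdp)) < 0) {
            perror("setsockopt");
            return -1;
        }
        return 0;
    }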

2.2 Selective Acknowledgement (RFC 2018)

TCP may experience poor performance when multiple packets are lost from one window of data. Selective Acknowledgment (SACK), described in RFC 2018, is effective in recovering from the loss of multiple segments in a window. It accomplishes this by extending TCP's original, simple "ACK to the first hole in the data" algorithm with one that describes holes past the first lost segment. This information, sent from the receiver to the sender as an option field in the TCP header, allows the sender to retransmit lost segments sooner. In addition, the acknowledgment of segments after the first hole in sequence space allows the sender to avoid retransmitting segments which were not lost.

SACK is configured in HP-UX with the ndd tunable tcp_sack_enable, which can be set to the following values:

0: Never initiate, nor accept, the use of SACK
1: Always ask for, and accept, the use of SACK
2: Do not ask for, but accept, the use of SACK (default)

The default value of 2 is somewhat conservative, as the system will not initiate the use of SACK on a connection. It may be necessary to keep this value in some cases: other TCP implementations which do not support SACK may be improperly implemented and may not ignore this option when it is requested in a TCP SYN segment. However, if the remote side initiates the connection and asks for SACK, HP-UX will honor that request. A tcp_sack_enable value of 1 should be used if you want the system to use SACK for connections initiated from the system itself (i.e., applications on the system calling connect() or t_connect()).
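For example, to have the system also request SACK on its own outbound connections, the tunable can be set with ndd in the same command style used elsewhere in this paper (a sketch; review the tcp_sack_enable discussion in Appendix B before changing production systems):

    # ndd -set /dev/tcp tcp_sack_enable 1
    # ndd -get /dev/tcp tcp_sack_enable
    1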

2.3 Limited Transmit (RFC 3042)

HP-UX implements TCP Limited Transmit (RFC 3042), which provides faster recovery from packet loss. When a segment is lost, exercising the TCP Fast Retransmit algorithm (RFC 2001) is much faster than waiting for the TCP retransmit timeout. In order to trigger Fast Retransmit, three duplicate acknowledgments need to be received. However, when the congestion window is small, enough duplicate acknowledgments may not be produced. The Limited Transmit feature attempts to induce the necessary duplicate acknowledgments in such situations. For each of the first two duplicate ACKs, Limited Transmit sends a new data segment (if new data is available). If a previous segment has in fact been lost, these new segments will induce additional duplicate acknowledgments, improving the chances of initiating Fast Retransmit. Limited Transmit can be used either with or without the TCP selective acknowledgement (SACK) mechanism.

2.4 Large Initial Congestion Window (RFC 3390)

The congestion window is the flow control imposed by the sending TCP entity. When TCP starts a new connection, or restarts transmission after a long idle period, it starts conservatively by sending a few segments, i.e., the initial congestion window, and does not utilize the whole window advertised by the receiving TCP.

The large initial congestion window (RFC 3390) increases the permitted initial window from one or two segments to four segments or 4380 bytes, whichever is less. For example, when the MSS is 1460 bytes, the TCP connection starts with three segments (3*1460=4380). By default, HP-UX uses the large initial congestion window. This is configured by the ndd tunable tcp_cwnd_initial. The large initial congestion window is especially effective for connections that need to send only a small quantity of data. For example, sending 4KB of data then takes just one round-trip time (RTT), while without the large initial window it requires an extra RTT, which can have a significant performance impact on long delay networks.

2.5 TCP Segmentation Offload (TSO)

TCP Segmentation Offload (TSO) refers to a mechanism by which the TCP host stack offloads certain portions of outbound TCP packet processing to the Network Interface Card (NIC), reducing host CPU utilization. It allows the HP-UX transport implementation to create packets up to 32 KB in length that can be passed down to the driver in one write. This feature is also referred to as Large Send Offload, Segmentation Offload, Multidata Transmit, or Re-segmentation.

TSO increases the efficiency of the HP-UX kernel by allowing 22 segments with the TCP maximum segment size (MSS) of 1460 bytes to be processed at one time, saving 21 passes down the stack. Using a Jumbo Frame MTU of 9000 bytes, this translates to 3 to 4 passes for a 32k byte send. This feature can significantly reduce the server load for applications that transmit large amounts of data from the system. Examples of such applications include web servers, NFS, and file transfer applications. If the link is primarily used for bulk data transfers, turning on this feature improves CPU utilization. The performance gain is smaller for shorter application messages transmitted over interactive streams.

The NIC must be capable of providing this feature. To enable it, use the following commands:

11i v1 and 11i v2:
# lanadmin -X send_cko_on ppa
# lanadmin -X vmtu <size> ppa

11i v3:
# nwmgr -s -A tx_cko=on -c interface_name
# nwmgr -s -A vmtu=<size> -c interface_name

If the card is not TSO-capable, the "vmtu" option will not be supported. For more information on TSO-enhanced cards and drivers, search the HP-UX networking documentation for TSO.

2.6 Packet Trains for IP fragments

Packet Trains are used when sending IP fragments. They solve the problem where the driver may not be able to handle a burst of IP fragments. Previously, when processing a large datagram, IP would fragment the datagram to the current MTU size and send each fragment down to the driver before processing the rest of the datagram. This could cause a problem if the driver was unable to process one or more of the IP fragments during outbound processing of these individual fragments.

A single fragment dropped by the driver causes the entire datagram to be unrecoverable. When the remote machine picks up the remaining fragments, they will be queued in its reassembly queue, according to the IP protocol. If this happens frequently, the IP reassembly queue on the receiving side will be exhausted; this, in turn, results in good packets being dropped because of the full buffer on the receiving side.

To mitigate this problem, HP-UX uses Packet Trains. As each fragment is carved off, it is linked with the other fragments for this write to form a packet train, until the entire datagram is processed. IP then asks the driver whether all of the fragments can be accommodated in one request. If so, IP passes down the packet train and the driver sends it to the card. If the driver cannot accommodate the entire packet train, the entire train is discarded. This reduces host CPU utilization.

This feature is enabled by default, provided that the driver is capable of handling this request. Currently only 1000 Mbit or faster interfaces support this feature. To see if a driver has this feature enabled, enter the following command:

# ndd -get /dev/ip ip_ill_status

If the output includes the keyword TRAIN, the driver supports this feature. For example (pointer and counter values elided):

ILL   rq    wq    upcnt mxfrg err memavail ilmcnt name
...   ...   ...   ...   ...   ... ...      ...    RUNNING BROADCAST CKO MULTICAST CKO_IN TRAIN lan0

3 Advanced Out of the Box Scalability and Performance Features

The HP-UX networking stack has been engineered for the best scalability and performance on high-end servers. It can gracefully scale from a few processors to 256 processors, and from 10Base-T to 10 Gigabit Ethernet. To address the varying configuration requirements of different types of workloads on high-end servers, HP-UX provides the following advanced performance features for a highly scalable TCP/IP stack:

- TOPS
- NOSYNC
- Protection from Packet Storms
- Interrupt Binding

3.1 TOPS

Thread-Optimized Packet Scheduling (TOPS) increases the scalability and performance of TCP and UDP socket applications sharing a high-bandwidth network interface on multiprocessor systems. The goal is to move inbound packet processing to the same processor that runs the receiving user application.

IP networking stacks, such as the stack implemented on HP-UX, operate as multiplexers, which route packets between network interface cards (NICs) and a set of user endpoints. HP-UX achieves excellent scalability by scheduling multiple applications across a set of processors, and, for outbound data, applications scale well when sharing a NIC. However, for inbound data, the configuration of each NIC determines which processor it interrupts. For most NICs, a single processor is interrupted as packets come in from the network. In the absence of TOPS, this processor does the protocol processing for each incoming packet. Since a single high-speed NIC can process incoming data for many connections, the processor interrupted by this NIC can easily become a bottleneck, preventing the maximum network throughput or packet rate from being realized.

In order to improve scalability in this case, the TOPS mechanism allows the driver to quickly hand off packets to the processor where the application is most likely running, and return to processing packets coming from the wire. In most cases, a single processor will then perform all memory accesses to the application data inside each packet, leading to a more efficient use of the memory and cache subsystems. The TOPS mechanism is used by all TCP and UDP sockets without application modification or recompilation.

3.1.1 Configuration Scenario for TOPS

TOPS is most beneficial for system configurations where the number of CPUs is much greater than the number of NICs, such as a 16-way system with one or two Gigabit cards. Inbound packet processing is spread among the CPUs based on where the socket application processes are scheduled, leading to a more even distribution of the processing load in MP-scalable and network-intensive applications.

3.1.2 socket_enable_tops Tunable

TOPS is enabled by default on HP-UX 11i, and requires no action on the part of an application to take advantage of this feature. On the more recent patches of 11i v1 and 11i v2, the ndd tunable socket_enable_tops is available to turn off or alter the behavior of TOPS. In 11i v3, the equivalent tunable will be provided in a future patch.

This may be useful in the cases described below where specific conditions make the TOPS default less than optimal. Refer to Table 2 (at the end of Appendix B) for the patch level information for the ndd tunable socket_enable_tops.

It should not be necessary to disable TOPS. However, there are cases where the scalability issue addressed by TOPS does not exist. When there are multiple NICs on a system, it is possible that no NIC interrupt will become a processing bottleneck even with TOPS disabled (socket_enable_tops = 0). In these cases, there may be some efficiency gained by avoiding the overhead of TOPS and allowing more of the processing to be done in the NIC interrupt context before switching to the processor running the application. In the most efficient, highest-performing case of the application and NIC being assigned to the same processor, however, there is no need for TOPS to switch processors, and therefore the TOPS tunable setting will have no effect on performance.

Another consideration for TOPS tuning is whether the NIC is configured for checksum offload (CKO) on inbound data. If CKO is enabled, TOPS will provide less benefit for the memory cache, as there will not be a need to read the payload data during inbound TCP/UDP processing.

As an application is rescheduled over time between different processors, or in cases where threads executing on different processors share a socket, TOPS may not operate optimally in determining which processor to switch to in order to match where the system call will execute to receive the data. In most cases, the default TOPS setting for 11i v3 (socket_enable_tops = 2) works best, following the application to its current CPU. In cases where sockets are being opened and closed at a high rate, it may be possible to gain some efficiency by fixing the processor assigned to each connection by TOPS, using the ndd setting socket_enable_tops = 1, which is the default for 11i v1 and 11i v2. However, these cases may be rare, and can only be identified by experimentation, or by detailed measurement and analysis of the performance of the HP-UX kernel. As a result, changing from the default setting to socket_enable_tops = 2 on 11i v1 and 11i v2 will provide equal or better performance in the majority of cases.
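On a patch level that provides the tunable, the 11i v3 behavior can be selected on 11i v1 and 11i v2 in the same command style used elsewhere in this paper (a sketch; this assumes the socket_* tunables are administered through the /dev/sockets device, as socket_caching_tcp is in section 4.2.2):

    # ndd -get /dev/sockets socket_enable_tops
    # ndd -set /dev/sockets socket_enable_tops 2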
3.2 STREAMS NOSYNC Level Synchronization

Previously, the STREAMS framework supported execution of only one instance of the put procedure at a time for a given STREAMS queue. For multiple requests to the same queue, STREAMS synchronized the requests depending on the synchronization level of the module; synchronization ensured that only one request was executed at a time. With high-speed I/O, the synchronization limits imposed by STREAMS could easily become a performance bottleneck.

The restriction imposed by these previous STREAMS synchronization methods has been removed by a new synchronization method, NOSYNC, in 11i v3 and the latest patches for 11i v1 and 11i v2. If a module uses NOSYNC level synchronization, the STREAMS framework can concurrently execute multiple instances of its queue's put procedure and a single instance of the same queue's service procedure. This requires the modules to protect any module-specific data that is shared between multiple instances of put procedures, or between the put and service procedures.

3.2.1 IP NOSYNC synchronization

With NOSYNC level synchronization, the IP module can handle requests simultaneously when multiple requests arrive on the same queue. This feature significantly improves network throughput, reaching near link speed for high-speed network interfaces such as multi-port Gigabit cards in an Auto Port Aggregation (APA) configuration or 10 Gigabit cards.

To realize the performance gain from this feature, all modules (e.g., DLPI, IPFilter) on the networking stack between the IP layer and the LAN driver must have NOSYNC enabled. HP recommends that providers of modules pushed on the DLPI stream create or modify the modules to operate at the NOSYNC synchronization level so that the NOSYNC performance gain is not lost.

For more details about writing a NOSYNC module/driver, refer to the STREAMS/UX Programmer's Guide, available from the HP documentation website.

Patch level information for the NOSYNC feature:

11i v1:
- STREAMS: PHNE_35453 or higher
- ARPA Transport: PHNE_35351 or higher
- DLPI: PHNE_33704 or higher
- IPFilter: A or later

11i v2:
- STREAMS: PHNE_34788 or higher
- ARPA Transport: PHNE_35765 or higher
- DLPI: PHNE_33429 or higher
- IPFilter: A or later
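Whether a given patch is installed can be checked by listing installed patches with swlist(1M); for example, using the 11i v2 ARPA Transport patch from the list above (a usage sketch; output format varies by release):

    # swlist -l patch | grep PHNE_35765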

3.3 Protection from Packet Storms

When the network is overloaded, or when defective network components send out a flood of packets, a server can see an inbound packet storm of network traffic. This can have a serious impact on the performance of a mission-critical server, as a great deal of CPU power is consumed and an excessive amount of time is spent in interrupt context to process these packets. HP-UX has an extensive set of features to minimize the negative effects of many types of packet storms. Using the default capabilities of HP-UX 11i v3, the system is well protected against this, as described below.

3.3.1 Detect and Strobe Solution

The Detect and Strobe kernel functionality is described in the HP-UX 11i v3 Release Notes, Chapter 5. This feature is designed to limit the amount of processor time spent in interrupt context to a maximum percentage over time. This provides better responsiveness for time-sensitive applications and high-priority kernel threads that could otherwise be delayed by interrupt activity. A tunable parameter, documented in the man page intr_strobe_ics_pct(5), is provided to control the operation of Detect and Strobe. It is enabled by default, and it is documented that only HP Field Engineers should change the value of this tunable.

3.3.2 HP-UX Networking Responsiveness Features

Several features of the networking kernel code contribute to protecting the system from packet storms, and these have been improved in HP-UX 11i v3. In general, synchronization points exist in the protocol layers to serialize the processing of packets when required. For example, to maintain the state of a particular TCP connection, inbound and outbound segments are processed serially as they are received by the upper or lower protocol layers, and queuing can occur. The queued backlog of packets could become a responsiveness issue, particularly when processed in an interrupt context. However, by setting reasonable limits on queue lengths and eliminating points of contention to allow more parallelism in TCP/IP processing, HP-UX has eliminated many causes of delay in the kernel, even when the system is under extreme load. In addition, the Detect and Strobe feature will be activated if the incoming traffic is more than the system can handle.

3.3.3 Responsiveness Tuning

The cost of providing responsiveness for the overall system in the case of packet storms is that incoming network interrupts can be delayed or even dropped. This usually occurs in cases where the incoming packets would eventually be dropped anyway due to a kernel queue overflow, memory shortage, or network protocol timeout. Given that dropping packets is inevitable, dropping them as soon as possible in the NIC uses fewer operating system resources, and is therefore the most desirable response. This is particularly true in the case of packet storms consisting of unwanted packets, for example from a malfunctioning switch, where the loss of the packets themselves is of little or no consequence. In the case of reliable protocols such as TCP, the dropped data should be recovered through retransmission, and the protocol should help relieve the overflow by slowing down the connection using the TCP congestion window. In the case of an unreliable datagram protocol such as UDP, the loss of data may be noticeable at the user or application level.

The logging messages described in intr_strobe_ics_pct(5) can be used to determine when the Detect and Strobe feature has been activated due to excessive interrupt activity. In addition, HP Support can retrieve network-specific kernel statistics to determine whether packets are being dropped due to queue overflows. If responsiveness is not critical on the system, it may be possible to gain a small amount of performance by tuning intr_strobe_ics_pct(5) to allow a higher maximum percentage of interrupt processing. Other approaches to increasing responsiveness and performance include using Interrupt Migration, as described in section 3.4, and binding critical applications to processors or processor sets where interrupt activity is less likely to be a problem.

Many of the features described above are available in HP-UX 11i v1 and 11i v2 at the most recent patch levels. The HP-UX Reference for HP-UX 11i v2 (September 2004) describes intr_strobe_ics_pct(5), which is disabled by default in that version. In September 2006, a Detect and Strobe solution for 11i v2 was released, based on the May 2005 update release, with some additional recommended patches. A similar responsiveness solution was released for HP-UX 11i v1 as a set of patches and optional products. The Interrupt Migration product for 11i v1 is one of these optional products, available without cost from software.hp.com. Because of the set of patches required, and the recommendation that only the HP Field modify intr_strobe_ics_pct(5), HP Support should be contacted if a responsiveness solution is required, for all HP-UX 11i releases.

3.4 Interrupt Binding and Migration

In HP-UX 11i, the system administrator has the ability to assign interface cards to interrupt specific processors, overriding the default assignment performed by the operating system. The command used for this assignment, called Interrupt Migration, is intctl(1M). The default assignments done by the operating system at boot time spread interrupts evenly across a set of processors, and work well in most cases.
However, for optimal network performance, it may be necessary to change this, taking into consideration the overall system and application workload.

3.4.1 Configuration Scenario for Interrupt Migration

A significant amount of network protocol processing for inbound packets is done as part of the interrupt from the network interface. In order to avoid a CPU bottleneck when there is heavy network traffic, Interrupt Migration can be used to move interrupts away from heavily loaded processors. Examples of this load balancing include configuring two busy network interfaces to interrupt separate processors, or scheduling network interrupts away from a processor that is busy with unrelated application processing. In the case of an IP subnet configured using Auto Port Aggregation (APA), maximum throughput can be achieved by assigning interrupts for each interface in the aggregate to a separate processor.

The 10 Gigabit Ethernet driver (ixgbe) for HP-UX provides load balancing through the destination-port-based multiqueue feature. This allows multiple processors to be interrupted by the 10 Gigabit card, with the incoming traffic separated into multiple flows based on the TCP destination port. Only TCP is supported by the destination port multiqueue feature. This increases the maximum throughput of the 10 Gigabit card, which would otherwise be limited by the interrupt processing speed of a single CPU. The "10GigEthr-00 (ixgbe) 10 Gigabit Ethernet Driver" release notes explain the configuration of the multiqueue feature.

3.4.2 Cache Affinity Improvement

Network protocols are layered, and data and control structures are shared between these layers. When these structures are brought into a processor's cache, less time is spent stalling for cache misses as the remaining protocol layers process the packet. Since interrupts for a NIC are bound to a processor, there is a good possibility that some structures will still be in the correct processor's cache when the next packet for a given connection arrives. However, when an application receives the data, there is the possibility of additional cache misses, as the HP-UX scheduler assigns application threads to processors independently of the interrupt bindings.

To get the most efficient operation from a cache standpoint, it is beneficial to have the interrupt assigned where the busiest applications are consuming the data. Using mpctl(2) on a per-application basis, and optionally defining processor sets, applications can be restricted to run on specific processors. If this does not result in a CPU bottleneck, it is most efficient both for the application and from a system-wide perspective. On the other hand, there is little cache sharing between network interfaces, so there will be little benefit from cache affinity if multiple network interfaces interrupt the same processor.
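To illustrate the mpctl(2) binding described above, here is a minimal C sketch that binds the calling process to one processor. The target processor number is an arbitrary example; in practice it would be chosen to match where the busy NIC interrupts (as assigned with intctl(1M)).

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/mpctl.h>

    int main(void)
    {
        /* Number of processors (spus) configured on the system. */
        int nspus = mpctl(MPC_GETNUMSPUS, 0, 0);

        /* Example only: bind this process to processor 1, e.g. the
         * processor that the busy NIC interrupts. */
        spu_t target = 1;

        if (mpctl(MPC_SETPROCESS, target, getpid()) == -1) {
            perror("mpctl(MPC_SETPROCESS)");
            return 1;
        }
        printf("Bound pid %ld to processor %d of %d\n",
               (long)getpid(), (int)target, nspus);
        return 0;
    }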

4 Improving HP-UX Server Performance

4.1 Tuning Application and Database Servers

Many of today's enterprise applications are architected and built using the J2EE framework, which is designed for the mainframe-scale computing typical of large enterprises. The J2EE framework provides a way to architect solutions which are distributed, multi-tiered, and scalable. The diagram below shows an overview of the multi-tiered J2EE application architecture.

[Figure: multi-tiered J2EE architecture in an enterprise data center - web clients on the Internet connect to Tier 1 (web server), Tier 2 (application servers), and Tier 3 (database server).]

In such an architecture, the client tier typically consists of web browsers or traditional terminals used at points of sale, etc. In a typical deployment, the web and business tiers are either separate or hosted within a single physical server. Application servers normally run the business logic of an enterprise, and they communicate with backend database servers using application programming interfaces such as the Java Database Connectivity (JDBC) interfaces. Though both web servers and application servers can be hosted on a single physical server, the common practice is to separate them and run them on different physical servers for better performance and scalability of applications. In an actual deployment, there may be components such as a network load balancer which help balance the load among multiple application servers and/or web servers. The following diagram shows a typical physical view of such a deployment.

[Figure: physical deployment view - http/https traffic arrives from the Internet or leased lines through a firewall; the web server and load balancer sit in the open zone/DMZ, with the application server and database server in the MZ of the enterprise data center.]

4.1.1 Tuning Application Servers

The network traffic characteristics of a physical server used as an application server vary based on its usage context and the nature of the applications (business logic) it runs. Web applications are normally implemented using technologies such as servlets and JSP scripts. For example, users first connect to the web server, which in turn forwards the request to run an application. Such an application may be implemented as a servlet on an application server. Based on the application logic, the application server may need to access the back-end database server.

Typically, application servers communicate with front-end web servers or back-end database servers using a shared set of TCP connections, an approach known as connection pooling. Web servers reuse these connections to forward requests from different clients at different points in time. The connection pooling approach is preferred over creating new connections on demand for performance reasons. The number of TCP connections in the pool is often configurable and is based on the number of concurrent users that the system needs to support during peak load conditions.

Application server vendors may suggest a set of networking-related tunable parameters that are best suited to run the application server on a given OS platform. This section provides a set of guidelines on tuning network parameters for running application servers on HP-UX 11i. Most of the tunable parameters discussed below are queried or set using the ndd command on HP-UX. Please refer to Appendix B for more details on these tunable parameters.

tcp_time_wait_interval

A physical server has to support a large number of concurrent TCP connections if it is used to run both an application server and a web server simultaneously. tcp_time_wait_interval controls how long connections stay in the TIME_WAIT state before closing down. The frequent opening and closing of a large number of TCP connections, as is the case with web servers, may result in a large number of connections staying in TIME_WAIT state before getting closed.

Application server vendors may typically suggest tuning this parameter related to TCP's TIME_WAIT timer. With the default value of 60 seconds for tcp_time_wait_interval on HP-UX, the HP-UX stack can track literally millions of TIME_WAIT connections with no particular decrease in performance and only a slight cost in terms of memory. Please refer to Appendix B for further discussion of this tunable parameter.

tcp_conn_request_max

Depending upon the configuration of a physical server, application servers typically need to accept a large number of concurrent connections. The number of connections that can be queued by the networking stack is the minimum of the listen backlog and this tunable parameter, tcp_conn_request_max. Application server vendors may suggest a specific value for this tunable parameter; on HP-UX the default value is 4096, so it may not need a change. Use netstat -p tcp to monitor connections dropped due to listen queue full conditions, and increase the value of this tunable parameter if necessary. Refer to the "Effective use of the listen backlog value" discussion in chapter 5 for a detailed description of this tunable parameter.

tcp_xmit_hiwater_def

This parameter controls the amount of unsent data that triggers write-side flow control. For typical OLTP types of transactions (short request and short response) this parameter needs no tuning. Increasing this tunable enables large buffer writes. For Decision Support System (DSS) workloads (i.e., small query and large response), we recommend setting this tunable parameter to a value larger than the default of 32768. Please refer to Appendix B for further discussion of this tunable parameter.

tcp_ip_abort_interval

In certain deployment scenarios, backend database servers may be used in a highly available cluster configuration. tcp_ip_abort_interval is the maximum amount of time a sending TCP will wait before concluding that the receiver is not reachable. Application servers may use this mechanism to detect node or link failure and automatically switch the traffic over to a working database server in a cluster configuration. In a typical deployment, application servers may be communicating with database servers and web servers which are physically close. To speed up failure detection and the use of fail-over features in such environments, it may be desirable to set this tunable parameter lower than the default value of 10 minutes. However, it is not recommended to set this parameter lower than tcp_time_wait_interval. Please refer to Appendix B for further discussion of this tunable parameter.

tcp_keepalive_interval

When there is no activity on a connection, and if the application has requested the keepalive timer on the connection, TCP sends keepalive probes at tcp_keepalive_interval intervals to make sure that the remote host is still reachable and responding. Application servers may make use of this feature (SO_KEEPALIVE) to fail over quickly in cluster configurations when there is not much network traffic. As application servers typically keep a pool of long-standing TCP connections open with both web servers and database servers, it is desirable to detect node or link failures early even during periods of very low network traffic. The default value is 2 hours; however, some application server vendors suggest tuning this parameter to a much lower value (e.g., 900 seconds).
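The application-side half of this mechanism is the SO_KEEPALIVE socket option; a minimal C sketch follows. The probe timing itself still comes from the system-wide tcp_keepalive_interval tunable, not from the application.

    #include <stdio.h>
    #include <sys/socket.h>

    /* Request keepalive probes on a connected TCP socket.  The probe
     * interval is governed by the system-wide ndd tunable
     * tcp_keepalive_interval, not by the application. */
    int enable_keepalive(int sock)
    {
        int on = 1;
        if (setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0) {
            perror("setsockopt(SO_KEEPALIVE)");
            return -1;
        }
        return 0;
    }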

tcphashsz

tcphashsz controls the size of several hash tables maintained within the kernel. For better performance it is better to have larger tables, at the expense of more memory, when there is a large number of concurrent connections in the system. On modern-day servers, memory may not be a major constraint. When a web server and application servers run on the same physical machine, the suggested value for this tunable parameter is 32768. If the web server and application servers run on different machines, the number of concurrent connections on the application server may not be very large; in this case the default value (number of CPUs * 1024) should suffice. This parameter is set using the following command:

# kctune tcphashsz=32768

Please note that the system has to be rebooted for the new value to take effect; otherwise the system will continue to use the current value of the tcphashsz parameter. Refer to the Tuning tcphashsz section later in this chapter for more discussion on tuning tcphashsz.

4.1.2 Tuning Database Servers

There are several different database systems deployed on HP-UX today. Typically, networking is less of a bottleneck on a database server than I/O. Nevertheless, the following tuning may help improve overall efficiency from a networking perspective.

tcp_xmit_hiwater_def

This parameter controls the amount of unsent data that can be queued to the connection before subsequent attempts by the application to send data cause the call to block, or to return EWOULDBLOCK/EAGAIN if the socket is marked non-blocking. For typical OLTP types of transactions (short requests and short responses) this parameter needs no tuning. However, you may consider increasing it from the default value for DSS (Decision Support System) or BI (Business Intelligence) workloads that require a large amount of data to be transferred from the database server. Furthermore, this may help in data backups from database servers to an external storage device over network-attached storage (NAS).

socket_udp_rcvbuf_default

Cluster-based database technologies are becoming popular. Typically, the nodes of such database clusters communicate among themselves using UDP, and a large amount of data may be exchanged between server nodes connected through an interconnect. In this case you may want to consider increasing the tunable parameter socket_udp_rcvbuf_default, which defines the default receive buffer size for UDP sockets. If the command netstat -p udp shows socket overflows, it might be desirable to increase this tunable parameter. It should be noted that increasing the size of the socket buffer only helps if the overload condition is short and the burst of traffic is smaller than the socket buffer; increasing the socket buffer size will not help if the overload is sustained.

socket_udp_sndbuf_default

As mentioned above, cluster-based database technologies may use UDP to communicate between nodes in the cluster. This tunable parameter sets the default send buffer size for UDP sockets. The default value for this tunable parameter is 65536, which is optimal for the cluster-based database technologies used on HP-UX.
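As a sketch of the UDP buffer tuning described above (the value 262144 is an arbitrary example, and this assumes the socket_* tunables are administered through the /dev/sockets device, as socket_caching_tcp is in section 4.2.2):

    # ndd -get /dev/sockets socket_udp_rcvbuf_default
    # ndd -set /dev/sockets socket_udp_rcvbuf_default 262144

To make such a change persist across reboots, it would normally be added to /etc/rc.config.d/nddconf.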

4.2 Tuning Web Servers

As the demand for faster and more scalable web service increases, it is desirable to improve web server performance and scalability by integrating web server functionality into the operating system. Web servers are characterized by many short-lived connections, opening and closing TCP connections at a very fast rate. In a busy web server environment, there can be tens of thousands of TCP connections per second. The following features and configurations are recommended for optimizing web servers:

- Network Server Accelerator HTTP
- Socket Caching for TCP connections
- Increasing the tcphashsz value
- Increasing the listen queue limit
- MSG_EOF for TCP applications

4.2.1 Network Server Accelerator HTTP

The Network Server Accelerator HTTP (NSA HTTP) is a product that provides an in-kernel cache of web pages in HP-UX 11i. This section describes the performance improvements achievable with NSA HTTP and the system tuning needed to achieve them. The following list highlights the techniques NSA HTTP implements to achieve superior web server performance in HP-UX 11i:

- Serving content from RAM (main memory) eliminates disk latency and I/O bandwidth constraints.
- In-kernel implementation decreases transitions between kernel and user mode.
- Tight integration with the TCP protocol stack allows efficient event notification and data transfer. In particular, a zero-copy send interface reduces data transfer by allowing responses to be sent directly from the RAM-based cache.
- Deferred-interrupt context processing removes the overhead associated with threads.
- Re-use of data structures and other resources reduces the lengths of critical code paths.

The HTTP-specific portion of NSA HTTP is implemented as a DLKM module. In addition, the nsahttp utility is provided to configure and administer NSA HTTP. For a detailed description of the utility, refer to the nsahttp(1) man page. The NSA HTTP product is supported on HP-UX 11i and is available from software.hp.com.

Usage Scenarios

There are a number of ways NSA HTTP can be used in a web server environment. We briefly describe two scenarios to highlight the most typical usage.

Single System Web Server with NSA HTTP

The simplest scenario uses NSA HTTP and a conventional user-level web server process co-located on a single system. In this topology, NSA HTTP increases server capacity by increasing the efficiency of processing static requests. NSA HTTP provides a fast path in the kernel that bypasses normal processing of static requests at the user level. This fast path entails having NSA HTTP parse each HTTP request to determine whether it can be served from the kernel. Requests that cannot be served from the kernel are passed to the user-level server process. Adding the fast path in the kernel therefore introduces additional parsing and processing to the path for requests served at the user level. This overhead is not significant, and is more than compensated for by the increased efficiency when serving static requests.

Multiple Web Servers with Partitioned Content

High-traffic web sites typically feature multiple servers that are dedicated to specific purposes. A given set of servers, for example, may serve specific content such as images, advertisements, audio, or video. Dedicating servers to specific content types limits the total working set that must be delivered by any single server, and allows each server's hardware configuration to be tailored to its content. One common approach to partitioning content is to separate static and dynamic content. Servicing static content requests typically requires more I/O bandwidth and memory than servicing dynamic content; servicing dynamic content requests typically requires greater CPU capacity. A second typical usage scenario, usually associated with very-high-traffic web sites, is to deploy NSA HTTP on multiple web servers dedicated to serving static content, with a load balancer and/or web switch that routes user requests to the appropriate server. This approach is typically viable when the content of a site has already been manually partitioned among a set of specialized servers.

Tuning Recommendations

This section describes the NSA HTTP operating parameters that you can tune to improve performance.

Maximum NSA HTTP Cache Percentage (max cache percentage)

The maximum NSA HTTP URI (Uniform Resource Identifier, a term that encompasses URL) cache size is configured as a percentage of system memory. You can set the value for this parameter by editing /etc/rc.config.d/nsahttpconf or by using the nsahttp(1) command:

# nsahttp -C max_cache_percentage

By default, max_cache_percentage is 50 (50% of system memory). You should set the value for max_cache_percentage in conjunction with the system file cache settings (see filecache_max(5) for 11i v3, and dbc_min_pct(5) for 11i v1 and 11i v2). See sendfile() in section 5.1 of this document for additional information on file cache settings, as sendfile caching is done directly in the file cache, separately from caching done by NSA HTTP.

Cache Entry Timeout

NSA HTTP has a URI cache entry timeout value. If an entry is not accessed for a period longer than the timeout value, NSA may re-use (write over) the entry. For best performance, an optimal timeout value must be found. If it is too high, the cache may contain many stale entries. If it is too low, there may be excessive cache entry timeouts and increased cache misses. You can set the cache timeout by editing /etc/rc.config.d/nsahttpconf or by using the following nsahttp command:

# nsahttp -e cache_timeout

The cache_timeout value is set in seconds. For example, the command nsahttp -e 7200 sets the cache entry timeout to 7200 seconds (two hours).

Maximum URI Page Size

NSA HTTP allows you to limit the maximum size of each of the URI objects (web pages) stored in the cache. You can tune this value to optimize cache usage. You can set the maximum URI page size by editing /etc/rc.config.d/nsahttpconf or by using the following nsahttp command:

# nsahttp -m max_uri_page_size

The max_uri_page_size is specified in bytes. For example, the command nsahttp -m 2097152 causes NSA HTTP to cache only web pages of 2MB or less.

Performance Data

A simulated web server environment was used to measure the performance of NSA HTTP. The workload was a mix of static content (70%) and dynamic content (30%). The measurements were taken using web servers that implement copy avoidance when servicing static requests. The performance improvement was about 13-17%. On workloads with only static content, the performance improvement was approximately 60-70%. The performance improvements can be significantly greater when NSA HTTP is used with web servers that do not implement copy avoidance for servicing static requests.

4.2.2 Socket Caching for TCP Connections

There is a finite amount of operating system overhead in opening and closing a TCP connection (for example, in the processing of the socket(), accept(), and close() system calls) that exists regardless of any data transfer over the lifetime of the connection. For long-lived connections, the cost of opening and closing a TCP connection is not significant when amortized over the life of the connection. However, for a short-lived connection, the overhead of opening and closing a connection can have a significant performance impact.

HP-UX 11i implements a socket caching feature for better performance of short-lived connections such as web connections. A considerable amount of kernel resources (such as TCP and IP level data structures and STREAMS data structures) is allocated for each new TCP endpoint. By avoiding the allocation of these resources each time an application opens a socket, or receives a socket with a new connection through the accept() system call, a server can proceed more quickly to the data transfer phase of the connection. The socket caching feature for TCP connections saves the endpoint resources instead of freeing them, speeding up the closing function. Once the cache is populated, new TCP connections can use cached resources, speeding up the opening of a connection. TCP endpoint resources cached on the close of one TCP connection can be reused to open a new TCP connection by any application.

HP-UX 11i v3 has been enhanced to cache both IPv4 and IPv6 TCP connections. HP-UX 11i v1 and HP-UX 11i v2 support caching of IPv4 TCP connections only. HP-UX does not currently cache other transport protocols such as UDP.

Tuning Recommendation

Socket caching is enabled by default for IPv4 and IPv6 TCP connections. The default number of cached TCP endpoint resources is 512 per processor. The number of cached elements can be changed using the ndd tunable socket_caching_tcp. For example, to set the number of cached elements to 1024:

# ndd -set /dev/sockets socket_caching_tcp 1024
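To confirm that the setting took effect, the current value can be read back in the same command style (a usage sketch):

    # ndd -get /dev/sockets socket_caching_tcp
    1024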


More information

Performance and Recommended Use of AB545A 4-Port Gigabit Ethernet Cards

Performance and Recommended Use of AB545A 4-Port Gigabit Ethernet Cards Performance and Recommended Use of AB545A 4-Port Gigabit Ethernet Cards From Results on an HP rx4640 Server Table of Contents June 2005 Introduction... 3 Recommended Use Based on Performance and Design...

More information

Accelerating High-Speed Networking with Intel I/O Acceleration Technology

Accelerating High-Speed Networking with Intel I/O Acceleration Technology White Paper Intel I/O Acceleration Technology Accelerating High-Speed Networking with Intel I/O Acceleration Technology The emergence of multi-gigabit Ethernet allows data centers to adapt to the increasing

More information

Internet Firewall CSIS 4222. Packet Filtering. Internet Firewall. Examples. Spring 2011 CSIS 4222. net15 1. Routers can implement packet filtering

Internet Firewall CSIS 4222. Packet Filtering. Internet Firewall. Examples. Spring 2011 CSIS 4222. net15 1. Routers can implement packet filtering Internet Firewall CSIS 4222 A combination of hardware and software that isolates an organization s internal network from the Internet at large Ch 27: Internet Routing Ch 30: Packet filtering & firewalls

More information

A Transport Protocol for Multimedia Wireless Sensor Networks

A Transport Protocol for Multimedia Wireless Sensor Networks A Transport Protocol for Multimedia Wireless Sensor Networks Duarte Meneses, António Grilo, Paulo Rogério Pereira 1 NGI'2011: A Transport Protocol for Multimedia Wireless Sensor Networks Introduction Wireless

More information

PERFORMANCE TUNING ORACLE RAC ON LINUX

PERFORMANCE TUNING ORACLE RAC ON LINUX PERFORMANCE TUNING ORACLE RAC ON LINUX By: Edward Whalen Performance Tuning Corporation INTRODUCTION Performance tuning is an integral part of the maintenance and administration of the Oracle database

More information

Transport and Network Layer

Transport and Network Layer Transport and Network Layer 1 Introduction Responsible for moving messages from end-to-end in a network Closely tied together TCP/IP: most commonly used protocol o Used in Internet o Compatible with a

More information

Isilon IQ Network Configuration Guide

Isilon IQ Network Configuration Guide Isilon IQ Network Configuration Guide An Isilon Systems Best Practice Paper August 2008 ISILON SYSTEMS Table of Contents Cluster Networking Introduction...3 Assumptions...3 Cluster Networking Features...3

More information

hp ProLiant network adapter teaming

hp ProLiant network adapter teaming hp networking june 2003 hp ProLiant network adapter teaming technical white paper table of contents introduction 2 executive summary 2 overview of network addressing 2 layer 2 vs. layer 3 addressing 2

More information

[Prof. Rupesh G Vaishnav] Page 1

[Prof. Rupesh G Vaishnav] Page 1 Basics The function of transport layer is to provide a reliable end-to-end communications service. It also provides data transfer service for the user layers above and shield the upper layers from the

More information

Measure wireless network performance using testing tool iperf

Measure wireless network performance using testing tool iperf Measure wireless network performance using testing tool iperf By Lisa Phifer, SearchNetworking.com Many companies are upgrading their wireless networks to 802.11n for better throughput, reach, and reliability,

More information

Building a Highly Available and Scalable Web Farm

Building a Highly Available and Scalable Web Farm Page 1 of 10 MSDN Home > MSDN Library > Deployment Rate this page: 10 users 4.9 out of 5 Building a Highly Available and Scalable Web Farm Duwamish Online Paul Johns and Aaron Ching Microsoft Developer

More information

About Firewall Protection

About Firewall Protection 1. This guide describes how to configure basic firewall rules in the UTM to protect your network. The firewall then can provide secure, encrypted communications between your local network and a remote

More information

MODBUS MESSAGING ON TCP/IP IMPLEMENTATION GUIDE V1.0b CONTENTS

MODBUS MESSAGING ON TCP/IP IMPLEMENTATION GUIDE V1.0b CONTENTS MODBUS MESSAGING ON TCP/IP IMPLEMENTATION GUIDE V1.0b CONTENTS 1 INTRODUCTION... 2 1.1 OBJECTIVES... 2 1.2 CLIENT / SERVER MODEL... 2 1.3 REFERENCE DOCUMENTS... 3 2 ABBREVIATIONS... 3 3 CONTEXT... 3 3.1

More information

Technical Support Information Belkin internal use only

Technical Support Information Belkin internal use only The fundamentals of TCP/IP networking TCP/IP (Transmission Control Protocol / Internet Protocols) is a set of networking protocols that is used for communication on the Internet and on many other networks.

More information

Cisco Integrated Services Routers Performance Overview

Cisco Integrated Services Routers Performance Overview Integrated Services Routers Performance Overview What You Will Learn The Integrated Services Routers Generation 2 (ISR G2) provide a robust platform for delivering WAN services, unified communications,

More information

GlobalSCAPE DMZ Gateway, v1. User Guide

GlobalSCAPE DMZ Gateway, v1. User Guide GlobalSCAPE DMZ Gateway, v1 User Guide GlobalSCAPE, Inc. (GSB) Address: 4500 Lockhill-Selma Road, Suite 150 San Antonio, TX (USA) 78249 Sales: (210) 308-8267 Sales (Toll Free): (800) 290-5054 Technical

More information

Virtualization: TCP/IP Performance Management in a Virtualized Environment Orlando Share Session 9308

Virtualization: TCP/IP Performance Management in a Virtualized Environment Orlando Share Session 9308 Virtualization: TCP/IP Performance Management in a Virtualized Environment Orlando Share Session 9308 Laura Knapp WW Business Consultant Laurak@aesclever.com Applied Expert Systems, Inc. 2011 1 Background

More information

TCP Servers: Offloading TCP Processing in Internet Servers. Design, Implementation, and Performance

TCP Servers: Offloading TCP Processing in Internet Servers. Design, Implementation, and Performance TCP Servers: Offloading TCP Processing in Internet Servers. Design, Implementation, and Performance M. Rangarajan, A. Bohra, K. Banerjee, E.V. Carrera, R. Bianchini, L. Iftode, W. Zwaenepoel. Presented

More information

Tivoli IBM Tivoli Web Response Monitor and IBM Tivoli Web Segment Analyzer

Tivoli IBM Tivoli Web Response Monitor and IBM Tivoli Web Segment Analyzer Tivoli IBM Tivoli Web Response Monitor and IBM Tivoli Web Segment Analyzer Version 2.0.0 Notes for Fixpack 1.2.0-TIV-W3_Analyzer-IF0003 Tivoli IBM Tivoli Web Response Monitor and IBM Tivoli Web Segment

More information

Globus Striped GridFTP Framework and Server. Raj Kettimuthu, ANL and U. Chicago

Globus Striped GridFTP Framework and Server. Raj Kettimuthu, ANL and U. Chicago Globus Striped GridFTP Framework and Server Raj Kettimuthu, ANL and U. Chicago Outline Introduction Features Motivation Architecture Globus XIO Experimental Results 3 August 2005 The Ohio State University

More information

Quantifying the Performance Degradation of IPv6 for TCP in Windows and Linux Networking

Quantifying the Performance Degradation of IPv6 for TCP in Windows and Linux Networking Quantifying the Performance Degradation of IPv6 for TCP in Windows and Linux Networking Burjiz Soorty School of Computing and Mathematical Sciences Auckland University of Technology Auckland, New Zealand

More information

bbc Adobe LiveCycle Data Services Using the F5 BIG-IP LTM Introduction APPLIES TO CONTENTS

bbc Adobe LiveCycle Data Services Using the F5 BIG-IP LTM Introduction APPLIES TO CONTENTS TECHNICAL ARTICLE Adobe LiveCycle Data Services Using the F5 BIG-IP LTM Introduction APPLIES TO Adobe LiveCycle Enterprise Suite CONTENTS Introduction................................. 1 Edge server architecture......................

More information

ICOM 5026-090: Computer Networks Chapter 6: The Transport Layer. By Dr Yi Qian Department of Electronic and Computer Engineering Fall 2006 UPRM

ICOM 5026-090: Computer Networks Chapter 6: The Transport Layer. By Dr Yi Qian Department of Electronic and Computer Engineering Fall 2006 UPRM ICOM 5026-090: Computer Networks Chapter 6: The Transport Layer By Dr Yi Qian Department of Electronic and Computer Engineering Fall 2006 Outline The transport service Elements of transport protocols A

More information

ZEN LOAD BALANCER EE v3.04 DATASHEET The Load Balancing made easy

ZEN LOAD BALANCER EE v3.04 DATASHEET The Load Balancing made easy ZEN LOAD BALANCER EE v3.04 DATASHEET The Load Balancing made easy OVERVIEW The global communication and the continuous growth of services provided through the Internet or local infrastructure require to

More information

IP SAN Best Practices

IP SAN Best Practices IP SAN Best Practices A Dell Technical White Paper PowerVault MD3200i Storage Arrays THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES.

More information

Improving DNS performance using Stateless TCP in FreeBSD 9

Improving DNS performance using Stateless TCP in FreeBSD 9 Improving DNS performance using Stateless TCP in FreeBSD 9 David Hayes, Mattia Rossi, Grenville Armitage Centre for Advanced Internet Architectures, Technical Report 101022A Swinburne University of Technology

More information

Quantum StorNext. Product Brief: Distributed LAN Client

Quantum StorNext. Product Brief: Distributed LAN Client Quantum StorNext Product Brief: Distributed LAN Client NOTICE This product brief may contain proprietary information protected by copyright. Information in this product brief is subject to change without

More information

CHAPTER 3 PROBLEM STATEMENT AND RESEARCH METHODOLOGY

CHAPTER 3 PROBLEM STATEMENT AND RESEARCH METHODOLOGY 51 CHAPTER 3 PROBLEM STATEMENT AND RESEARCH METHODOLOGY Web application operations are a crucial aspect of most organizational operations. Among them business continuity is one of the main concerns. Companies

More information

Windows Server Performance Monitoring

Windows Server Performance Monitoring Spot server problems before they are noticed The system s really slow today! How often have you heard that? Finding the solution isn t so easy. The obvious questions to ask are why is it running slowly

More information

IP - The Internet Protocol

IP - The Internet Protocol Orientation IP - The Internet Protocol IP (Internet Protocol) is a Network Layer Protocol. IP s current version is Version 4 (IPv4). It is specified in RFC 891. TCP UDP Transport Layer ICMP IP IGMP Network

More information

Outline. TCP connection setup/data transfer. 15-441 Computer Networking. TCP Reliability. Congestion sources and collapse. Congestion control basics

Outline. TCP connection setup/data transfer. 15-441 Computer Networking. TCP Reliability. Congestion sources and collapse. Congestion control basics Outline 15-441 Computer Networking Lecture 8 TCP & Congestion Control TCP connection setup/data transfer TCP Reliability Congestion sources and collapse Congestion control basics Lecture 8: 09-23-2002

More information

Smart Tips. Enabling WAN Load Balancing. Key Features. Network Diagram. Overview. Featured Products. WAN Failover. Enabling WAN Load Balancing Page 1

Smart Tips. Enabling WAN Load Balancing. Key Features. Network Diagram. Overview. Featured Products. WAN Failover. Enabling WAN Load Balancing Page 1 Smart Tips Enabling WAN Load Balancing Overview Many small businesses today use broadband links such as DSL or Cable, favoring them over the traditional link such as T1/E1 or leased lines because of the

More information

Agenda. Enterprise Application Performance Factors. Current form of Enterprise Applications. Factors to Application Performance.

Agenda. Enterprise Application Performance Factors. Current form of Enterprise Applications. Factors to Application Performance. Agenda Enterprise Performance Factors Overall Enterprise Performance Factors Best Practice for generic Enterprise Best Practice for 3-tiers Enterprise Hardware Load Balancer Basic Unix Tuning Performance

More information

Zarząd (7 osób) F inanse (13 osób) M arketing (7 osób) S przedaż (16 osób) K adry (15 osób)

Zarząd (7 osób) F inanse (13 osób) M arketing (7 osób) S przedaż (16 osób) K adry (15 osób) QUESTION NO: 8 David, your TestKing trainee, asks you about basic characteristics of switches and hubs for network connectivity. What should you tell him? A. Switches take less time to process frames than

More information

Top 10 Tips for z/os Network Performance Monitoring with OMEGAMON Ernie Gilman

Top 10 Tips for z/os Network Performance Monitoring with OMEGAMON Ernie Gilman Top 10 Tips for z/os Network Performance Monitoring with OMEGAMON Ernie Gilman IBM Sr Consulting IT Specialist Session 10723 Agenda Overview of OMEGAMON for Mainframe Networks FP3 and z/os 1.12 1.OSA Express

More information

2057-15. First Workshop on Open Source and Internet Technology for Scientific Environment: with case studies from Environmental Monitoring

2057-15. First Workshop on Open Source and Internet Technology for Scientific Environment: with case studies from Environmental Monitoring 2057-15 First Workshop on Open Source and Internet Technology for Scientific Environment: with case studies from Environmental Monitoring 7-25 September 2009 TCP/IP Networking Abhaya S. Induruwa Department

More information

Network Simulation Traffic, Paths and Impairment

Network Simulation Traffic, Paths and Impairment Network Simulation Traffic, Paths and Impairment Summary Network simulation software and hardware appliances can emulate networks and network hardware. Wide Area Network (WAN) emulation, by simulating

More information

IPv4 and IPv6 Integration. Formation IPv6 Workshop Location, Date

IPv4 and IPv6 Integration. Formation IPv6 Workshop Location, Date IPv4 and IPv6 Integration Formation IPv6 Workshop Location, Date Agenda Introduction Approaches to deploying IPv6 Standalone (IPv6-only) or alongside IPv4 Phased deployment plans Considerations for IPv4

More information

Chapter 1 - Web Server Management and Cluster Topology

Chapter 1 - Web Server Management and Cluster Topology Objectives At the end of this chapter, participants will be able to understand: Web server management options provided by Network Deployment Clustered Application Servers Cluster creation and management

More information

Introduction to Mainframe (z/os) Network Management

Introduction to Mainframe (z/os) Network Management Introduction to Mainframe (z/os) Network Management Monday, August 10, 1:45-2:45 Session 17736 Dean Butler (butlerde@us.ibm.com) Agenda What is network management? Why manage the network on z/os? z/os

More information

Exploiting Remote Memory Operations to Design Efficient Reconfiguration for Shared Data-Centers over InfiniBand

Exploiting Remote Memory Operations to Design Efficient Reconfiguration for Shared Data-Centers over InfiniBand Exploiting Remote Memory Operations to Design Efficient Reconfiguration for Shared Data-Centers over InfiniBand P. Balaji, K. Vaidyanathan, S. Narravula, K. Savitha, H. W. Jin D. K. Panda Network Based

More information

Basic Networking Concepts. 1. Introduction 2. Protocols 3. Protocol Layers 4. Network Interconnection/Internet

Basic Networking Concepts. 1. Introduction 2. Protocols 3. Protocol Layers 4. Network Interconnection/Internet Basic Networking Concepts 1. Introduction 2. Protocols 3. Protocol Layers 4. Network Interconnection/Internet 1 1. Introduction -A network can be defined as a group of computers and other devices connected

More information

First Midterm for ECE374 02/25/15 Solution!!

First Midterm for ECE374 02/25/15 Solution!! 1 First Midterm for ECE374 02/25/15 Solution!! Instructions: Put your name and student number on each sheet of paper! The exam is closed book. You have 90 minutes to complete the exam. Be a smart exam

More information

High Performance Cluster Support for NLB on Window

High Performance Cluster Support for NLB on Window High Performance Cluster Support for NLB on Window [1]Arvind Rathi, [2] Kirti, [3] Neelam [1]M.Tech Student, Department of CSE, GITM, Gurgaon Haryana (India) arvindrathi88@gmail.com [2]Asst. Professor,

More information

Proxy Server, Network Address Translator, Firewall. Proxy Server

Proxy Server, Network Address Translator, Firewall. Proxy Server Proxy Server, Network Address Translator, Firewall 1 Proxy Server 2 1 Introduction What is a proxy server? Acts on behalf of other clients, and presents requests from other clients to a server. Acts as

More information

MINIMUM NETWORK REQUIREMENTS 1. REQUIREMENTS SUMMARY... 1

MINIMUM NETWORK REQUIREMENTS 1. REQUIREMENTS SUMMARY... 1 Table of Contents 1. REQUIREMENTS SUMMARY... 1 2. REQUIREMENTS DETAIL... 2 2.1 DHCP SERVER... 2 2.2 DNS SERVER... 2 2.3 FIREWALLS... 3 2.4 NETWORK ADDRESS TRANSLATION... 4 2.5 APPLICATION LAYER GATEWAY...

More information

DEPLOYMENT GUIDE Version 1.1. Configuring BIG-IP WOM with Oracle Database Data Guard, GoldenGate, Streams, and Recovery Manager

DEPLOYMENT GUIDE Version 1.1. Configuring BIG-IP WOM with Oracle Database Data Guard, GoldenGate, Streams, and Recovery Manager DEPLOYMENT GUIDE Version 1.1 Configuring BIG-IP WOM with Oracle Database Data Guard, GoldenGate, Streams, and Recovery Manager Table of Contents Table of Contents Configuring BIG-IP WOM with Oracle Database

More information

ACHILLES CERTIFICATION. SIS Module SLS 1508

ACHILLES CERTIFICATION. SIS Module SLS 1508 ACHILLES CERTIFICATION PUBLIC REPORT Final DeltaV Report SIS Module SLS 1508 Disclaimer Wurldtech Security Inc. retains the right to change information in this report without notice. Wurldtech Security

More information

1. Comments on reviews a. Need to avoid just summarizing web page asks you for:

1. Comments on reviews a. Need to avoid just summarizing web page asks you for: 1. Comments on reviews a. Need to avoid just summarizing web page asks you for: i. A one or two sentence summary of the paper ii. A description of the problem they were trying to solve iii. A summary of

More information

Chapter 3. TCP/IP Networks. 3.1 Internet Protocol version 4 (IPv4)

Chapter 3. TCP/IP Networks. 3.1 Internet Protocol version 4 (IPv4) Chapter 3 TCP/IP Networks 3.1 Internet Protocol version 4 (IPv4) Internet Protocol version 4 is the fourth iteration of the Internet Protocol (IP) and it is the first version of the protocol to be widely

More information

VXLAN: Scaling Data Center Capacity. White Paper

VXLAN: Scaling Data Center Capacity. White Paper VXLAN: Scaling Data Center Capacity White Paper Virtual Extensible LAN (VXLAN) Overview This document provides an overview of how VXLAN works. It also provides criteria to help determine when and where

More information

High-Speed TCP Performance Characterization under Various Operating Systems

High-Speed TCP Performance Characterization under Various Operating Systems High-Speed TCP Performance Characterization under Various Operating Systems Y. Iwanaga, K. Kumazoe, D. Cavendish, M.Tsuru and Y. Oie Kyushu Institute of Technology 68-4, Kawazu, Iizuka-shi, Fukuoka, 82-852,

More information

STANDPOINT FOR QUALITY-OF-SERVICE MEASUREMENT

STANDPOINT FOR QUALITY-OF-SERVICE MEASUREMENT STANDPOINT FOR QUALITY-OF-SERVICE MEASUREMENT 1. TIMING ACCURACY The accurate multi-point measurements require accurate synchronization of clocks of the measurement devices. If for example time stamps

More information

Allocating Network Bandwidth to Match Business Priorities

Allocating Network Bandwidth to Match Business Priorities Allocating Network Bandwidth to Match Business Priorities Speaker Peter Sichel Chief Engineer Sustainable Softworks psichel@sustworks.com MacWorld San Francisco 2006 Session M225 12-Jan-2006 10:30 AM -

More information

OpenFlow Based Load Balancing

OpenFlow Based Load Balancing OpenFlow Based Load Balancing Hardeep Uppal and Dane Brandon University of Washington CSE561: Networking Project Report Abstract: In today s high-traffic internet, it is often desirable to have multiple

More information

Overlapping Data Transfer With Application Execution on Clusters

Overlapping Data Transfer With Application Execution on Clusters Overlapping Data Transfer With Application Execution on Clusters Karen L. Reid and Michael Stumm reid@cs.toronto.edu stumm@eecg.toronto.edu Department of Computer Science Department of Electrical and Computer

More information

IP Network Layer. Datagram ID FLAG Fragment Offset. IP Datagrams. IP Addresses. IP Addresses. CSCE 515: Computer Network Programming TCP/IP

IP Network Layer. Datagram ID FLAG Fragment Offset. IP Datagrams. IP Addresses. IP Addresses. CSCE 515: Computer Network Programming TCP/IP CSCE 515: Computer Network Programming TCP/IP IP Network Layer Wenyuan Xu Department of Computer Science and Engineering University of South Carolina IP Datagrams IP is the network layer packet delivery

More information

D1.2 Network Load Balancing

D1.2 Network Load Balancing D1. Network Load Balancing Ronald van der Pol, Freek Dijkstra, Igor Idziejczak, and Mark Meijerink SARA Computing and Networking Services, Science Park 11, 9 XG Amsterdam, The Netherlands June ronald.vanderpol@sara.nl,freek.dijkstra@sara.nl,

More information

Intel Data Direct I/O Technology (Intel DDIO): A Primer >

Intel Data Direct I/O Technology (Intel DDIO): A Primer > Intel Data Direct I/O Technology (Intel DDIO): A Primer > Technical Brief February 2012 Revision 1.0 Legal Statements INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE,

More information

Lecture 15: Congestion Control. CSE 123: Computer Networks Stefan Savage

Lecture 15: Congestion Control. CSE 123: Computer Networks Stefan Savage Lecture 15: Congestion Control CSE 123: Computer Networks Stefan Savage Overview Yesterday: TCP & UDP overview Connection setup Flow control: resource exhaustion at end node Today: Congestion control Resource

More information

Guide to TCP/IP, Third Edition. Chapter 3: Data Link and Network Layer TCP/IP Protocols

Guide to TCP/IP, Third Edition. Chapter 3: Data Link and Network Layer TCP/IP Protocols Guide to TCP/IP, Third Edition Chapter 3: Data Link and Network Layer TCP/IP Protocols Objectives Understand the role that data link protocols, such as SLIP and PPP, play for TCP/IP Distinguish among various

More information

Microsoft SQL Server 2012 on Cisco UCS with iscsi-based Storage Access in VMware ESX Virtualization Environment: Performance Study

Microsoft SQL Server 2012 on Cisco UCS with iscsi-based Storage Access in VMware ESX Virtualization Environment: Performance Study White Paper Microsoft SQL Server 2012 on Cisco UCS with iscsi-based Storage Access in VMware ESX Virtualization Environment: Performance Study 2012 Cisco and/or its affiliates. All rights reserved. This

More information

AS/400e. TCP/IP routing and workload balancing

AS/400e. TCP/IP routing and workload balancing AS/400e TCP/IP routing and workload balancing AS/400e TCP/IP routing and workload balancing Copyright International Business Machines Corporation 2000. All rights reserved. US Government Users Restricted

More information

Chapter 12 Supporting Network Address Translation (NAT)

Chapter 12 Supporting Network Address Translation (NAT) [Previous] [Next] Chapter 12 Supporting Network Address Translation (NAT) About This Chapter Network address translation (NAT) is a protocol that allows a network with private addresses to access information

More information

Network Security TCP/IP Refresher

Network Security TCP/IP Refresher Network Security TCP/IP Refresher What you (at least) need to know about networking! Dr. David Barrera Network Security HS 2014 Outline Network Reference Models Local Area Networks Internet Protocol (IP)

More information

An Oracle White Paper July 2011. Oracle Primavera Contract Management, Business Intelligence Publisher Edition-Sizing Guide

An Oracle White Paper July 2011. Oracle Primavera Contract Management, Business Intelligence Publisher Edition-Sizing Guide Oracle Primavera Contract Management, Business Intelligence Publisher Edition-Sizing Guide An Oracle White Paper July 2011 1 Disclaimer The following is intended to outline our general product direction.

More information

Protocols. Packets. What's in an IP packet

Protocols. Packets. What's in an IP packet Protocols Precise rules that govern communication between two parties TCP/IP: the basic Internet protocols IP: Internet Protocol (bottom level) all packets shipped from network to network as IP packets

More information

Chapter 2 TOPOLOGY SELECTION. SYS-ED/ Computer Education Techniques, Inc.

Chapter 2 TOPOLOGY SELECTION. SYS-ED/ Computer Education Techniques, Inc. Chapter 2 TOPOLOGY SELECTION SYS-ED/ Computer Education Techniques, Inc. Objectives You will learn: Topology selection criteria. Perform a comparison of topology selection criteria. WebSphere component

More information

Active-Active and High Availability

Active-Active and High Availability Active-Active and High Availability Advanced Design and Setup Guide Perceptive Content Version: 7.0.x Written by: Product Knowledge, R&D Date: July 2015 2015 Perceptive Software. All rights reserved. Lexmark

More information

ZEN LOAD BALANCER EE v3.02 DATASHEET The Load Balancing made easy

ZEN LOAD BALANCER EE v3.02 DATASHEET The Load Balancing made easy ZEN LOAD BALANCER EE v3.02 DATASHEET The Load Balancing made easy OVERVIEW The global communication and the continuous growth of services provided through the Internet or local infrastructure require to

More information

Overview of TCP/IP. TCP/IP and Internet

Overview of TCP/IP. TCP/IP and Internet Overview of TCP/IP System Administrators and network administrators Why networking - communication Why TCP/IP Provides interoperable communications between all types of hardware and all kinds of operating

More information

QoS & Traffic Management

QoS & Traffic Management QoS & Traffic Management Advanced Features for Managing Application Performance and Achieving End-to-End Quality of Service in Data Center and Cloud Computing Environments using Chelsio T4 Adapters Chelsio

More information