Testing the Performance of Multiple TCP/IP Stacks
White Paper by John L. Wood, Christopher D. Selvaggi, & John Q. Walker

Contents:
Introduction
Our Implementation
TCP and UDP Performance
Stack Performance
Performance Implications of the TCP Close
Conclusions
Appendix A: Test Methodology
Appendix B: Details of Test Scripts
Copyright Information

Performance of TCP/IP applications varies widely among stacks, operating systems, and hardware platforms. We describe our experiences with TCP and UDP performance on eight operating systems, ranging from Windows 3.11 to UNIX. We address issues such as performance differences between TCP and UDP, throughput and response time differences among different operating systems and Ethernet speeds, and how a program's use of Sockets calls can affect performance.

An earlier version of this paper was presented at the Computer Measurement Group conference in 1997: John L. Wood, Christopher D. Selvaggi, and John Q. Walker II, "Testing the Performance of Multiple TCP/IP Stacks," Proceedings of CMG97, December 7-12, 1997, volume 1.
Introduction

Computer networks change at an ever-greater rate. Software vendors package TCP/IP protocol stacks with their operating systems, and new applications, including web browsers and e-mail, readily make use of these stacks. Network administrators upgrade the hardware and software in their networks as users find they need ever more bandwidth. But how does network hardware and software really perform? What happens when it is lightly loaded, and what happens when it reaches its boundaries? Which network hardware has the capacity to handle the traffic in a given network?

We used a set of software programs to drive the TCP and UDP protocol stacks, measuring the round-trip performance of three typical network transactions. We describe here the differences we saw between TCP and UDP. We compare the performance of protocol stacks among a variety of operating systems, at different Ethernet speeds. We discuss what we learned about tuning the stacks. We also discuss in detail the use of the Sockets close call, and how it affects the performance of short, back-to-back connections.

Our Implementation

We used a software application named Chariot to explore these basic network performance questions. Chariot generates network traffic between pairs of computers and observes the performance of the traffic. Traffic patterns are highly tailorable, letting you recreate the traffic generated by real user applications. Performance tests, capturing everything about the traffic patterns and the computers involved, can be saved and reliably repeated. For example, you can see the effect of changing the hardware or software along network paths; you can see the effect of adding new users; or you can track the level of service available in the network.

Figure 1: Performance tests are set up at the Console and run between a pair of computers, labeled Endpoint 1 and Endpoint 2.

The set of programs is operated from the console, where you create and run tests.
Creating a test consists of deciding which computers to use and the type of data traffic to run between them. We refer to the computers executing the tests as endpoints. An endpoint pair comprises the network addresses of the two endpoint computers, the network protocol to use between them, and the type of application to emulate. For each endpoint pair, you select an application script corresponding to the application you are simulating. The endpoint programs use an application script to generate the same data flows that an application would, without installing the application. A set of pre-built application scripts provides standard performance benchmarks and emulates common end-user applications.

Today, endpoints run on twenty operating systems [1], supporting six network protocols [2]. Support for TCP and UDP is available on all these systems. On the Windows platforms of 1997, endpoints issued calls to the WinSock application programming interface (API) at the 1.1 level. On the other platforms, endpoints issue Sockets calls. Our programs issued their WinSock and Sockets calls as blocking calls. We saw a performance degradation of 20% to 50% when using nonblocking calls on the code paths where we were measuring performance. Chariot supports tests with multiple concurrent connections between any endpoints. We limited the scope of our testing here to just one connection at a time between endpoints.

[1] Operating systems tested: Windows NT, Windows 95, and Windows 3.11, OS/2, NetWare, AIX, HP-UX, and Sun Solaris.
[2] Network protocols supported: APPC, IPX, SPX, TCP, UDP, and RTP.

Copyright NetIQ Corporation
We used three benchmark scripts in our testing. The application scripts called CREDITS and CREDITL simulate repeated credit-check transactions: an endpoint sends a small record and gets an acknowledgment. Latency and turnaround time have a big effect when running these scripts; buffer sizes have little effect. The FILESNDL application script simulates a file transfer by sending a large block of data and getting an acknowledgment in return. This causes multiple, full buffers to flow on the network in one direction, so the stack's buffering and windowing have a greater effect than latency and turnaround time.

CREDITS (credit-check, short connections): a transaction consists of sending 100 bytes from the first endpoint to the second, which replies with a one-byte acknowledgment. A connection between the endpoints is brought up and taken down within each repeated transaction; the connection time is measured as part of each transaction.

CREDITL (credit-check, long connection): as with CREDITS, 100 bytes is sent and one byte is received in reply. One connection is brought up and multiple transactions are repeated before taking it down. In comparing CREDITL to CREDITS, you can see the effect of bringing up and taking down connections.

FILESNDL (file-send, long connection): a transaction consists of sending a large number of bytes and receiving a one-byte confirmation. As with CREDITL, connections span many transactions and connection time is not measured.

These three benchmark scripts and the parameters we used are described in further detail in Appendix B.

We used a matched pair of Intel-based computers with matched sets of Ethernet adapters when testing the PC operating systems. With disk partitioning and the multi-boot program System Commander, we configured five operating systems on each of the two computers. We also tested from one of these computers (running Windows NT as Endpoint 1) to three different UNIX platforms.
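To make the credit-check traffic pattern concrete, here is a minimal Python sketch of one CREDITS-style transaction over TCP on the loopback interface. This is our own illustration, not Chariot code; the endpoints issue the equivalent blocking Sockets calls in C.

```python
import socket
import threading

def credit_server(listener):
    """Endpoint 2: accept one connection, receive a 100-byte record,
    reply with a one-byte acknowledgment."""
    conn, _ = listener.accept()
    with conn:
        record = b""
        while len(record) < 100:              # blocking recv until the record arrives
            chunk = conn.recv(100 - len(record))
            if not chunk:
                break
            record += chunk
        conn.sendall(b"A")                    # the one-byte confirmation

def credit_transaction(port):
    """Endpoint 1: one short-connection transaction: connect,
    send 100 bytes, get one byte back, disconnect."""
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(b"x" * 100)                 # the 100-byte credit record
        return s.recv(1)

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))               # ephemeral port
listener.listen(1)
port = listener.getsockname()[1]

t = threading.Thread(target=credit_server, args=(listener,))
t.start()
ack = credit_transaction(port)
t.join()
listener.close()
print(ack)
```

In CREDITS, the connect and close happen inside each timed transaction, as above; in CREDITL, the same send/receive exchange is repeated many times over a single connection.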
The setup and configuration for the computers are described in Appendix A.

TCP and UDP Performance

Programs that communicate over a network follow a protocol for the exchange of data. A connection-oriented or stream protocol provides reliable delivery of data at the cost of a set of initialization and termination procedures. TCP is a connection-oriented protocol. A connectionless or datagram protocol, such as UDP, provides a best-effort delivery service. The network tries to deliver application data to the recipient, but if there are problems along the way, the data is lost. Moreover, the application is not notified of the loss. In spite of the unreliable nature of datagram protocols, they are frequently used by network applications because they do not incur the overhead associated with connection establishment and takedown.

UDP Reliable Datagram Implementation

Datagrams work like two people exchanging letters using the postal service: there is no guarantee letters arrive in order, or even at all. If applications are to use UDP, they must follow an approach that ensures the data is properly exchanged. Such an approach typically requires the use of:

- acknowledgments, to let the sender know the partner has received data,
- timers, so the sender can retransmit its data if it doesn't receive an acknowledgment from the partner soon enough, and
- a flow control mechanism, to prevent the sender from flooding its partner with too much data.

Our test programs employ a straightforward datagram protocol.

1. A window scheme is used as the flow control mechanism: a sender transmits a certain amount of data before waiting for an acknowledgment from the receiver.
2. A sender waits for a period of time (the retransmission time-out period) to receive an acknowledgment from the partner. If the acknowledgment does not arrive in time, the
sender retransmits the window of unacknowledged data.
3. An acknowledgment is sent by a receiver when a window is filled or all the data is received for a script RECEIVE command.
4. In the case where the receiver can detect lost datagrams, it sends an acknowledgment indicating how much of the window was received in sequence, allowing the sender to retransmit what was not received.

This datagram protocol is a subset of the functionality TCP provides to ensure that data is received reliably. If an application sends one traffic flow pattern all the time, the reliable receive algorithms can be tuned so that UDP will outperform TCP. As the traffic flow pattern becomes more varied, it is hard to build a reliable transport mechanism on UDP that outperforms the reliable transport mechanism implemented by TCP. It is not surprising that TCP performed as well as UDP or better in many of our tests.

TCP and UDP provide different levels of functionality. TCP provides reliable transport, whereas UDP allows for low-overhead stateless transactions. UDP works best with applications that use short transactions. For long-running transactions, TCP is more efficient and can overcome the connection overhead. In general, we want to demonstrate how performance is affected by protocol differences.

In comparing TCP and UDP, we used one operating system and TCP/IP protocol stack: the latest version of Microsoft's Windows NT 4.0. Our comparison involves several variables: the quality of the TCP and UDP layers in Windows NT, Chariot's implementation of a datagram protocol, and the network layout. We used two matched computers on a single LAN segment. In these tests, we did not explore multi-hop network topologies. Similarly, we did not measure how routers treat TCP and UDP.

Connection Establishment Overhead Test

TCP is connection-oriented and therefore incurs setup time while the connection is established. This shows up as slower response time for transactions where the connection is started and stopped.
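The connection-establishment cost is easy to observe even on a loopback interface. The following sketch is our own (the published numbers were produced with Chariot on real networks); it times CREDITS-style transactions, each on its own connection, against CREDITL-style transactions sharing one long connection:

```python
import socket
import threading
import time

HOST = "127.0.0.1"
N = 25  # transactions per test; a small stand-in for Chariot's timing records

def recv_exact(conn, n):
    """Block until exactly n bytes arrive (or the peer closes)."""
    data = b""
    while len(data) < n:
        chunk = conn.recv(n - len(data))
        if not chunk:
            break
        data += chunk
    return data

def server(listener, short_connections):
    if short_connections:                    # CREDITS-style: connect per transaction
        for _ in range(N):
            conn, _ = listener.accept()
            with conn:
                recv_exact(conn, 100)
                conn.sendall(b"A")
    else:                                    # CREDITL-style: one long connection
        conn, _ = listener.accept()
        with conn:
            for _ in range(N):
                recv_exact(conn, 100)
                conn.sendall(b"A")

def seconds_per_transaction(short_connections):
    listener = socket.socket()
    listener.bind((HOST, 0))
    listener.listen(5)
    port = listener.getsockname()[1]
    t = threading.Thread(target=server, args=(listener, short_connections))
    t.start()
    start = time.perf_counter()
    if short_connections:
        for _ in range(N):                   # setup/takedown inside each transaction
            with socket.create_connection((HOST, port)) as s:
                s.sendall(b"x" * 100)
                s.recv(1)
    else:
        with socket.create_connection((HOST, port)) as s:
            for _ in range(N):               # connection spans all transactions
                s.sendall(b"x" * 100)
                s.recv(1)
    elapsed = time.perf_counter() - start
    t.join()
    listener.close()
    return elapsed / N

short = seconds_per_transaction(True)
long_ = seconds_per_transaction(False)
print(f"short connections: {short:.6f} s/tx; long connection: {long_:.6f} s/tx")
```

On most systems the short-connection figure includes the full connect/close handshake per transaction, which is exactly the overhead the CREDITS versus CREDITL comparison isolates.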
The cost is incurred regardless of the underlying network type. This protocol difference is demonstrated using the CREDITS and CREDITL scripts. A test consisted of one pair of computers, using 10 Mbps and 100 Mbps Ethernet, for both TCP and UDP. The results appear in the graphs below.

Figure 2: Response time with 10 Mbps Ethernet, using the CREDITS and CREDITL scripts. Units are shown in seconds per transaction; lower is better.
Figure 3: Response time with 100 Mbps Ethernet, using the CREDITS and CREDITL scripts. Units are shown in seconds per transaction; lower is better.

Connection establishment overhead clearly causes poorer response time for short-lived transactions. The response time can be improved by using long-lived transactions. The response time for UDP is about the same in both the CREDITS and CREDITL scripts, as it should be, since no connection-establishment overhead cost is incurred. Using a 10x faster Ethernet link did not result in a 10x improvement in response time; this was the case for both protocols. We believe computer constraints, such as CPU and bus speeds, limit what is attainable.

Comparing TCP and UDP Response Time

The CREDITL script also serves well as an apples-to-apples test. The reported measurements do not include connection-establishment overhead, so the comparison is between how well TCP and UDP send data. The results in Figure 4 show that better response time can be had with TCP. TCP's reliable transport algorithm is able to outperform the reliable transport algorithm we implemented for UDP. When the overhead of connection setup and takedown is removed, this test shows that TCP provides an efficient mechanism for achieving excellent response time.

Figure 4: Response time with 10 Mbps and 100 Mbps Ethernet, using the CREDITL script. Units are shown in seconds per transaction; lower is better.
Comparing TCP and UDP Throughput

When optimizing throughput, four variables are important: file size, send buffer size, window size, and maximum transmission unit (MTU). In comparing TCP and UDP, we chose UDP and TCP values that would maximize throughput. We used the FILESNDL script to compare the throughput differences of UDP and TCP. Each test used a send size of 1,460,000 bytes and a send buffer size of 32,767 bytes for TCP and 8,863 bytes for UDP. The TCP window size was 8,760 bytes, and the UDP window size was set to 17,726 bytes. With 100 Mbps Ethernet we were able to increase the UDP send buffer size to 32,543 bytes and the window size to 130,172 bytes. The MTU size was left at the maximum of 1,500 bytes for all tests.

Changing the Send Buffer Size

The network packet size, sometimes called the frame size, is the maximum amount of data a network adapter can transmit at one time. When the size of a datagram exceeds the packet size, the network protocol stack breaks the datagram into pieces, each no larger than the packet size. This process is called datagram fragmentation. The process of putting the packets back together, which is done at the destination computer, is called datagram reassembly. IP supports fragmentation and reassembly for higher-layer protocols such as UDP and TCP. If the TCP/IP network protocol stack can reassemble datagram fragments faster than the application software can issue API send calls, tests can run faster, provided you configure the send buffer size as large as possible. On the other hand, a large number of datagram fragments may increase the congestion in a network and, therefore, the likelihood that one of them may be dropped. If that occurs, the entire datagram must be retransmitted, causing poorer performance.

TCP avoids IP datagram fragmentation when possible by breaking data into MTU-sized pieces. Since TCP ensures the delivery of every IP datagram it sends, if one datagram is lost it only requires the retransmission of that datagram.
UDP, on the other hand, does not avoid IP datagram fragmentation. UDP passes on whatever size buffer it gets; IP then fragments it and sends it out. If a 32-KB send is issued to TCP, the block of data is broken up by TCP into MTU-sized pieces; if one piece is lost, only that piece needs to be retransmitted. If a 32-KB send is issued to UDP, the 32 KB is passed directly to IP, and IP breaks it into MTU-sized fragments. If one fragment is lost, the whole 32-KB datagram is considered lost and the whole 32 KB has to be retransmitted.

For the best performance, the TCP send buffer size should be as large as possible. For UDP throughput tests, the send buffer should be tuned for the underlying network. We found a send buffer size of 8,863 bytes to be best for 10 Mbps Ethernet and 32,543 bytes for 100 Mbps. This causes IP fragmentation, but not enough to result in lost packets and thus degrade performance.
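The difference can be made concrete with a little arithmetic. This sketch is our own illustration, using the 1,500-byte Ethernet MTU and the 20-byte TCP and IP headers mentioned in the text (plus the standard 8-byte UDP header); it counts the frames a TCP send occupies versus the IP fragments a UDP send produces:

```python
import math

MTU = 1500          # Ethernet frame payload limit
IP_HDR = 20
TCP_HDR = 20
UDP_HDR = 8

def tcp_frames(send_size):
    """TCP segments the data itself: each full frame carries
    MTU - 40 = 1,460 bytes of user data."""
    mss = MTU - IP_HDR - TCP_HDR
    return math.ceil(send_size / mss)

def udp_fragments(send_size):
    """UDP hands the whole datagram to IP, and IP fragments it.
    Each fragment carries a 20-byte IP header, leaving 1,480 bytes
    of payload (a multiple of 8, as fragmentation requires); the
    8-byte UDP header rides in the first fragment."""
    payload = MTU - IP_HDR
    return math.ceil((send_size + UDP_HDR) / payload)

# A 32-KB send occupies a similar number of frames either way, but the
# retransmission unit differs: TCP resends one lost ~1.5-KB segment,
# while UDP must resend the entire 32-KB datagram if any fragment is lost.
print(tcp_frames(32 * 1024), udp_fragments(32 * 1024))
```

The frame counts are nearly identical; what differs is the cost of losing any one of them, which is why the UDP send buffer has to be tuned down to the point where fragment loss stays negligible.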
Figure 5: Throughput with UDP and TCP, on 10 Mbps and 100 Mbps Ethernet, using the FILESNDL script. Units are shown in Mbps; higher is better.

When using 10 Mbps Ethernet, UDP and TCP perform roughly the same. The 100 Mbps Ethernet tests show the difference between them: TCP outperforms UDP by approximately 3 Mbps. Even a highly-tuned UDP algorithm has a hard time significantly improving on the performance possible with TCP.

We first tested throughput, using the FILESNDL script. We used the same operating system for both endpoints (except for the UNIX tests, where we used a Windows NT computer as Endpoint 1). We ran the same set of tests on 10 Mbps and 100 Mbps Ethernet LANs.

Stack Performance

We used a single operating system and protocol stack when comparing TCP and UDP performance, above. To examine stack performance, we used the same hardware, but varied the operating systems and stacks. We used each operating system and stack with their shipped default parameters. The setup is described in Appendix A.

Figure 6: Throughput with 10 Mbps Ethernet, using the FILESNDL script. Units are shown in Mbps; higher is better.
Figure 7: Throughput with 100 Mbps Ethernet, using the FILESNDL script. Units are shown in Mbps; higher is better.

Each test sent 1,460,000 bytes 100 times and timed each 1,460,000-byte transfer. Every stack (but OS/2) was able to use almost all of the 10 Mbps link, with the three UNIX computers a little better than the rest. When we look at the 100 Mbps results, we begin to notice some differentiation among stack performance. Also, the throughput performance does not scale directly when moving from 10 Mbps to 100 Mbps; it is much harder for a stack to take advantage of the 100 Mbps available.

There are two surprising results in the 100 Mbps tests. First, the OS/2 throughput is poorer than expected. As a mature multitasking PC operating system, we expected its performance to be on par with Windows NT and better than Windows 95 and Windows 3.1. When the same test is run with a shorter file size (100,000 bytes), the OS/2 throughput drops to 17 Mbps, while the other stacks only dropped by 5 or 6 Mbps (shown in Figure 8). The second surprise was the good performance of Microsoft's 32-bit TCP/IP stack for Windows for Workgroups (WFW) 3.11. With other 16-bit stacks on Windows 3.1 or 3.11 using 100 Mbps Ethernet, we saw little over 10 Mbps of throughput. With most Windows 3.1 TCP stacks, upgrading to 100 Mbps was not advantageous; this was not the case with Microsoft's 32-bit stack.

Tuning for Throughput

There are four parameters that have a significant impact on the throughput of a TCP/IP stack: file size, send buffer size, MTU, and window size.

File size is how much user data to send from one program to another. The file size needs to be big enough to allow an accurate measurement of the network. Too small a size yields an unrealistically low measure of your network's throughput. We noticed a measurable increase in performance when file size is increased from 100,000 bytes to 1,460,000 bytes.
The increase in performance for file sizes over 1,460,000 bytes is negligible.
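A FILESNDL-style throughput measurement can be sketched in a few lines. This is our own loopback illustration, not the Chariot implementation; it times a single transfer (send a block of user data, wait for the one-byte confirmation) and reports Mbps for two file sizes:

```python
import socket
import threading
import time

HOST = "127.0.0.1"

def sink(listener, total):
    """Receiving endpoint: drain `total` bytes, then send the
    one-byte confirmation."""
    conn, _ = listener.accept()
    with conn:
        received = 0
        while received < total:
            chunk = conn.recv(65536)
            if not chunk:
                break
            received += len(chunk)
        conn.sendall(b"A")

def filesend_throughput(file_size, send_buffer=32767):
    """Send file_size bytes in send_buffer-sized API calls, wait for the
    one-byte reply, and return throughput in Mbps."""
    listener = socket.socket()
    listener.bind((HOST, 0))
    listener.listen(1)
    t = threading.Thread(target=sink, args=(listener, file_size))
    t.start()
    data = b"x" * send_buffer
    start = time.perf_counter()
    with socket.create_connection((HOST, listener.getsockname()[1])) as s:
        sent = 0
        while sent < file_size:
            n = min(send_buffer, file_size - sent)
            s.sendall(data[:n])            # one API crossing per send buffer
            sent += n
        s.recv(1)                          # the one-byte confirmation
    elapsed = time.perf_counter() - start
    t.join()
    listener.close()
    return file_size * 8 / elapsed / 1e6   # Mbps

small = filesend_throughput(100_000)
large = filesend_throughput(1_460_000)
print(f"100 KB: {small:.1f} Mbps; 1,460 KB: {large:.1f} Mbps")
```

On loopback the absolute numbers are meaningless, but the structure is the same as the paper's tests: too small a file size amortizes the startup cost over too little data and understates the sustainable rate.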
Figure 8: Throughput with 100 Mbps Ethernet, comparing FILESNDL file sizes of 1,460,000 bytes and 100,000 bytes. Units are shown in Mbps; higher is better.

Send buffer size is the number of bytes of user data provided on each TCP Send call. This number should be as large as possible; the fewer API crossings, the better. On many operating systems, a Send call involves crossing from user space to kernel space. Once the data makes this crossing, it can be sent from the kernel very efficiently. We want to get as much data to the TCP stack as we can. We used a 32-KB send buffer size in our tests. Many stacks allow a 64-KB send buffer size.

MTU (maximum transmission unit) is the frame size allowed on the link. For Ethernet, this is normally already set to 1,500 bytes, which means 1,500 bytes is all that can be sent at a time. One 32-KB send will result in 23 MTU-sized frames. Notice that 32 KB divided by 1,500 is only about 21.8, so how do we get 23 frames? 32 KB is the amount of user data requested to be sent. The 1,500-byte MTU size is the total data that can be sent on the link, which includes user data and protocol headers (20 bytes for TCP and 20 bytes for IP). The amount of user data that can be sent per frame is thus 1,460 bytes, and 32,768 divided by 1,460 rounds up to 23.

To improve throughput, the TCP stack should send full frames. To do this best, we send in multiples of 1,460 bytes. We used a file size of 1,460,000 with FILESNDL, so each frame sent would be full. To demonstrate the effect of not filling frames, we ran two tests, the first with a send buffer size of 1,460 bytes (a full frame) and the second with a send buffer size of 1,461 bytes. Between our two Windows NT computers, performance drops by over one Mbps on a 10 Mbps Ethernet link. To increase performance it is important to understand what is taking place on the underlying network, and then set your parameters accordingly.
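The arithmetic behind the full-frame effect is simple enough to sketch. This is our own illustration, using the header sizes given in the text:

```python
import math

MTU = 1500
HEADERS = 40                      # 20 bytes TCP + 20 bytes IP
MSS = MTU - HEADERS               # 1,460 bytes of user data per full frame

def frames_per_send(send_buffer_size):
    """Number of Ethernet frames one TCP send of this size occupies,
    assuming the stack fills frames to the MSS."""
    return math.ceil(send_buffer_size / MSS)

# A 1,460-byte send fills exactly one frame; one byte more spills into a
# second, nearly empty frame, which is why throughput drops measurably.
print(frames_per_send(1460), frames_per_send(1461))
```

The same reasoning explains the file size of 1,460,000 bytes: it is an exact multiple of the MSS, so every frame of the transfer is full.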
Figure 9: Throughput on Windows NT with 10 Mbps Ethernet, comparing send buffer sizes of 1,460 and 1,461 bytes (where the MTU was 1,500 bytes, including 40 bytes of header). Units are shown in Mbps; higher is better.

Window size is the amount of user data that can be sent before an acknowledgment is required. A common default for this parameter is 8,760 bytes, which means that after sending 8,760 bytes of data, the TCP stack must wait for an acknowledgment before proceeding. Acknowledgments are a safeguard against one computer sending data faster than the other computer can process it. To increase throughput, the number of acknowledgments should be low. If all computers can handle it, the window size can be increased close to 64 KB. With our computers, increasing the window size did not help performance noticeably after we had adjusted the other parameters. In a different configuration, such as multiple file transfers executing simultaneously, the window size was seen to have a greater effect.

Response Time

Our next set of tests was designed to show differences in response time among the stacks. Response time is the average time it takes to complete a transaction. We used the CREDITS script (short transactions, which include the connection setup and takedown) in the same configuration as the throughput tests.

Figure 10: Response time with 10 Mbps Ethernet, using the CREDITS script. Units are shown in seconds per transaction; lower is better.
Figure 11: Response time with 100 Mbps Ethernet, using the CREDITS script. Units are shown in seconds per transaction; lower is better.

Figure 11 shows that Windows NT and OS/2 have a poorer response time than Windows 95, Windows 3.11, or NetWare. To determine the cause, we closely compared Windows NT and Windows 95. First, we ran 10 concurrent pairs of the CREDITS script using Windows NT and Windows 95. We expected to see Windows NT have a faster average response time than Windows 95 because of NT's ability to handle multiple tasks better than 95. The results were the same as in the single-pair test. Next, we analyzed a line trace of Windows NT and Windows 95 using the CREDITS script. The line trace showed that the connection-setup processing for Windows 95 is consistently faster than the Windows NT connection-setup processing: Windows 95 took about 1 millisecond for connection setup, while Windows NT took between 2 and 3 milliseconds.

To validate this observation, we ran the CREDITL script between the five PC operating systems. The CREDITL script establishes a connection once and then sends and receives in the same manner as CREDITS. Running the test with CREDITL compares the data transfer without the connection overhead. As you can see in Figure 12, CREDITL improves on Windows NT, but not on OS/2. We see that connection setup is slower on Windows NT than on the other PC operating systems.

Figure 12: Response time with 100 Mbps Ethernet, using the CREDITL script. Units are shown in seconds per transaction; lower is better.
Performance Implications of the TCP Close

We've looked at performance influences of operating systems, protocol stacks, their tuning parameters, and the effects of application size parameters on data transfers. A surprising influence is the type of Close used for TCP connections. When short-connection TCP transactions are quickly repeated back-to-back, their transaction rate can significantly decrease over time, depending on how the connections are closed. TCP offers two flavors of close: normal and abortive. Abortive Close gives the most consistent performance for repeated short connections. To illustrate the differences between these, we shall discuss the network flows and internal operation of the TCP stacks.

TCP's Normal Close Operation

A Normal Close consists of the TCP close handshake shown in Figure 13. One program (here located at side A) initiates the close at the application level by issuing a Close on the socket. This causes the FIN flag to flow to the TCP stack at side B. The FIN flag is acknowledged by the TCP stack at side B. The application at side B then issues a Close of its own, which causes a FIN flag to flow back to side A. While side B is able to completely close the connection, side A is not. The connection is left in TIME_WAIT state in case the ACK is not received by side B and side B retransmits the FIN.

  side A                                           side B
  sockets   internal                  internal     sockets
  call      TCP state    TCP flag     TCP state    call
  close()   FIN_WAIT_1  ---- FIN --->  CLOSE_WAIT
            FIN_WAIT_2  <--- ACK ----
                        <--- FIN ----  LAST_ACK    close()
            TIME_WAIT   ---- ACK --->  CLOSED

Figure 13: The sockets calls, internal TCP states, and network flows for a Normal Close.

While running tests with the CREDITS script between side A and side B, we saw the performance degrade as the test ran, until it appeared to stabilize. Figure 14 was generated using the CREDITS script between two OS/2 computers. We saw similar results with other TCP stacks, although not as dramatic as exhibited by OS/2.
Figure 14: Using TCP's Normal Close, the number of CREDITS transactions per second decreased as more were run back-to-back.

The connection on side A is left in TCP's internal TIME_WAIT state, while the connection on side B is in CLOSED state. The connection on side A stays in TIME_WAIT state for two times the maximum segment lifetime (MSL). MSL is the maximum time a segment will live in the network. The reason for this is to be able to discard any stray segments that might still exist in the network, and to ensure that side B receives the ACK; there has to be allowance for side B not receiving the ACK and thus retransmitting the FIN.
As the number of connections in TIME_WAIT state increases, so does the processing overhead for each additional connection. Eventually connections are closing at the rate new connections are being started, and the performance levels off.

TCP's Abortive Close Operation

To achieve the best performance when running applications with short connections, it would be advantageous to close without leaving the connection in TIME_WAIT state. This is possible by using an Abortive Close, which causes an RST flag (reset) to flow across the network instead of a FIN flag. The RST flag immediately cleans up the connection. An Abortive Close is caused by setting the socket option SO_LINGER to 0 and then calling close.

This solution is only appropriate in situations where the two partners in a connection know that all data has been sent and received. When the partners know for sure no more data will be sent on the session, they can get rid of the connection. In the defined scripts we used, we know that when the disconnect verb of the script is encountered, all data transfer verbs have already been issued. We just need a mechanism for ensuring that the data has arrived and been processed. Otherwise, an Abortive Close following a SEND can reset the connection before all the data is received by the partner.

  side A                                            side B
  sockets      internal                  internal    sockets
  call         TCP state    TCP flag     TCP state   call
  recv()                                             send()
  ===== the close processing starts here =====
  recv()                                 FIN_WAIT_1  shutdown
  (returns 0)  CLOSE_WAIT  <--- FIN ----             (for write)
               ---- ACK --->             FIN_WAIT_2
  setsockopt
  SO_LINGER=0
  close()      CLOSED      ---- RST ---> CLOSED

Figure 15: The sockets calls, internal TCP states, and network flows for the Abortive Close.

To ensure all data was sent and received, we implemented the following algorithm in the close processing shown above. If a RECEIVE was performed last, our program issues a recv to wait for the close to be issued by the other side.
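The close sequence in Figure 15 can be sketched in Python on a loopback connection. This is our own illustration of the SO_LINGER mechanism the text describes (the helper names are ours; Chariot issues the equivalent C Sockets calls):

```python
import socket
import struct
import threading

def abortive_close(sock):
    """SO_LINGER with linger on and a timeout of 0 makes close() send an
    RST instead of a FIN, so this side skips TIME_WAIT entirely."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                    struct.pack("ii", 1, 0))
    sock.close()

def recv_side(listener, received):
    """The partner whose last script verb was RECEIVE: drain data until
    recv returns 0 bytes (the peer's FIN), then close abortively."""
    conn, _ = listener.accept()
    data = b""
    while True:
        chunk = conn.recv(4096)
        if not chunk:              # 0 bytes: the peer shut down its write side
            break
        data += chunk
    received.append(data)
    abortive_close(conn)           # RST flows; no TIME_WAIT on this side

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
received = []
t = threading.Thread(target=recv_side, args=(listener, received))
t.start()

# The partner whose last script verb was SEND: shutdown-for-write forces
# the FIN to flow, so the receiver knows all data arrived before the RST.
s = socket.create_connection(("127.0.0.1", listener.getsockname()[1]))
s.sendall(b"x" * 100)              # the transaction's data
s.shutdown(socket.SHUT_WR)         # FIN flows; our write side is closed
t.join()
s.close()
listener.close()
```

The shutdown-for-write followed by the recv-until-zero wait is what makes the reset safe: the RST is only issued once the receiver has seen all the data and the sender's FIN.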
When the recv returns 0 bytes received, it issues an Abortive Close, which causes an RST flag to flow and clears the connection on the other side. If a SEND was performed last, our program issues a Shutdown-for-write, which forces the FIN flag to flow. Figure 15 illustrates this process. Figure 16 is the graph Chariot produced when close was implemented as shown in Figure 15.

Figure 16: Using TCP's Abortive Close, the rate of CREDITS transactions remained constant as more were run.

Conclusions

We have shown that TCP provides a high level of performance and should be the protocol of choice over UDP for most applications. TCP performance is acceptable whenever the time for connection setup and takedown is trivial relative to the data transfer, such as in file transfers. If an application needs to avoid connection overhead, UDP can provide a performance gain. UDP can also be valuable for a specialized application that needs a reliable transport algorithm tuned for a specific type of data. Otherwise, the performance of TCP and UDP is almost identical when connection setup/takedown is factored out, regardless of LAN speed. We see that TCP provides very good performance and is easier for an application to use because of the built-in reliable transport.
When using 10 Mbps Ethernet, almost any operating system's TCP/IP stack is capable of using all the bandwidth with just one connection. When 100 Mbps Ethernet links are used, the performance difference between stacks can be easily seen and the choice of operating system becomes more important. This difference may become even more pronounced as faster networks become available.

We have two caveats regarding the above observations. First, as we tested internally with faster PCs, we saw a significant improvement in performance on 100 Mbps Ethernet. With Windows NT 4.0 on Pentium Pro 200 computers, we were able to attain over 80 Mbps throughput, compared to the 60 Mbps we saw with 166 MHz Pentiums. We have additional exploration to do in this area. Second, NetWare 4.11, which did the best at 100 Mbps in our tests, had a much higher default TCP receive window size, which we did not reduce. We need to explore further the effect of this tuning parameter.

We also looked at response time variation among operating systems. When connection setup/takedown was considered, Windows NT and OS/2 had the slowest times. When connection setup/takedown was ignored, Windows 95 and OS/2 were the slowest.

Finally, when writing programs that do short, repeated connections over TCP, we recommend using the Abortive Close algorithm that we presented for closing Sockets. The Abortive Close algorithm can significantly improve the performance consistency of short connections when using TCP.

We feel we have just scratched the surface in gaining an understanding of how these protocols behave on real computers and operating systems. All of the test results shown here involved just a single connection. As the number of concurrent connections is increased, we expect to learn more about the aggregate throughput and response times of these protocols. We also look forward to many lessons when intermediate devices, such as routers and switches, are included in the measurements.
Appendix A: Test Methodology

We ran our tests among a pool of computers on an isolated Ethernet network. Two identical computers were configured for testing the five PC operating systems: Dell Dimension XPS P166s, containing an Intel Pentium 166 MHz CPU, PCI bus, 32 MB EDO RAM, and a 3Com 3C905 Fast EtherLink 10/100 XL network adapter. The following five operating systems were loaded onto each of these two computers with the same hardware configuration:

- Microsoft Windows NT Server 4.0 (with Service Pack 3)
- Microsoft Windows 95
- Microsoft Windows for Workgroups 3.11 with Microsoft's TCP-32 stack (this stack was shipped with Microsoft's Windows NT 4.0 Server)
- IBM OS/2 Warp 4
- Novell NetWare 4.11

Three UNIX computers were used in the tests. Each had a 10 Mbps Ethernet adapter integrated with its motherboard:

- IBM RS/6000 PowerPC, machine type 7248, with 64 MB RAM, running AIX version 4.1
- HP model 712/60, with 32 MB RAM, running HP-UX version 9.0
- Sun SPARCstation 5, with 64 MB RAM, running Solaris version 2.4

The tests run on the Windows platforms, OS/2, and NetWare were run with identical computers as Endpoints 1 and 2. The tests performed with AIX, HP-UX, and Sun Solaris were run with Windows NT as Endpoint 1 and the UNIX computer as Endpoint 2.
Appendix B: Details of Test Scripts

Here are the details of the three application scripts we used in our tests: CREDITS, CREDITL, and FILESNDL. In scripts identified as long connection, a single connection is used for the entire script, no matter how many transactions are run. The time to start and stop the connections is not included in the timing results. For those identified as short connections, a new connection is started within each transaction. TCP and UDP have overhead associated with connection startup and takedown. Having these two variations of scripts lets you decide how much of the startup/takedown overhead to factor into performance measurements.

CREDITS Script: This version of the Credit Check transaction uses short connections. This is a quick transaction that simulates a series of credit approvals. A 100-byte record is sent from Endpoint 1. Endpoint 2 receives the record and sends back a one-byte confirmation.

Endpoint 1                        Endpoint 2
number_of_timing_records=50       number_of_timing_records=50
START_TIMER
transactions_per_record=25        transactions_per_record=25
CONNECT_INITIATE                  CONNECT_ACCEPT
SEND                              RECEIVE
  size_of_record_to_send=100        size_of_record_to_send=100
  send_buffer_size=default          receive_buffer_size=default
CONFIRM_REQUEST                   CONFIRM_ACKNOWLEDGE
DISCONNECT                        DISCONNECT
INCREMENT_TRANSACTION
END_TIMER

CREDITL Script: This version of the Credit Check transaction uses a long connection. This is a quick transaction that simulates a series of credit approvals. A 100-byte record is sent from Endpoint 1. Endpoint 2 receives the record and sends back a 1-byte confirmation.

Endpoint 1                        Endpoint 2
CONNECT_INITIATE                  CONNECT_ACCEPT
number_of_timing_records=50       number_of_timing_records=50
START_TIMER
transactions_per_record=25        transactions_per_record=25
SEND                              RECEIVE
  size_of_record_to_send=100        size_of_record_to_send=100
  send_buffer_size=default          receive_buffer_size=default
CONFIRM_REQUEST                   CONFIRM_ACKNOWLEDGE
INCREMENT_TRANSACTION
END_TIMER
DISCONNECT                        DISCONNECT
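The short-versus-long-connection distinction is easy to re-create outside Chariot. The sketch below (our own illustration, not Chariot's implementation) runs the same 100-byte-record, 1-byte-confirm transaction both ways against a loopback server: the CREDITS-style variant pays connection setup/takedown inside every transaction, while the CREDITL-style variant connects once before the timer starts.

```python
import socket
import threading
import time

RECORD = b"\x00" * 100   # 100-byte credit record, as in the scripts
CONFIRM = b"\x01"        # 1-byte confirmation

def serve(listener, transactions, per_connection):
    """Endpoint 2 role: receive a 100-byte record, send a 1-byte confirm."""
    done = 0
    while done < transactions:
        conn, _ = listener.accept()
        with conn:
            for _ in range(per_connection):
                need = len(RECORD)
                while need:                      # read the full record
                    chunk = conn.recv(need)
                    if not chunk:
                        return
                    need -= len(chunk)
                conn.sendall(CONFIRM)
                done += 1

def credits_short(addr, transactions):
    """CREDITS style: a new connection inside every timed transaction."""
    start = time.perf_counter()
    for _ in range(transactions):
        with socket.create_connection(addr) as s:
            s.sendall(RECORD)
            assert s.recv(1) == CONFIRM
    return time.perf_counter() - start

def creditl_long(addr, transactions):
    """CREDITL style: one connection for the run; connect time untimed."""
    with socket.create_connection(addr) as s:
        start = time.perf_counter()
        for _ in range(transactions):
            s.sendall(RECORD)
            assert s.recv(1) == CONFIRM
        return time.perf_counter() - start

if __name__ == "__main__":
    n = 25
    for name, runner, per_conn in (("CREDITS", credits_short, 1),
                                   ("CREDITL", creditl_long, n)):
        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.bind(("127.0.0.1", 0))
        listener.listen(5)
        t = threading.Thread(target=serve, args=(listener, n, per_conn))
        t.start()
        elapsed = runner(listener.getsockname(), n)
        t.join()
        listener.close()
        print(f"{name}: {n} transactions in {elapsed:.4f}s")
```

Even on loopback, the short variant is noticeably slower per transaction, since each transaction adds a three-way handshake and a connection teardown to the two timed data exchanges.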
FILESNDL Script: This transaction uses a long connection to simulate sending a file from Endpoint 1 to Endpoint 2 and getting a confirmation back. We used file_size values of both 100,000 and 1,460,000 bytes in the tests described here.

Endpoint 1                        Endpoint 2
CONNECT_INITIATE                  CONNECT_ACCEPT
number_of_timing_records=100      number_of_timing_records=100
START_TIMER
transactions_per_record=1         transactions_per_record=1
SEND                              RECEIVE
  file_size=                        file_size=
  send_buffer_size=default          receive_buffer_size=default
CONFIRM_REQUEST                   CONFIRM_ACKNOWLEDGE
INCREMENT_TRANSACTION
END_TIMER
DISCONNECT                        DISCONNECT

These are the values we used for the TCP Receive Window Size and the DEFAULT buffer sizes for the SEND and RECEIVE commands in the scripts (in bytes):

Operating System    TCP Receive    TCP send       TCP receive    UDP send       UDP receive
                    Window Size    buffer size    buffer size    buffer size    buffer size
Windows NT 4.0      8,760          32,767         32,767         8,183          8,183
Windows 95          8,760          4,096          32,767         8,183          8,183
WFW 3.11            8,760          4,096          32,767         8,183          8,183
OS/2 Warp 4         28,672         32,767         32,767         8,183          8,183
NetWare 4.11        32,768         32,767         32,767         8,183          8,183
AIX 4.1             8,760          32,767         32,767         8,183          8,183
HP-UX 9.0           8,760          32,767         32,767         8,183          8,183
Sun Solaris 2.4     8,760          32,767         32,767         8,183          8,183

Acknowledgments

Luke S. Zettlemoyer happily handled the large task of designing and running the tests, and coordinating all the data. We could not have completed this work without him. We also appreciate the excellent feedback we received from our reviewers: Peter J. Schwaller, Joel Solkoff, Anne Schick, and Carl Lewis.

An earlier version of this paper was presented at the Computer Measurement Group in 1997: John L. Wood, Christopher D. Selvaggi, and John Q. Walker II, "Testing the Performance of Multiple TCP/IP Stacks," Proceedings of CMG97, December 7-12, 1997, volume 1.

Copyright NetIQ Corporation
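An application can request equivalents of the send and receive buffer sizes above through the standard Sockets options SO_SNDBUF and SO_RCVBUF (the receive window a stack advertises is typically derived from the receive buffer). A minimal sketch using the Windows NT 4.0 values from the table; note that many stacks round or clamp the requested size (Linux, for instance, doubles it), so reading the value back after setting it is worthwhile:

```python
import socket

# TCP buffer sizes for Windows NT 4.0, taken from the table above (bytes)
TCP_SEND_BUFFER = 32767
TCP_RECV_BUFFER = 32767

def tuned_tcp_socket():
    """Create a TCP socket with explicitly requested buffer sizes."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, TCP_SEND_BUFFER)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, TCP_RECV_BUFFER)
    return s

if __name__ == "__main__":
    s = tuned_tcp_socket()
    # The kernel may adjust the request; report what was actually granted.
    print("SO_SNDBUF granted:", s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
    print("SO_RCVBUF granted:", s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
    s.close()
```

For the window size to take effect on the receiving side, the receive buffer must be set before the connection is established, since the window is advertised during the TCP handshake.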
Copyright Information

NetIQ Corporation provides this document "as is" without warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability or fitness for a particular purpose. Some states do not allow disclaimers of express or implied warranties in certain transactions; therefore, this statement may not apply to you.

This document and the software described in this document are furnished under a license agreement or a non-disclosure agreement and may be used only in accordance with the terms of the agreement. This document may not be lent, sold, or given away without the written permission of NetIQ Corporation. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, or otherwise, without the prior written consent of NetIQ Corporation. Companies, names, and data used in this document are fictitious unless otherwise noted.

This document could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein. These changes may be incorporated in new editions of the document. NetIQ Corporation may make improvements in and/or changes to the products described in this document at any time.

© 2000 NetIQ Corporation, all rights reserved.

U.S. Government Restricted Rights: Use, duplication, or disclosure by the Government is subject to the restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause of the DFARs and FAR (c) and any successor rules or regulations.
AppManager, the AppManager logo, AppAnalyzer, Knowledge Scripts, Work Smarter, NetIQ Partner Network, the NetIQ Partner Network logo, Chariot, Pegasus, Qcheck, OnePoint, the OnePoint logo, OnePoint Directory Administrator, OnePoint Resource Administrator, OnePoint Exchange Administrator, OnePoint Domain Migration Administrator, OnePoint Operations Manager, OnePoint File Administrator, OnePoint Event Manager, Enterprise Administrator, Knowledge Pack, ActiveKnowledge, ActiveAgent, ActiveEngine, Mission Critical Software, the Mission Critical Software logo, Ganymede, Ganymede Software, the Ganymede logo, NetIQ, and the NetIQ logo are trademarks or registered trademarks of NetIQ Corporation or its subsidiaries in the United States and other jurisdictions. All other company and product names mentioned are used only for identification purposes and may be trademarks or registered trademarks of their respective companies.
Outline 15-441 Computer Networking Lecture 8 TCP & Congestion Control TCP connection setup/data transfer TCP Reliability Congestion sources and collapse Congestion control basics Lecture 8: 09-23-2002
More informationpco.interface GigE & USB Installation Guide
pco.interface GigE & USB Installation Guide In this manual you find installation instructions for the GigE Vision and USB2.0 interface on Microsoft Windows platforms. Target Audience: This camera is designed
More informationUsing NetIQ Security and Administration Products to Ensure HIPAA Compliance March 25, 2002. Contents
Using NetIQ Security and Administration Products to Ensure HIPAA Compliance March 25, 2002 Contents HIPAA Overview...1 NetIQ Products Offer a HIPAA Solution...2 HIPAA Requirements...3 How NetIQ Security
More information