Monitoring Network QoS in a Dynamic Real-Time System




Hong Chen, Brett Tjaden, Lonnie Welch, Carl Bruggeman, Lu Tong
School of Electrical Engineering and Computer Science, Ohio University, Athens, OH 45701
{hc52787, tjaden, welch, bruggema, lt356587}@ohio.edu

Barbara Pfarr
Real-Time Software Engineering Branch, NASA GSFC/Code 584, Greenbelt, MD 20771
Barbara.B.Pfarr@gsfc.nasa.gov

Abstract

This paper presents our design and tests of a real-time network monitoring program for DeSiDeRaTa, an existing resource management system. The monitor assists DeSiDeRaTa in maintaining an acceptable quality of service (QoS) for groups of real-time applications by reporting the communication delays caused by inadequate network bandwidth. The network monitoring application we developed uses SNMP and network topology information gleaned from the DeSiDeRaTa application specification files. The network bandwidth utilization of each real-time communication path is computed, and experiments have been run to demonstrate the accuracy of these measurements.

1. Introduction

Significant work has been done on resource management (RM) of distributed real-time computer systems. Such systems usually contain a large number of computers connected by one or more computer networks. Maintaining adequate quality of service (QoS) for real-time applications requires the resource management middleware to manage not only computing resources but also network resources. In many cases, proper management of network resources is vital for delivering adequate QoS in a dynamic real-time system. A large amount of data communication between the computers, together with improper use of network resources, can lead to congestion and delay, and ultimately to a QoS violation. To properly manage network resources, the resource management middleware must monitor their performance. This paper focuses on a technique for monitoring network resources.
It extends the work of DeSiDeRaTa, an existing resource management middleware solution for dynamic, scalable, dependable, real-time systems [1]. (This work was funded by the NASA Office of Earth Science, NASA Research Announcement 99-OES-8.) DeSiDeRaTa assumed that network resources were never a bottleneck. Our network monitoring program provides real-time network performance information to the resource management middleware and helps the middleware detect and diagnose potential QoS violations due to a sub-optimal allocation of network resources.

To obtain network performance information, the network resource monitoring program must know the topology of the computer network. DeSiDeRaTa includes a specification language with which to specify the components of a real-time system. For our work, we have extended the specification language to include network-related information about the real-time system, such as computer hosts, network devices, network interfaces, and network connections. Our network monitoring program obtains the topology and connectivity of the real-time system from this specification file. The monitor implements an algorithm that traverses the communication path between hosts based on the topology information in the specification file, and calculates the bandwidth (both available and used) between pairs of hosts. This real-time network performance information is obtained by periodically querying network components with the Simple Network Management Protocol (SNMP) to gather performance data from hosts and network devices. Combining the SNMP query results with the network topology information, bandwidth statistics are then calculated.

The outline of the remainder of this paper is as follows. Section 2 presents some background on SNMP and the DeSiDeRaTa resource manager. Section 3 discusses the algorithms and implementation details of our approach. Section 4 provides some preliminary test results for our network monitor. Section 5 presents our conclusions and gives some thoughts on future work.

0-7695-1573-8/02/$17.00 (C) 2002 IEEE

2. Background

2.1. Network Monitoring

We considered many different network monitoring techniques as the basis for our work. SNMP is a traditional technology that has been widely used for network monitoring [2-4]. Other techniques, such as Java [5,6], agent technology [7], and switch monitoring [8], were also considered. We chose SNMP for its simplicity and comprehensiveness.

2.2. SNMP

The Simple Network Management Protocol (SNMP) [9] provides a basic network-management tool for TCP/IP-based environments. SNMP defines the structure of a hierarchically organized database that is stored in a series of network components. Each network component contains information such as the services offered, the device's routing table, and a wide variety of statistics. This information can be accessed through a local Management Information Base (MIB) [10] that specifies the structure of the database. More information about SNMP and the MIB can be found in [11]. Our network monitor uses the SNMP protocol and the interface table of the MIB to obtain data communication statistics from each SNMP-enabled network component. The interface table provides a static data transmission rate as well as counters for the data and packets sent and received through each network interface. Real-time network information is obtained by periodically polling the SNMP agents on each device (e.g., hosts, switches, routers). This data can then be used to determine the amount of bandwidth used and available for the network component. By combining these metrics (see section 3.3), we compute the available and used bandwidth of a real-time path.

2.3. Resource Management and Network Topology

The work in this paper is part of the effort to build DeSiDeRaTa, adaptive resource management middleware for dynamic, scalable, dependable real-time systems [1]. The middleware performs QoS monitoring and failure detection, QoS diagnosis, and reallocation of resources to adapt the system to achieve acceptable levels of QoS.
The current version of the DeSiDeRaTa middleware manages only computational resources and assumes that no QoS violation is caused by network delays. Our work extends DeSiDeRaTa to allow the management of network resources. We developed an application that performs network QoS monitoring and provides the middleware with metrics about data communication, enabling the middleware to manage resources based on these network metrics and the network QoS specification. To obtain the network metrics, the network monitoring software must be able to discover the network topology and combine it with SNMP data from each network component. Network topology discovery is usually difficult due to the complexity of computer networks. In the DeSiDeRaTa environment, this problem can be solved easily using some of the RM infrastructure already in place. The resource management middleware must know the details of the hardware systems and all the software applications under its control. A specification language was developed to describe such information about the hardware and software systems, including network connections and interfaces. The network monitoring software obtains this network information from the specification files and constructs the network topology graph for the system.

3. Methodology

3.1. SNMP Polling

The goal of the network-monitoring program is to obtain real-time network bandwidth usage information. SNMP information can be polled from an SNMP-enabled host or network device (i.e., one with an SNMP daemon running). Table 1 shows some of the MIB-II (the second version of the MIB) objects accessed during our periodic SNMP polling. The data transmitted through an interface in both directions, as well as the static bandwidth, can be obtained using these MIB-II objects. Because the polling results are cumulative counters, the data must be polled periodically, and the old value subtracted from the new one to determine statistics for the polling interval.
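The periodic polling and counter differencing described above can be sketched as follows. The object identifiers are the standard MIB-II interface-table OIDs; the sampled counter values are illustrative, not taken from the paper's experiments.

```python
# Standard MIB-II OIDs for the kinds of objects the monitor polls (RFC 1213).
MIB2_OIDS = {
    "sysUpTime":   ".1.3.6.1.2.1.1.3",
    "ifSpeed":     ".1.3.6.1.2.1.2.2.1.5",
    "ifInOctets":  ".1.3.6.1.2.1.2.2.1.10",
    "ifOutOctets": ".1.3.6.1.2.1.2.2.1.16",
}

def interval_rates(prev, curr):
    """Per-second rates over one polling interval.

    prev/curr are successive SNMP samples: 'sysUpTime' is in hundredths
    of a second; the remaining keys are cumulative counters, so each
    rate is (new value - old value) / interval."""
    dt = (curr["sysUpTime"] - prev["sysUpTime"]) / 100.0  # seconds
    if dt <= 0:
        raise ValueError("non-increasing sysUpTime (agent restarted?)")
    return {k: (curr[k] - prev[k]) / dt for k in curr if k != "sysUpTime"}

# Two illustrative samples taken 10 seconds (1000 hundredths) apart.
poll1 = {"sysUpTime": 500_000, "ifInOctets": 1_000_000, "ifOutOctets": 400_000}
poll2 = {"sysUpTime": 501_000, "ifInOctets": 1_020_000, "ifOutOctets": 410_000}
rates = interval_rates(poll1, poll2)
print(rates["ifInOctets"])   # 2000.0 octets/second received
print(rates["ifOutOctets"])  # 1000.0 octets/second transmitted
```

Dividing by the sysUpTime delta, rather than by the nominal polling period, absorbs scheduling jitter in the poller, which matters for the accuracy discussion in section 4.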
The time interval between two polls can be found using the system uptime data. The data transmission rate, in packets and bytes per unit time, can then be calculated.

Table 1. MIB-II objects used in network monitoring [10].

  system.sysUpTime (.1.3.6.1.2.1.1.3): The time (in hundredths of a second) since the network management portion of the system was last re-initialized.
  interfaces.ifTable.ifEntry.ifSpeed (.1.3.6.1.2.1.2.2.1.5): An estimate of the interface's current bandwidth in bits per second (static bandwidth).
  interfaces.ifTable.ifEntry.ifInOctets (.1.3.6.1.2.1.2.2.1.10): Accumulated number of octets received on the interface.
  interfaces.ifTable.ifEntry.ifInUcastPkts (.1.3.6.1.2.1.2.2.1.11): Accumulated number of subnetwork-unicast packets delivered to a higher-layer protocol.
  interfaces.ifTable.ifEntry.ifOutOctets (.1.3.6.1.2.1.2.2.1.16): Accumulated number of octets transmitted out of the interface.
  interfaces.ifTable.ifEntry.ifOutUcastPkts (.1.3.6.1.2.1.2.2.1.17): The total number of packets that higher-level protocols requested to be transmitted to a subnetwork-unicast address.

3.2. Network Topology and Specification Language

The network-monitoring program was built using a local area network (LAN) model. The topology of such networks is modeled using hosts (or network devices), network interfaces, and network connections. Figure 1 shows the LAN topology model. Each host or network device has one or more network interfaces. For example, in the figure, hosts A, C, and E each have a single connection (interface) to the network, while B and D have multiple interfaces. B and D can be hosts with multiple network connections, or network devices such as switches or hubs. A network connection is specified as a pair of interfaces that are physically connected to each other. In this model, the connection must be 1-to-1, i.e., one interface may only be connected to one interface on another host/device.

Figure 1. Schematic diagram of network connections. (Hosts A, C, and E each have a single network interface; B and D have multiple interfaces.)

Figure 2 shows the pseudo code for the data structures that specify the network topology. A host/device is specified by its name, a list of all network interfaces on the host, and other host information. The interfaces are distinguished by their unique local names. A network connection is specified as two host-interface pairs, which give the two ends of the connection. Finally, the network topology can be described as a list of all the hosts/devices and all the network connections among them. The specification file is parsed, and the related network topology information is then passed to the network-monitoring program.

Utilizing the DeSiDeRaTa specification language is a straightforward way to obtain the network topology. Pure network discovery is not feasible in the DeSiDeRaTa environment, because the resource management middleware has to know exactly what resources are under its control, which requires at least some level of specification of resources. A hybrid approach may be a better solution in the future; due to its complexity, however, we chose the simpler solution using the specification language for this stage of the research. A new extension to the DeSiDeRaTa specification language [12] was developed to describe the topology of the network resources of the real-time system under the control of the resource management middleware. The network topology is defined using the model described above.

3.3. Path Traversal and Bandwidth Calculation

Based on the information from the specification language, the communication path between two hosts can be traversed. A simple recursive algorithm traverses the path, with the necessary infinite-loop detection implemented. The resulting path is described as a series of network connections as defined in the previous section. To calculate the available bandwidth between two hosts, one has to know the bandwidth of each connection in the path. The available bandwidth of the whole path is simply the minimum of the individual available bandwidths. Assume a communication path consists of n network connections, and the available bandwidth of connection i is a_i (i = 1, 2, ..., n). Then the available bandwidth of the path is A = min(a_1, a_2, ..., a_n). For each individual connection, the available bandwidth a_i is the difference between the maximum bandwidth m_i and the used bandwidth u_i: a_i = m_i - u_i. A measure of m_i can be obtained directly through SNMP polling, while u_i must be computed by the network monitoring program.
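The traversal and minimum-bandwidth rule of section 3.3 can be sketched as below. The adjacency-list topology and the numeric bandwidths are illustrative stand-ins for what the monitor derives from the specification file and from SNMP polling.

```python
def find_path(topology, src, dst, visited=None):
    """Recursive traversal of the connection graph with loop detection.
    topology maps each host/device name to the set of its neighbors."""
    if visited is None:
        visited = {src}
    if src == dst:
        return [src]
    for nbr in sorted(topology.get(src, ())):
        if nbr not in visited:              # infinite-loop detection
            visited.add(nbr)
            rest = find_path(topology, nbr, dst, visited)
            if rest is not None:
                return [src] + rest
    return None

def available_bandwidth(path, max_bw, used_bw):
    """A = min over connections i of (m_i - u_i) along the path."""
    hops = list(zip(path, path[1:]))        # consecutive host pairs
    return min(max_bw[h] - used_bw[h] for h in hops)

# Illustrative topology in the style of the paper's test bed (Figure 3).
topo = {"S1": {"switch"}, "switch": {"S1", "hub"},
        "hub": {"switch", "N1"}, "N1": {"hub"}}
path = find_path(topo, "S1", "N1")          # ['S1', 'switch', 'hub', 'N1']

m = {("S1", "switch"): 100e6, ("switch", "hub"): 10e6, ("hub", "N1"): 10e6}
u = {("S1", "switch"): 4e6, ("switch", "hub"): 4e6, ("hub", "N1"): 4e6}
print(available_bandwidth(path, m, u))      # 6000000.0; the hub side is the bottleneck
```

Because the path bandwidth is a minimum over its connections, a single slow or congested link (here the hub side) determines the whole path's available bandwidth.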
It is relatively easy to calculate the used bandwidth of a host connected to a switch, because a switch does not forward packets destined for one host to the other hosts connected to the same switch. Hence, the amount of bandwidth used on a host connected to a switch is simply the amount of data transmitted as reported by SNMP polling of either the host or the switch. If the traffic reported is t_i, then we simply have u_i = t_i; the traffic t_j of another connection (j != i) does not affect u_i. For hosts connected to hubs, however, all packets that go through the hub are sent to every host connected to the hub. Therefore, the amount of bandwidth used for a host connected to a hub is the sum of all the data sent to the hub, not just the data sent to this particular host. Assume there are n hosts connected to the hub and the traffic reported by SNMP polling for each host is t_i (i = 1, 2, ..., n). Then u_i = t_1 + t_2 + ... + t_n. Note that u_i cannot exceed the maximum speed of the hub. An algorithm was implemented in our network monitoring program to distinguish these two cases and calculate the available bandwidth accordingly. The available bandwidth between two hosts is calculated by traversing the communication path between them and computing the minimum value.

Figure 2. Specification of network topology (pseudo code).

  Host {                              // data structure of a host or network device
      host_name;                      // name of the host
      LinkedList interfaces;          // list of interfaces of the host
      ...                             // other host information
  }
  Interface {                         // data structure of a network interface
      localname;                      // local name of the interface
      ...                             // other interface information
  }
  HostPairConnection {                // data structure of a network connection
      Host host1;  Interface interface1;
      Host host2;  Interface interface2;
  }
  NetworkTopology {                   // data structure of network topology
      LinkedList hosts;                     // list of all hosts
      LinkedList hostPairConnections;       // list of all connections
  }

4. Experiments and Results

4.1. Experimental Setup

The experiments described below were performed in the Laboratory for Intelligent Real-time Secure Systems (LIRTSS) at Ohio University. The network is a LAN with one 100 Mbps switch and one 10 Mbps hub. As shown in Figure 3, one Linux machine (L), two Solaris 7 machines (S1, S2), and four other machines (S3-S6) are connected to the switch. Two Windows NT machines (N1 and N2) are connected to the hub, which in turn is connected to the switch. Our network monitoring program ran on the Linux machine L. SNMP daemons were available on L, N1, N2, S1, S2, and the switch at the time of the experiments. Such an arrangement is sufficient for monitoring the bandwidth between any pair of hosts in the system. For example, even though there is no SNMP daemon on either S4 or S5, the bandwidth between S4 and S5 can still be monitored by polling the interfaces on the switch that are connected to S4 and S5.

Figure 3. Layout of the LAN test bed. (The network monitor runs on the Linux host L; S1 and S2 are Solaris 7 hosts; L, S1-S6 connect to the 100 Mbps switch; the Windows NT hosts N1 and N2 connect to the 10 Mbps hub, which connects to the switch. SNMP daemons run on L, N1, N2, S1, S2, and the switch.)

4.2. Network Load Generator

To test our network monitoring program, a simple network load generator program was developed. It sends data streams to a designated host at a given speed. The data are sent as UDP packets to the DISCARD port (UDP port number 9) on the host. The real speed of the traffic generated by the program is slightly higher than the specified value due to the extra bytes of the UDP and IP headers and acknowledgements. The size of this overhead varies with the speed of the data traffic, but it is small compared to the amount of data sent in our experiments.

4.3. Experiment Results

Preliminary experimental results are presented in this section. Communication paths through both the hub and the switch were tested, and the end-to-end bandwidth observed by our network monitoring program was compared to the expected values.

4.3.1. Dynamically varying network load. A set of experiments observed the network traffic between a Windows NT machine, N1, and a Solaris 7 machine, S1. The communication path between these two hosts was computed by our program using the network topology information from the specification file. The path that the data followed was: S1 - switch - hub - N1 (Figure 3).
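A load generator of the kind described in section 4.2 can be sketched as follows. This is a minimal stand-in for the authors' program, not their implementation: it paces fixed 1024-byte UDP datagrams toward the DISCARD port, and the host address in the usage comment is a hypothetical example.

```python
import socket
import time

DISCARD_PORT = 9  # UDP DISCARD service, as used by the paper's load generator

def send_load(host, kbytes_per_sec, duration_sec, port=DISCARD_PORT):
    """Send roughly kbytes_per_sec of UDP payload to host:port for duration_sec.

    Returns the number of datagrams sent. UDP/IP header overhead makes the
    on-wire rate slightly higher than the payload rate, as noted in sec. 4.2."""
    payload = b"\x00" * 1024
    interval = 1.0 / kbytes_per_sec          # one 1-Kbyte datagram per interval
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sent = 0
    deadline = time.monotonic() + duration_sec
    try:
        while time.monotonic() < deadline:
            sock.sendto(payload, (host, port))
            sent += 1
            time.sleep(interval)
    finally:
        sock.close()
    return sent

# e.g. send_load("192.0.2.10", 200, 20)  # ~200 Kbytes/s for 20 s (hypothetical host)
```

Sending to the DISCARD port keeps the receiver passive, so the generated stream exercises only the network path being measured.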
To check the correctness of the network bandwidth usage reported by our network monitoring program, network traffic was generated from L to N1 using the network load generator. Starting at 100 Kbytes/second for 120 seconds, we increased the amount of data sent by the load generator by 100 Kbytes/second every 60 seconds. After 360 seconds, the load generator was sending 500 Kbytes/second from L to N1. The entire load was eliminated at 420 seconds (Figure 4a).

Figure 4. Experiment results for our network monitoring program. (a) Pattern of the traffic load generated by the load generator (L to N1). (b) Measured traffic between S1 and N1 according to our network monitoring program.

The output of our network monitor exhibits a pattern similar to the actual amount of data being transmitted (Figure 4b). The reported used bandwidth is slightly larger than the generated traffic load because of background traffic on the network and the extra data in the packet headers. The fluctuations and spikes seen in Figure 4b are caused by the fluctuation of data transmission over the network and a slight delay in SNMP polling. Table 2 gives statistics for the measured results. The background traffic was calculated as the average of the values measured at zero generated load. The average traffic for each generated load was obtained by subtracting this background from the average of the measured traffic. The average measured load, less background, was about 4% larger than the generated load. Part of the difference is due to the packet headers of the generated traffic: the IP and UDP headers, in a system with a 1,500-byte MTU, contribute about 2%, and the other 2% may come from the traffic caused by the SNMP queries and acknowledgements. Table 2 also shows the maximum percentage error of individual measured values. The largest error (16%) was caused by delays in SNMP polling: occasionally, some data bytes are counted in a later SNMP message instead of an earlier one, resulting in an abnormally small value followed by an abnormally large one.

Table 2. Statistics of measured traffic load (Kbytes/second).

  Generated load   Avg. measured load   Avg. load less background   % error   Max. % error
  100              105.463              103.639                     3.64%     16.4%
  200              209.338              207.514                     3.76%     8.4%
  300              312.512              310.688                     3.56%     6.6%
  400              418.280              416.456                     4.11%     5.1%
  500              519.278              517.454                     3.49%     5.7%

  (Background traffic, measured at zero generated load: 1.824 Kbytes/second.)

4.3.2. Hosts connected by a hub. A hub forwards data packets to all the connected hosts, not just the one for which a packet is destined. This affects the bandwidth of every host connected to a hub whenever data is sent to any host on the same hub. Our monitoring program accounts for this by summing the traffic through a hub when computing the amount of bandwidth used on any communication path through the hub. An experiment was run to monitor the amount of bandwidth used by the two Windows NT machines connected to the hub. Data was sent from the Linux machine to the two NT machines as shown in Figures 5a-b. We started with no data being sent to either NT machine. After 20 seconds, we began to send 200 Kbytes/second from L to N1. 20 seconds later, we began to send 200 Kbytes/second from L to N2. After another 20 seconds, the traffic from L to N1 was reduced to zero, and 20 seconds later the traffic from L to N2 was also eliminated. The observed traffic load for the two paths (S1-N1, S1-N2) was as we expected.

Figure 5. Experiment results for hub-connected hosts. (a-b) Patterns of the traffic loads generated by the load generator (L to N1, L to N2). (c-d) Measured traffic between hosts (S1-N1, S1-N2) according to our network monitoring program.
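The two cases the monitor distinguishes in section 3.3 (switch: u_i = t_i; hub: u_i = t_1 + ... + t_n, capped at the hub's maximum speed) can be sketched as below; the traffic figures are illustrative.

```python
def used_bandwidth(device, traffic, hub_speed=None):
    """Per-host used bandwidth for a shared device, per section 3.3.

    traffic maps each host to the traffic SNMP reports for its connection.
    A switch forwards only destined packets, so each host sees only its own
    traffic; a hub repeats every packet on every port, so each host sees the
    total traffic through the hub, capped at the hub's maximum speed."""
    if device == "switch":
        return dict(traffic)                            # u_i = t_i
    if device == "hub":
        total = min(sum(traffic.values()), hub_speed)   # u_i = t_1 + ... + t_n
        return {host: total for host in traffic}
    raise ValueError("unknown device type: %s" % device)

t = {"N1": 200_000, "N2": 200_000}                      # bytes/s, illustrative
print(used_bandwidth("switch", t))        # each host sees only its own 200000
print(used_bandwidth("hub", t, hub_speed=1_250_000))    # both hosts see 400000
```

The 1,250,000 bytes/s cap corresponds to a 10 Mbps hub; without the cap, summing per-host counters could report more used bandwidth than the hub can physically carry.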
The same statistical analysis as in the previous section shows a 3.7% error on the average values of the measured traffic (less background), with a maximum individual error of 7.8%.

4.3.3. Hosts connected by a switch. A switch forwards packets only to the host for which they are destined, not to all the hosts connected to the switch. Therefore, our monitoring program treats switches differently than hubs: the traffic through a switch is not summed; instead, only the traffic going to and from a particular host is considered when computing the amount of bandwidth being used. An experiment was performed on hosts connected by the switch. Traffic between S1 and S2, and between S1 and S3, was measured by our program. All three machines are Solaris machines connected to the switch. 2,000 Kbytes/second of traffic was generated at times 20-60, 40-80, and 100-120 seconds from L to S2, S3, and S1, respectively (Figures 6a-c). As shown in Figures 6d-e, the load sent to S2 is seen only between S1 and S2, and the load sent to S3 appears only between S1 and S3, while the load sent to S1 is present in both paths because S1 has only one connection to the switch. The statistical analysis shows a 2.2% error on the average values of the measured traffic (less background), with a maximum individual error of 7.8%. The smaller percentage error on the average values is due to the much larger volume of traffic generated compared to the previous experiment.

Figure 6. Experiment results for switch-connected hosts. (a-c) Patterns of the traffic loads generated by the load generator (L to S2, L to S3, L to S1). (d-e) Measured traffic between hosts (S1-S2, S1-S3) according to our network monitoring program.

5. Conclusions and Future Work

This paper has presented our approach to monitoring network QoS in a dynamic real-time system. Our network monitoring program uses SNMP and network topology information, and computes the network bandwidth usage of a communication path in real time. Different algorithms are used for host pairs connected by different network devices (hubs and switches). The results of experiments in a LAN environment show the effectiveness and correctness of our program. Future work includes measurement of network latency, network QoS violation detection, dynamic network topology discovery, and distributed network monitoring.

6. References

[1] B. Ravindran, L. Welch, and B. Shirazi, "Resource management middleware for dynamic, dependable real-time systems," Real-Time Systems, 20(2), March 2001, 183-196.
[2] G. Mansfield, M. Murata, K. Higuchi, K. Jayanthi, B. Chakraborty, Y. Nemoto, and S. Noguchi, "An SNMP-based expert network management system," IEICE Transactions on Communications, E75-B(8), August 1992.
[3] M.R. Siegl and G. Trausmuth, "Hierarchical network management: a concept and its prototype in SNMPv2," Computer Networks and ISDN Systems, 28(4), February 1996, 441-452.
[4] M. Hegde, M.K. Narana, and A. Kumar, "netmon: An SNMP-based network performance monitoring tool for packet data networks," IETE Journal of Research, 46(1-2), January-April 2000, 5-25.
[5] M. Leppinen, P. Pulkkinen, and A. Rautiainen, "Java- and CORBA-based network management," Computer, 30(6), June 1997.
[6] J.O. Lee, "Enabling network management using Java technologies," IEEE Communications Magazine, 38(1), January 2000.
[7] D. Gavalas, D. Greenwood, M. Ghanbari, and M. O'Mahony, "Advanced network monitoring applications based on mobile/intelligent agent technology," Computer Communications, 23(8), April 2000, 720-730.
[8] D. Romascanu and I.E. Zilbershtein, "Switch monitoring: the new generation of monitoring for local area networks," Bell Labs Technical Journal, 4(4), October-December 1999, 42-54.
[9] J. Case, M. Fedor, M. Schoffstall, and J. Davin, "A Simple Network Management Protocol (SNMP)," RFC 1157, May 1990.
[10] K. McCloghrie and M. Rose, "Management Information Base for network management of TCP/IP-based internets: MIB-II," RFC 1213, March 1991.
[11] W. Stallings, SNMP, SNMPv2, and CMIP: The Practical Guide to Network-Management Standards, Addison-Wesley, 1993.
[12] L. Tong, C. Bruggeman, B. Tjaden, H. Chen, and L.R. Welch, "Specification and Modeling of Network Resources in Dynamic, Distributed Real-time Systems," 14th International Conference on Parallel and Distributed Computing Systems (PDCS 2001), Dallas, Texas, August 8-10, 2001.