RAID-x: A New Distributed Disk Array for I/O-Centric Cluster Computing
Kai Hwang, Hai Jin (University of Southern California) and Roy Ho (The University of Hong Kong)
Email: {kaihwang, hjin}@ceng.usc.edu

Abstract
A new RAID-x (redundant array of inexpensive disks at level x) architecture is presented for distributed I/O processing on a serverless cluster of computers. The architecture is based on a new concept of orthogonal striping and mirroring (OSM) across all distributed disks in the cluster. The primary advantages of this OSM approach lie in: (1) a significant improvement in parallel I/O bandwidth, (2) hiding of the disk mirroring overhead in the background, and (3) greatly enhanced scalability and reliability in cluster computing applications. All claimed advantages are substantiated with benchmark performance results on the Trojans cluster built at USC in 1999. Throughout the paper, we discuss the issues of scalable I/O performance, enhanced system reliability, and striped checkpointing on a distributed RAID-x in a serverless cluster environment.

1. Introduction
In a serverless cluster of computers, no central server is used. The conventional client-server architecture does not apply in this class of multicomputer clusters. Instead, all client hosts in the cluster divide the server functions in a distributed manner. Often, a serverless cluster applies a distributed RAID architecture, which is built over dispersed disks physically attached to the client hosts. This paper presents a new RAID-x architecture for high-bandwidth, distributed I/O processing on serverless clusters. In early RAID architectures [5], all disks were densely packaged in the same rack under centralized control. Centralized RAIDs are often attached to a storage server or deployed as a network-attached I/O subsystem. The TickerTAIP project [4] offered a parallel RAID architecture for supporting parallel disk I/O with multiple controllers. Our paper deals with distributed RAID architectures.
The distributed RAID concept was explored by Stonebraker and Schloss [27]. Prototyping of distributed RAIDs started with the Petal project [19] at Digital and the Tertiary Disk project [28] at UC Berkeley. This paper reports the architecture and performance of a new distributed RAID-x system, built in the USC Trojans cluster project. The level x in RAID-x is yet to be assigned by the RAID Advisory Board [24]. To build a distributed RAID, one must establish three capabilities: (i) a single I/O space (SIOS) for all disks in the cluster, (ii) high scalability, availability, and compatibility with current cluster architectures and applications, and (iii) local and remote disk I/O operations performed with comparable latency. These requirements

Table 1. Research Projects on Parallel and Distributed RAIDs

USC RAID-x [15]:
- RAID architecture and environment: orthogonal striping and mirroring over a Linux cluster
- Enabling mechanism for SIOS: cooperative device drivers in the Linux kernel
- Data consistency maintenance: locks at the device driver level
- Reliability implementation: OSM and striped checkpointing

Princeton TickerTAIP [4]:
- RAID architecture and environment: RAID-5 with multiple controllers as a centralized subsystem
- Enabling mechanism for SIOS: a single server implements the SIOS
- Data consistency maintenance: sequencing of user requests
- Reliability implementation: parity checks in RAID-5

Digital Petal [19][29]:
- RAID architecture and environment: chained declustering in a Unix cluster
- Enabling mechanism for SIOS: Petal device drivers at the user level
- Data consistency maintenance: the Frangipani file system
- Reliability implementation: striped mirroring

Berkeley Tertiary Disk [28]:
- RAID architecture and environment: a disk array built with a Solaris PC cluster
- Enabling mechanism for SIOS: xfs storage servers at the file system level
- Data consistency maintenance: locks in the xfs file system
- Reliability implementation: SCSI disks with parity checks
imply a total transparency to the users, who can utilize all disks without knowing the physical locations of the data blocks. The development of the RAID-x architecture was inspired by several projects summarized in Table 1. The enabling mechanisms for SIOS are quite different among these distributed RAID architectures. Petal achieves the SIOS with user-level device drivers. The Tertiary Disk uses the xfs file system [1] to yield the SIOS. We realize the SIOS with cooperative device drivers at the Linux kernel level. The four RAID architectures also differ in their handling of the data consistency problem in order to establish a global file hierarchy. In our design, locks are implemented in the device drivers at the Linux kernel level. Furthermore, reliability is supported by different mechanisms in the four I/O subsystems.

2. Orthogonal Striping and Mirroring
In addition to the projects listed in Table 1, our new RAID-x architecture is influenced by the following RAID projects: the AFRAID [26], the AutoRAID [31], and chained declustering [13]. Our RAID-x differs from the above distributed RAID architectures in mainly two aspects. First, the RAID-x is built with a new disk mirroring technique, called orthogonal striping and mirroring (OSM). The small write problem associated with RAID-5 is completely eliminated in this OSM approach. Second, we have developed cooperative disk drivers (CDD) to implement the OSM at the kernel level. The CDD maintains data consistency directly, without using NFS or inter-space Unix system calls. Figure 1 shows the architecture of the RAID-x (Fig.1a) along with the chained declustering RAID (Fig.1b). The original data blocks are denoted as Bi, and the corresponding mirrored blocks are labeled as Mi. With 4 disks in Fig.1a, each mirroring group involves 3 consecutive disk blocks, such as (M0, M1, M2) for data blocks (B0, B1, B2). Different mirroring groups are distinguished with different shadings of the disk blocks.
Data blocks in the RAID-x (Fig.1a) are striped across all disks on the top half of the disk array, behaving like a RAID-0. The image blocks (such as M0, M1, M2) are clustered vertically in the same disk (Disk 3). For a stripe group of blocks, the image blocks are saved on exactly two disks, occupying the lower half of the disk array. The clustered images in a mirroring group are updated simultaneously in the background. This results in lower latency and higher bandwidth when using the RAID-x. The new OSM concept combines the advantages of RAID-10 and chained declustering. The RAID-x completely avoids the small write problem associated with the RAID-5. The orthogonal mapping implies that no data block and its image are ever mapped to the same disk. For a large write, the data blocks are written in parallel to all disks in the stripe simultaneously. The image blocks are gathered as one long block and written into a single disk. Table 2 presents the expected peak performance of four RAID architectures. Let n be the number of disks in the RAID subsystem, B the maximum bandwidth per disk, and m the number of blocks in a file. R and W refer to the average block read and write times, respectively. The RAID-x shows the same bandwidth potential as RAID-0 and chained declustering. It can tolerate single-disk failures, the same as RAID-5. The RAID-x improves on the chained declustering RAID mainly in parallel write operations. For a large array size, the improvement factor approaches two. Thus the RAID-x performs much better in write operations than the other three RAID architectures in Table 2.

Figure 1. Disk mirroring schemes: (a) orthogonal striping and mirroring (OSM) in the RAID-x; (b) skewed mirroring in a chained declustering RAID
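The OSM placement rule described above (stripe data blocks across all n disks; cluster the images of each group of n-1 consecutive blocks on the one disk that holds none of those data blocks) can be sketched as a small address-mapping function. This is an illustrative model of the layout in Fig.1a, not the authors' kernel code; the function name and structure are ours.

```python
def osm_layout(num_blocks, n):
    """Map data blocks and their mirror images onto n disks using
    orthogonal striping and mirroring (OSM).

    Data blocks are striped RAID-0 style across all n disks (the top
    half of the array); the images of each group of n-1 consecutive
    blocks are clustered on the single disk that holds none of those
    data blocks (the bottom half).  Returns two dicts mapping block
    id -> disk id, one for data and one for images.
    """
    data, image = {}, {}
    for b in range(num_blocks):
        data[b] = b % n                # striped across all disks
        g = b // (n - 1)               # mirroring group index
        image[b] = (n - 1 - g) % n     # the disk left free by group g
    return data, image

data, image = osm_layout(9, 4)
# Orthogonality: a block and its image never share a disk.
assert all(data[b] != image[b] for b in data)
# Images of group 0 (blocks 0, 1, 2) are clustered on Disk 3, as in Fig.1a.
assert [image[b] for b in (0, 1, 2)] == [3, 3, 3]
```

The assertion reproduces the example in the text: with 4 disks, data blocks (B0, B1, B2) reside on Disks 0-2, so their images (M0, M1, M2) cluster on Disk 3.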
Table 2. Expected Performance of Four RAID Architectures
(n: number of disks; B: maximum bandwidth per disk; m: number of blocks in a file; R and W: average block read and write times)

Maximum I/O bandwidth:
- Read: RAID-x nB; RAID-5 (n-1)B; RAID-10 nB; chained declustering nB
- Large write: RAID-x nB; RAID-5 (n-1)B; RAID-10 nB; chained declustering nB
- Small write: RAID-x nB; RAID-5 nB/4; RAID-10 nB; chained declustering nB

Parallel read or write time:
- Large read: RAID-x mR/n; RAID-5 mR/(n-1); RAID-10 mR/n; chained declustering mR/n
- Small read: R for all four architectures
- Large write: RAID-x mW/n; RAID-5 mW/(n-1); RAID-10 mW/n; chained declustering mW/n + mW/(n-1)
- Small write: RAID-x W; RAID-5 R+W; RAID-10 W; chained declustering W

Maximum fault coverage: RAID-x and RAID-5 recover from a single disk failure; RAID-10 and chained declustering tolerate up to n/2 disk failures

3. Trojans Cluster and Experiments
At the USC Internet and Cluster Computing Laboratory, a prototype Linux PC cluster was built with Pentium II processors running Redhat Linux with kernel 2.2.5. The PC engines are connected by a 100 Mbps Fast Ethernet switch. This Linux cluster has been operational since Fall 1999. At present, each node is attached with a local disk, and all the disks in the cluster form a single I/O space. Figure 2 shows the front view of the prototype Trojans cluster. The cluster is connected to the Internet over fiber links. More information about the Trojans project is available at the Web site: http://andy.usc.edu/trojan/.

Figure 3 shows a two-dimensional RAID-x architecture with 3 disks attached to each node. We call this a 4 x 3 disk array configuration. All disks within the same stripe group, such as (B0, B1, B2, B3), are accessed in parallel. Consecutive stripe groups, such as (B0, B1, B2, B3), (B4, B5, B6, B7), and (B8, B9, B10, B11), can be accessed in a pipelined fashion, because they are retrieved from disk groups attached to the same SCSI buses.

Figure 2. The Trojans cluster built in 1999 at the USC Internet and Cluster Computing Lab.
Figure 3. A 4 x 3 RAID-x architecture with orthogonal striping and mirroring (P: processor, M: memory, CDD: cooperative disk driver, Dj: the j-th disk, Bi: the i-th data block, Bi': the i-th mirrored image in a shaded block)
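The striped block placement of Fig.3 can be sketched as a small addressing function. This is an illustrative sketch assuming blocks are numbered sequentially and striped repeatedly across all n*k disks, as described in the text; the names are ours, not the driver's.

```python
def nk_address(i, n, k):
    """Locate data block i in an n-by-k disk array (n nodes, each with
    k disks), with blocks striped sequentially and repeatedly across
    all n*k disks as in Fig.3.

    Returns (node, disk, offset): `node` is the host serving the block
    (degree of parallelism n); consecutive stripe groups land on
    different disks of the same node, giving a pipelining depth of k.
    """
    node = i % n                  # position within the stripe group
    group = (i // n) % k          # which of the node's k disks
    disk = group * n + node       # global disk id D0 .. D(nk-1)
    offset = i // (n * k)         # block offset within that disk
    return node, disk, offset

# Stripe group (B0..B3) spreads over D0..D3, one disk per node:
assert [nk_address(i, 4, 3)[1] for i in range(4)] == [0, 1, 2, 3]
# The next group (B4..B7) uses D4..D7, so the two groups can be pipelined:
assert [nk_address(i, 4, 3)[1] for i in range(4, 8)] == [4, 5, 6, 7]
```

After one full sweep of the 12 disks, block B12 wraps back to disk D0 at the next offset, matching the repeated striping in the figure.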
In general, an n-by-k RAID-x configuration has a stripe group involving n disk blocks residing on n disks. Each mirroring group has n-1 blocks residing on one disk. The images of all data blocks in the same stripe group are saved on exactly two disks. The block addressing scheme stripes across all nk disks sequentially and repeatedly. The parameter n represents the degree of parallelism in accessing the disks. The parameter k implies the depth of pipelining. Tradeoffs do exist between these two concepts. Both concepts will be applied to implement a new checkpointing scheme for the RAID-x in Section 6.

4. Cooperative Disk Drivers
The single I/O space (SIOS) is crucial to building a scalable cluster of computers. A loosely coupled cluster uses distributed disks driven by different hosts independently. The independent disk drivers handle distinct I/O address spaces. Without the SIOS, remote disk I/O must be done by a sequence of time-consuming system calls through a centralized file server (such as the use of NFS) across the cluster network. The CDDs, on the other hand, work together to establish the SIOS across all physically distributed disks. Once the SIOS is established, all disks are used collectively as a single global virtual disk, as shown in Fig.4a. Each node perceives the illusion that it has several physical disks attached locally. Figure 4b shows the internal design of a CDD. Each CDD is essentially made from three working modules. The storage manager receives and processes the I/O requests from remote client modules. The client module redirects local I/O requests to remote disk managers. Data consistency problems arise when multiple cluster nodes have cached copies of the same set of data blocks. The xfs approach and the Frangipani approach maintain data consistency at the file system level. In our design, data consistency is maintained at the driver level: the consistency module is responsible for maintaining data consistency among the distributed disks.
A CDD can be configured to run as a storage manager, as a client, or as both at the same time. The three possible states of each CDD are: (1) a manager coordinating the use of the local disk storage by remote nodes, (2) a client accessing remote disks through remote disk managers, and (3) both of the above functions. The OSM scheme outperforms the chained declustering scheme mainly in parallel write operations. The RAID-x scheme demonstrates scalable I/O bandwidth with much reduced latency in a cluster environment. Both Petal and Tertiary Disk achieve the SIOS at the user level. We achieve the SIOS at the Linux kernel level. Using the CDDs, the cluster can be built serverless and offers remote disk access directly at the kernel level. Parallel I/O is made possible on any subset of local disks, because all distributed disks form a SIOS. No heavy cross-space system calls are needed to perform remote file access. A device masquerading technique is adopted here: multiple CDDs run cooperatively to redirect I/O requests to remote disks. Our approach simplifies the design and implementation of distributed file management services. Data consistency is maintained by all CDDs with higher speed and efficiency at the data block level. We introduced a special lock-group table for developing distributed file management services. Each record in this table corresponds to a group of data blocks that have been granted to a specific CDD client with write permissions. The write locks in each record are granted and released atomically. This lock-group table is replicated among the data consistency modules in the CDDs, which guarantees that file management operations are performed atomically.
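The lock-group table described above can be modeled at user level in a few lines of Python. This is an illustrative sketch, not the in-kernel data structure: the class and method names are ours, and a single mutex stands in for whatever synchronization the real consistency modules use.

```python
import threading

class LockGroupTable:
    """Toy model of the CDD lock-group table: each record grants one
    client write locks on a group of data blocks, and grants and
    releases are atomic (all blocks of the group, or none)."""

    def __init__(self):
        self._mutex = threading.Lock()
        self._owner = {}   # block id -> client id holding the write lock

    def grant(self, client, blocks):
        """Atomically grant write locks on all `blocks` to `client`;
        fail without side effects if any block is already locked."""
        with self._mutex:
            if any(b in self._owner for b in blocks):
                return False
            for b in blocks:
                self._owner[b] = client
            return True

    def release(self, client, blocks):
        """Atomically release a record previously granted to `client`."""
        with self._mutex:
            for b in blocks:
                if self._owner.get(b) == client:
                    del self._owner[b]

table = LockGroupTable()
assert table.grant("node0", [4, 5, 6])    # whole record granted at once
assert not table.grant("node1", [6, 7])   # overlaps block 6: refused
table.release("node0", [4, 5, 6])
assert table.grant("node1", [6, 7])       # now the group is free
```

The all-or-nothing grant is what lets higher-level file management operations built on such records appear atomic across the cluster.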
Figure 4. Single I/O space in RAID-x built at the Linux kernel level: (a) a global virtual disk with a SIOS formed by cooperative disk drivers across the cluster nodes; (b) the CDD architecture, in which the storage manager, client module, and data consistency module communicate through the interconnection network
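The device-masquerading redirection of Fig.4 can be sketched as follows. This is a deliberately simplified user-level model, not the kernel drivers: the class, the block-to-node mapping, and the in-memory "disks" are all our own illustrative assumptions.

```python
class CooperativeDiskDriver:
    """Toy model of CDD request redirection (device masquerading):
    every node sees one global virtual disk; the client module serves
    locally owned blocks itself and redirects the rest to the storage
    manager of the owning node."""

    def __init__(self, node_id, peers, blocks_per_node):
        self.node_id = node_id
        self.peers = peers                    # node id -> CDD instance
        self.blocks_per_node = blocks_per_node
        self.local_store = {}                 # block id -> data (stand-in disk)

    def owner(self, block):
        # Simplified ownership rule: fixed-size contiguous ranges per node.
        return block // self.blocks_per_node

    def read(self, block):
        node = self.owner(block)
        if node == self.node_id:              # storage-manager path
            return self.local_store.get(block)
        return self.peers[node].read(block)   # client-module redirect

    def write(self, block, data):
        node = self.owner(block)
        if node == self.node_id:
            self.local_store[block] = data
        else:
            self.peers[node].write(block, data)

peers = {}
for i in range(4):
    peers[i] = CooperativeDiskDriver(i, peers, blocks_per_node=100)

peers[0].write(250, b"hello")       # node 0 transparently writes node 2's disk
assert peers[2].local_store[250] == b"hello"
assert peers[3].read(250) == b"hello"
```

Every node issues the same read/write calls regardless of where a block physically lives, which is the illusion the SIOS provides; in the real system the redirect crosses the cluster network at the kernel level rather than a Python method call.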
5. Benchmark Performance Results
In this section, we report the experimental results obtained in parallel disk I/O and Andrew benchmark experiments. The experiments were designed to test the scalability of different RAID architectures with respect to an increasing number of client requests or an increasing number of disks in each disk array configuration implemented on the Trojans cluster. We show the aggregate I/O bandwidth and Andrew benchmark results, followed by a scalability analysis.

5.1. I/O bandwidth vs. request number
Figure 5 shows the performance of four I/O subsystem architectures. For large read and large write, each client accesses a large file striped across all disks in the array. The test is therefore truly focused on the parallel I/O capability of the disk array. All files are uncached and each client reads only its own private file. All reads are performed simultaneously, synchronized with the MPI_Barrier() command. In the case of a small read or small write, 3 KB of data is accessed in one block of the stripe group. The results for small read are very close to those for large read. For parallel writes, the RAID-x achieves the best scalability among the four, reaching 15.3 MB/s at the maximum number of clients. In contrast, the RAID-5 scales slowly due to the heavy overhead of parity calculations. The RAID-10 scales better than the RAID-5. Table 3 shows the improvement factor of the maximum client count over a single client on the USC Trojans cluster. The RAID-x demonstrates the highest improvement factor among the RAID architectures: almost a three-fold increase in I/O bandwidth was observed on the RAID-x as the number of I/O clients increased.
Figure 5. Aggregate I/O bandwidth of three RAID architectures compared with the RAID-x on the Trojans cluster: (a) large read; (b) small read (3 KB per client); (c) large write; (d) small write (3 KB per client)
Table 3. Achievable I/O Bandwidth and Improvement Factor on the Trojans Cluster (for each of the four I/O subsystems: the bandwidth measured with 1 client, the bandwidth at the maximum number of clients, and the resulting improvement factor, for large read, large write, and small write)

5.2. Andrew benchmark results
The Andrew benchmark is a popular benchmark for testing the performance of a file system. In this experiment, the Andrew benchmark was executed on four I/O subsystems with respect to an increasing number of client requests. The performance is indicated by the elapsed time in executing the Andrew benchmark on the target I/O subsystem. These tests demonstrate how the underlying storage structures affect the performance of the file system being supported. Figure 6 shows the benchmark results for the four I/O subsystems. The NFS shows poor performance, especially in reading files, scanning directories, and copying files. The other three RAID architectures do not share this weakness. The elapsed time to copy files on the RAID-5 increases with the number of clients. This is mainly due to the small write problem associated with the RAID-5, because most files copied in the Andrew benchmark are small in size.

Figure 6. Elapsed times of the Andrew benchmark phases (make dir, copy files, scan dir, read file, compile) on the Trojans cluster with the NFS, RAID-5, RAID-10, and RAID-x configurations
The RAID-x shows a slow increase in the elapsed time in all five phases of the Andrew benchmark. The overall execution time on the RAID-x is shorter than on either the RAID-5 or the RAID-10 for all client counts tested. Compared with the NFS, the RAID-x shows an even larger improvement in Andrew benchmark performance.

6. Striped Checkpointing on RAID-x
The parallel I/O capability of the distributed RAID-x can be applied to achieve fast checkpointing in the cluster system. A striped checkpointing scheme was introduced in [23]. This scheme takes advantage of the striping feature of a RAID system. Simultaneous writing by multiple processes may cause network contention and an I/O bottleneck at a central stable storage. To alleviate the network contention, Vaidya [32] proposed a staggered writing scheme for centralized storage. However, that scheme cannot solve the I/O bottleneck of accessing a central stable storage. We solve both problems by distributing data blocks and their mirrored images orthogonally. Figure 7 shows the concept of striped staggering in coordinated checkpointing on the RAID-x disk array. Successive stripes are written in a staggered manner to successive disk groups, as demonstrated in Fig.3.

Figure 7. Striped checkpointing with staggering on a distributed RAID-x (C: checkpointing overhead; S: synchronization overhead)

Staggering implies pipelined accesses of the disk array. As shown in [23], there exists a trade-off between striped parallelism and staggering depth. The layout in Fig.7 can be reconfigured from a 4 x 3 array to a 6 x 2 array, if pipelined access shows less advantage. Using the OSM, each striped checkpoint file has its mirrored image on its local disk. For each node, transient failures can be recovered from the mirrored image on the local disk. Permanent failures can be recovered from the striped checkpoints.
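The staggered striped schedule of Fig.7 can be sketched as a simple partitioning of processes into time slots. This is an illustrative model under our own assumptions (one stripe group of processes writes per slot, groups taken in order); it is not the coordination protocol itself, which also involves the synchronization barriers shown as S in the figure.

```python
def staggered_schedule(num_procs, stripe_width):
    """Illustrative schedule for striped checkpointing with staggering:
    processes are partitioned into stripe groups of `stripe_width`,
    and group g writes its checkpoint stripe in time slot g.  Only one
    group hits the disks at a time, so the n-way parallelism within a
    group is preserved while contention between groups is avoided.

    Returns a list of time slots, each a list of process ids writing
    in that slot.
    """
    slots = []
    for g in range(0, num_procs, stripe_width):
        slots.append(list(range(g, min(g + stripe_width, num_procs))))
    return slots

# 12 processes in stripes of 4: three staggered slots, as in Fig.7.
assert staggered_schedule(12, 4) == [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]
```

Reconfiguring the trade-off between parallelism and staggering depth amounts to changing `stripe_width`: wider stripes shorten each slot but deepen contention, narrower stripes do the opposite.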
The proposed RAID-x architecture can recover from any single disk failure in each stripe group. The total number of tolerable disk failures depends on the number of stripe groups being accessed. For the 4 x 3 array in Fig.3, up to 3 disk failures in 3 stripe groups can be tolerated.

7. Conclusions
The new RAID-x architecture shows its strength in building distributed, high-bandwidth I/O storage for serverless PC or workstation clusters. The RAID-x is unique in its OSM architecture, which exploits full-stripe bandwidth similar to what a RAID-0 can provide. Its reliability comes from clustered mirroring on local disks, combined with orthogonal striping across distributed disks. We have developed new disk drivers at the Linux kernel level to support an efficient implementation of the OSM. These device drivers enable remote disk access and parallel disk I/O without using a central file server. The RAID-x matches the RAID-5 in reliability; both are capable of recovering from single disk failures. The I/O performance of the RAID-x is experimentally proven superior to that of the RAID-5 and the RAID-10, as evidenced by the measured I/O bandwidth on the Linux cluster at USC. For parallel reads with the maximum number of active clients, the RAID-x achieved 9.7 MB/s throughput, 1.5 and 3.7 times higher than the two competing RAID configurations. For small writes, the RAID-x achieved 9 MB/s, 3 times higher than the RAID-5. Running the Andrew benchmark, the RAID-x results in a 17% cut in elapsed time compared with a RAID-5 or a RAID-10 configuration. The achieved I/O bandwidth corresponds to 7% of the limit of the Fast Ethernet. Linux extensions and reliable middleware streamline the single I/O space, shared virtual memory, global file management, and distributed checkpointing in cluster operations. The new RAID-x design is shown to be highly scalable with distributed control. In the next phase of the Trojans project, we will develop a distributed file system with I/O load-balancing capabilities, along with an enlarged prototype of several hundred disks on a much larger Trojans cluster.
In addition to implementing the RAID-x, RAID-5, and RAID-10 configurations, we will also consider other configurations, such as chained declustering. Scalable I/O bandwidth makes the RAID-x especially appealing to I/O-centric cluster applications, such as biological sequence analysis, collaborative engineering design, secure E-commerce and data mining, specialized digital libraries, and distributed multimedia processing.
References
[1] T. Anderson, M. Dahlin, J. Neefe, D. Patterson, D. Roselli, and R. Wang, "Serverless Network File Systems," ACM Trans. on Computer Systems, Feb. 1996.
[2] R. Buyya (ed.), High Performance Cluster Computing, Prentice Hall PTR, New Jersey, 1999.
[3] L. F. Cabrera and D. E. Long, "Swift: Using Distributed Disk Striping to Provide High I/O Data Rates," USENIX Computing Systems, Fall 1991.
[4] P. Cao, S. B. Lim, S. Venkataraman, and J. Wilkes, "The TickerTAIP Parallel RAID Architecture," ACM Trans. on Computer Systems, Vol. 12, No. 3, Aug. 1994.
[5] P. M. Chen, E. K. Lee, G. A. Gibson, R. H. Katz, and D. A. Patterson, "RAID: High-Performance, Reliable Secondary Storage," ACM Computing Surveys, June 1994.
[6] M. Dahlin, R. Wang, T. Anderson, and D. Patterson, "Cooperative Caching: Using Remote Client Memory to Improve File System Performance," Proc. of Operating Systems Design and Implementation (OSDI), 1994.
[7] I. Foster and C. Kesselman (eds.), The Grid: Blueprint for a New Computing Infrastructure, Morgan Kaufmann, 1998.
[8] I. Foster, D. Kohr, Jr., R. Krishnaiyer, and J. Mogill, "Remote I/O: Fast Access to Distant Storage," Proc. of the Fifth Annual Workshop on I/O in Parallel and Distributed Systems, Nov. 1997.
[9] G. Gibson, D. Nagle, K. Amiri, F. Chang, H. Gobioff, E. Riedel, D. Rochberg, and J. Zelenka, "A Cost-Effective, High-Bandwidth Storage Architecture," Proc. of the 8th Conf. on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 1998.
[10] J. H. Hartman, I. Murdock, and T. Spalink, "The Swarm Scalable Storage System," Proc. of the 19th IEEE Int'l Conf. on Distributed Computing Systems (ICDCS '99), June 1999.
[11] J. H. Hartman and J. K. Ousterhout, "The Zebra Striped Network File System," ACM Trans. on Computer Systems, Vol. 13, No. 3, Aug. 1995.
[12] J. H. Howard, et al., "Scale and Performance in a Distributed File System," ACM Trans. on Computer Systems, Feb. 1988.
[13] H. I. Hsiao and D. DeWitt, "Chained Declustering: A New Availability Strategy for Multiprocessor Database Machines," Proc. of the 6th Int'l Conf. on Data Engineering, 1990.
[14] Y. Hu and Q. Yang, "DCD - Disk Caching Disk: A New Approach for Boosting I/O Performance," Proc. of the 23rd Int'l Symp. on Computer Architecture (ISCA), 1996.
[15] K. Hwang, H. Jin, et al., "Reliable Cluster Computing with a New Checkpointing RAID-x Architecture," Proc. of the 9th Heterogeneous Computing Workshop (HCW 2000), Cancun, Mexico, May 2000.
[16] K. Hwang, H. Jin, E. Chow, C. L. Wang, and Z. Xu, "Designing SSI Clusters with Hierarchical Checkpointing and Single I/O Space," IEEE Concurrency, March 1999.
[17] K. Hwang and Z. Xu, Scalable Parallel Computing, McGraw-Hill, New York, 1998.
[18] H. Jin and K. Hwang, "Striped Mirroring Disk Array," Journal of Systems Architecture, Elsevier Science, 2000.
[19] E. K. Lee and C. A. Thekkath, "Petal: Distributed Virtual Disks," Proc. of the Seventh Int'l Conf. on Architectural Support for Programming Languages and Operating Systems (ASPLOS), Oct. 1996.
[20] W. B. Ligon and R. B. Ross, "An Overview of the Parallel Virtual File System," Proc. of the 1999 Extreme Linux Workshop, June 1999.
[21] J. Nieplocha, I. Foster, and H. Dachsel, "Distant I/O: One-Sided Access to Secondary Storage on Remote Processors," Proc. of the Symp. on High Performance Distributed Computing (HPDC-7), 1998.
[22] N. Nieuwejaar and D. Kotz, "Performance of the Galley Parallel File System," Proc. of the Fourth Workshop on I/O in Parallel and Distributed Systems, Philadelphia, May 1996.
[23] W. Ro and K. Hwang, "Striped and Staggered Checkpointing on Distributed RAID," Technical Report, University of Southern California.
[24] RAID Advisory Board, The RAIDbook, Seventh Edition.
[25] R. Sandberg, D. Goldberg, S. Kleiman, D. Walsh, and B. Lyon, "Design and Implementation of the Sun Network Filesystem," Proc. of the USENIX Conference, June 1985.
[26] S. Savage and J. Wilkes, "AFRAID - A Frequently Redundant Array of Independent Disks," Proc. of the 1996 USENIX Technical Conference, Jan. 1996.
[27] M. Stonebraker and G. A. Schloss, "Distributed RAID - A New Multiple Copy Algorithm," Proc. of the Sixth Int'l Conf. on Data Engineering, Feb. 1990.
[28] N. Talagala, S. Asami, D. Patterson, and K. Lutz, "Tertiary Disk: Large Scale Distributed Storage," UC Berkeley Technical Report UCB//CSD-98-989.
[29] C. A. Thekkath, T. Mann, and E. K. Lee, "Frangipani: A Scalable Distributed File System," Proc. of the ACM Symp. on Operating Systems Principles (SOSP), Oct. 1997.
[30] N. H. Vaidya, "A Case for Two-Level Distributed Recovery Schemes," Proc. of the ACM Int'l Conf. on Measurement and Modeling of Computer Systems (SIGMETRICS '95), 1995.
[31] J. Wilkes, R. Golding, C. Staelin, and T. Sullivan, "The HP AutoRAID Hierarchical Storage System," ACM Trans. on Computer Systems, Vol. 14, No. 1, Feb. 1996.
[32] N. H. Vaidya, "Staggered Consistent Checkpointing," IEEE Trans. on Parallel and Distributed Systems, Vol. 10, No. 7, 1999.