Fault Simulation and Response Compaction in Full Scan Circuits Using HOPE

Sunil R. Das, Life Fellow, IEEE, Chittoor V. Ramamoorthy, Life Fellow, IEEE, Mansour H. Assaf, Member, IEEE, Emil M. Petriu, Fellow, IEEE, Wen-Ben Jone, Senior Member, IEEE, and Mehmet Sahinoglu, Senior Member, IEEE

Abstract: This paper presents results on fault simulation and response compaction on ISCAS 89 full scan sequential benchmark circuits using HOPE, a fault simulator developed for synchronous sequential circuits that employs parallel fault simulation with heuristics to reduce simulation time, in the context of designing space-efficient support hardware for built-in self-testing of very large-scale integrated circuits. The techniques realized in this paper take advantage of the basic ideas of sequence characterization previously developed and utilized by the authors for response data compaction in the case of ISCAS 85 combinational benchmark circuits, using the simulation programs ATALANTA, FSIM, and COMPACTEST, under conditions of both stochastic independence and dependence of single and double line errors in the selection of specific gates for merger of a pair of output bit streams from a circuit under test (CUT). These concepts are then applied to designing efficient space compression networks for full scan sequential benchmark circuits using the fault simulator HOPE.

Index Terms: Built-in self-test (BIST), circuit under test (CUT), detectable error probability estimates, fault simulation using HOPE, Hamming distance, optimal sequence mergeability, response compaction, sequence weights, single stuck-line faults, space compactor.

Manuscript received November 11, 2003; revised December 7, 2004. This work was supported in part by the Natural Sciences and Engineering Research Council of Canada under Grant A 4750. S. R. Das is with the School of Information Technology and Engineering, University of Ottawa, Ottawa, ON K1N 6N5, Canada, and with the Department of Computer and Information Science, Troy State University-Montgomery, Montgomery, AL 36103 USA. C. V. Ramamoorthy is with the Department of Electrical Engineering and Computer Sciences, Computer Science Division, University of California, Berkeley, CA 94720 USA. M. H. Assaf and E. M. Petriu are with the School of Information Technology and Engineering, University of Ottawa, Ottawa, ON K1N 6N5, Canada. W.-B. Jone is with the Department of Electrical and Computer Engineering and Computer Science, University of Cincinnati, Cincinnati, OH 45221 USA. M. Sahinoglu is with the Department of Computer and Information Science, Troy State University-Montgomery, Montgomery, AL 36103 USA. Digital Object Identifier 10.1109/TIM.2005.858102

Fig. 1. Block diagram of the BIST environment.

Fig. 2. Simulation results of the ISCAS 89 full scan sequential benchmark circuits using HOPE under stochastic independence of single and double line errors using compacted input test sets.

I. INTRODUCTION

WITH the continued growth of the semiconductor industry and the development of extremely complex systems with higher levels of integration density, finding better and more efficient methods of testing that ensure reliable operation of chips, a mainstay of today's many sophisticated digital systems, has become the single most pressing issue for design and test engineers. The very concept of testing has a broad applicability, and finding highly effective test techniques that
guarantee correct system performance has been gaining importance [1]–[57]. Consider, for example, medical test and diagnostic instruments, airplane controllers, and other safety-critical systems that have to be tested before use (off-line testing) and during use (on-line testing). Another application where failure can have severe economic consequences is real-time transaction processing. The testing process in all these circumstances must be fast and effective to make sure that such systems operate correctly. In general, the cost of testing integrated circuits (ICs) is rather prohibitive; it ranges from 35% to 55% of their total manufacturing cost [7]. Besides, testing a chip is also time consuming, taking up to about one-half of the total design cycle time [8]. The amount of time available for manufacturing, testing, and marketing a product, on the other hand, continues to decrease. Moreover, as a result of global competition, customers demand lower cost and better quality products. Therefore, in
Fig. 3. Simulation results of the ISCAS 89 full scan sequential benchmark circuits using HOPE under stochastic independence of single and double line errors using compacted input test sets.

Fig. 4. Simulation results of the ISCAS 89 full scan sequential benchmark circuits using HOPE under stochastic independence of single and double line errors using pseudorandom testing.

Fig. 5. Simulation results of the ISCAS 89 full scan sequential benchmark circuits using HOPE under stochastic independence of single and double line errors using pseudorandom testing.

Fig. 6. Simulation results of the ISCAS 89 full scan sequential benchmark circuits using HOPE under stochastic dependence of single and double line errors using compacted input test sets.

Fig. 7. Simulation results of the ISCAS 89 full scan sequential benchmark circuits using HOPE under stochastic dependence of single and double line errors using compacted input test sets.

Fig. 8. Simulation results of the ISCAS 89 full scan sequential benchmark circuits using HOPE under stochastic dependence of single and double line errors using pseudorandom testing.

order to achieve this superior quality at lower cost, testing techniques need to be improved. The conventional testing techniques for digital circuits require the application of test patterns generated by a test pattern generator (TPG) to the circuit under test (CUT) and comparison of the responses produced with known correct circuit responses. However, for large circuits, because of the higher storage requirements for the fault-free responses, such test procedures become very expensive, and hence alternative approaches are sought to minimize the amount of needed storage. Built-in self-testing (BIST) is a design methodology that provides the capability of solving many of the problems otherwise encountered in conventional testing of digital systems. It combines the concepts of
both built-in test (BIT) and self-test (ST) in one.

TABLE I. FAULT COVERAGE FOR ISCAS 89 BENCHMARK CIRCUITS USING HOPE (COMPACTED INPUT TEST SETS)

Fig. 9. Simulation results of the ISCAS 89 full scan sequential benchmark circuits using HOPE under stochastic dependence of single and double line errors using pseudorandom testing.

In BIST, test generation, test application, and response verification are all accomplished through built-in hardware, which allows different parts of a chip to be tested in parallel, thereby reducing the required testing time, besides eliminating the need for external test equipment. As the cost of testing is becoming the major component of the manufacturing cost of a new product, BIST thus tends to reduce manufacturing, test, and maintenance costs through improved diagnosis. Several companies such as Motorola, AT&T, IBM, AMD, and Intel have incorporated BIST in many of their products [10], [12], [19]–[21]. AT&T, for example, has incorporated BIST into more than 200 of its chips. The three large programmable logic arrays and the microcode read-only memory (ROM) in the Intel 80386 microprocessor were built-in self-tested [56]. The general-purpose microprocessor chip Alpha AXP 21164 and the Motorola 68020 microprocessor were also tested using BIST techniques [12], [56]. More recently, Intel, for its Pentium Pro architecture microprocessor, with its unique requirements of meeting very high production goals, superior performance standards, and impeccable test quality, put strong emphasis on its design-for-test (DFT) direction [21]. A set of constraints, however, limits Intel's ability to tenaciously explore DFT and test generation techniques, i.e., full or partial scan or scan-based BIST [4]. AMD's K6 processor is built around a reduced instruction set computer (RISC) core named the enhanced RISC86 microarchitecture [20]. The K6 processor incorporates BIST into its DFT process. Each RAM array of the K6 processor has its
own BIST controller. BIST executes simultaneously on all of the arrays for a predefined number of clock cycles that ensures completion for the largest array. Hence, BIST execution time depends on the size of the largest array [4].

TABLE II. FAULT COVERAGE FOR ISCAS 89 BENCHMARK CIRCUITS USING HOPE (RANDOM TESTING; INITIAL RANDOM NUMBER GENERATOR SEED = 999)

AMD uses a commercial automatic test pattern generation tool to create scan test patterns for stuck-at faults in their processor. The DFT framework for a 500-MHz IBM S/390 microprocessor utilizes a wide range of tests and techniques to ensure superb reliability of components within a system [4]. Register arrays are tested through the scan chain, while nonregister memories are tested with programmable RAM BIST. Hewlett-Packard's PA8500 is a 0.25-μm superscalar processor that achieves fast but thorough testing through its cache test hardware's ability to perform March tests, an effective way to detect several kinds of functional faults [19]. Digital's Alpha 21164 processor combines both structured and ad hoc DFT solutions, for which a combination of hardware and software BIST was adopted [4]. Sun Microsystems' UltraSPARC processor incorporates several DFT constructs as well; achieving its quality and performance goals with reduced chip area had to be reconciled with the design requirement of being easy to debug, test, and manufacture [4]. BIST is also widely used to test embedded regular structures that exhibit a high degree of periodicity, such as memory arrays (SRAMs, ROMs, FIFOs, and registers). These types of circuits do not require complex extra hardware for test generation and response compaction. Also, including BIST in these circuits can guarantee high fault coverage with zero aliasing. Unlike regular circuits, random-logic circuits cannot be adequately tested with BIST techniques alone, since generating adequate on-chip test sets using simple hardware is a difficult task. Moreover, since test responses generated by random-logic circuits seldom exhibit regularity, it is extremely difficult to ensure zero-aliasing compaction. Therefore, random-logic circuits are most usually tested using a combination of BIST, scan design techniques, and external test equipment. A typical BIST environment, as shown in the block diagram representation of Fig. 1, uses a test pattern generator (TPG) that sends its outputs to a circuit under test (CUT), and output streams from the CUT are fed into a test data analyzer. A fault is detected if the observed response sequence differs from the response of the fault-free circuit. The test data analyzer comprises a response compaction unit (RCU), storage for the fault-free responses of the CUT, and a comparator. In order to reduce the amount of data represented by the fault-free and faulty CUT responses, data compression is used to create signatures (short binary sequences) from the CUT and its corresponding fault-free
circuit. Signatures are compared, and a fault is detected if a match does not occur.

TABLE III. FAULT COVERAGE FOR ISCAS 89 BENCHMARK CIRCUITS USING HOPE (COMPACTED INPUT TEST SETS)

BIST techniques may be used during normal functional operating conditions of the unit under test (on-line testing), as well as when a system is not carrying out its normal functions (off-line testing). In cases where detecting real-time errors is not critical, systems, boards, and chips can be tested in off-line BIST mode. BIST techniques use pseudorandom or pseudoexhaustive TPGs, or on-chip storage of reduced test sets. These days, testing logic circuits exhaustively is seldom done, since only a few test patterns are needed to ensure full fault coverage for single stuck-line faults [12]. Reduced test sets can be generated using existing algorithms such as FAN and others. Built-in test generators can often generate such reduced test sets at low cost, making BIST techniques suitable for on-chip self-testing. The primary concern of the current paper is the general response compaction process of built-in self-testing techniques, which translates into a process of reducing the test response from the CUT to a signature. Instead of comparing the fault-free responses bit-by-bit to the observed outputs of the CUT as in conventional testing methods, the observed signature is compared to the correct one, thereby reducing the storage needed for the correct circuit responses. The response compaction in BIST is carried out through a space compaction unit followed by time compaction. In general, $m$ input sequences coming from a CUT are fed into a space compactor, providing $n$ output streams of bits such that $n < m$; most often, the test responses are compressed into only one sequence ($n = 1$). Space compaction brings a solution to the problem of achieving high-quality built-in self-testing of complex chips without the necessity of monitoring a large number of internal test points, thereby reducing both testing time and area overhead by merging test sequences coming from these internal test points into a single stream of bits. This single bit stream of length $l$ is eventually fed into a time compactor, and finally a shorter sequence of length $s$ ($s \ll l$) is obtained at the output. The extra logic representing the compaction circuit, however, must be as simple as possible, so as to be easily embedded within the CUT, and should not introduce signal delays that affect either the test execution time or the normal functionality of the circuit being tested. Moreover, the length of the signature must be as short as possible in order to minimize the amount of memory needed to store the fault-free response signatures. Also, signatures derived from faulty output responses and their corresponding fault-free signatures should not be the same,
which unfortunately is not always the case.

TABLE IV. FAULT COVERAGE FOR ISCAS 89 BENCHMARK CIRCUITS USING HOPE (RANDOM TESTING; INITIAL RANDOM NUMBER GENERATOR SEED = 999)

A fundamental problem with compaction techniques is error masking or aliasing [7], [49], [56], which occurs when the signatures from faulty output responses map into the fault-free signatures, the latter usually calculated by identifying a good circuit, applying test patterns to it, and then having the compaction unit generate the fault-free references. Aliasing causes loss of information, which affects the test quality of BIST and reduces the fault coverage (the number of faults detected, after compaction, over the total number of faults injected). Several methods have been suggested in the literature for computing the aliasing probability. The exact computation of this aliasing probability is known to be an NP-hard problem [58]. In practice, high fault coverage, over 99%, is generally required, and thus any space compression technique that preserves a higher percentage of error coverage information is considered worthy of investigation. This paper specifically deals with the general problem of designing space-efficient support hardware for BIST of full scan sequential circuits using the fault simulation program HOPE [60]. HOPE is a fault simulator for synchronous sequential circuits developed at the Virginia Polytechnic Institute and State University; it employs parallel fault simulation with several heuristics to reduce fault simulation time, besides providing many advantages over existing simulators. The compaction techniques used in this paper with simulator HOPE take advantage of certain inherent properties of the test responses of the CUT, together with the knowledge of their failure probabilities. A major objective in space compaction is to provide methods that are simple, suitable for on-chip self-testing, require low area overhead, and have little adverse impact on the overall CUT performance. With that objective in perspective, compaction techniques were developed in the paper that take advantage of some well-known concepts, i.e., those of Hamming distance, sequence weights, and derived sequences as utilized by the authors earlier in sequence characterization [46], [49], [57], in conjunction with the probabilities of error occurrence, for optimal mergeability of a pair of output bit streams from the CUT. The proposed techniques guarantee simple design and achieve a high measure of fault coverage for single stuck-line faults with low CPU simulation time and acceptable area overhead, as evident from extensive simulation runs on the ISCAS 89 full scan sequential benchmark circuits with simulator HOPE, under conditions of both stochastic independence and dependence of single and double line output errors.
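To make the flow just described concrete, the following is a minimal behavioral sketch of a BIST response-compaction path: several CUT output streams are space-compacted into one stream, which is then time-compacted into a short signature that is compared against a fault-free reference. The names, the XOR merger, and the ones-count signature are illustrative choices for this sketch only, not the specific compactors designed later in the paper.

```python
# Behavioral sketch of a BIST response-compaction path (illustrative only).
from functools import reduce

def space_compact(streams):
    """Merge m CUT output bit streams into a single stream with an XOR tree."""
    return [reduce(lambda a, b: a ^ b, bits) for bits in zip(*streams)]

def time_compact(stream):
    """Reduce the single stream to a short signature (here: a ones count)."""
    return sum(stream)

def bist_pass(fault_free_streams, observed_streams):
    """Compare compacted signatures instead of full bit-by-bit responses."""
    golden = time_compact(space_compact(fault_free_streams))
    observed = time_compact(space_compact(observed_streams))
    return golden == observed  # equal signatures => no fault detected

# Illustrative 3-output CUT observed over 8 test patterns.
good = [[0, 1, 1, 0, 1, 0, 0, 1], [1, 1, 0, 0, 1, 1, 0, 0], [0, 0, 1, 1, 0, 0, 1, 1]]
bad = [[0, 1, 1, 0, 1, 0, 0, 1], [1, 1, 0, 0, 1, 1, 0, 1], [0, 0, 1, 1, 0, 0, 1, 1]]
print(bist_pass(good, good))  # True: signatures match, CUT passes
print(bist_pass(good, bad))   # False: the single-bit error is detected
```

In hardware, space_compact corresponds to a small gate network and time_compact to a counter or LFSR; the code above only mimics their input-output behavior.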
TABLE V. FAULT COVERAGE FOR ISCAS 89 BENCHMARK CIRCUITS USING HOPE (COMPACTED INPUT TEST SETS)

II. BRIEF OVERVIEW OF TEST COMPACTION TECHNIQUES

The choice of a compression technique is mainly influenced by hardware considerations and by the loss of effective fault coverage due to fault masking or aliasing. In this section, we first briefly review some of the important test compaction techniques in space for BIST that have been proposed in the literature. We describe these concentrating only on some of their relevant properties, like area overhead, fault coverage, error masking probability, etc. There also exist a number of efficient time compaction schemes, including ones counting, syndrome testing, transition counting, signature analysis, and others, which are also considered. Some of the common space compression techniques include parity tree space compaction, hybrid space compression, dynamic space compression, quadratic functions compaction, programmable space compaction, and cumulative balance testing. The parity tree compactor circuits [36], [39], [46], [49], [50] are composed of only XOR gates. An XOR gate has very good signal-to-error propagation properties that are quite desirable for space compression. Functions realized by parity tree compactors are of the form $f = x_1 \oplus x_2 \oplus \cdots \oplus x_n$. The parity tree space compactor propagates all errors that appear on an odd number of its inputs; thereby, errors that appear on an even number of parity tree circuit inputs are masked (see the sketch below). As experimentally demonstrated, most single stuck-line faults are detected in parity tree space compaction using pseudorandom input test patterns and deterministic reduced test sets [49], [57]. The hybrid space compression (HSC) technique, originally proposed by Li and Robinson [38], uses AND, OR, and XOR logic gates as output compaction tools to compress the multiple outputs of a CUT into a single line. The compaction tree is constructed based on the detectable error probability estimates. A modified version of the HSC method, called dynamic space compression (DSC), was subsequently proposed by Jone and Das [41]. Instead of assigning static values to the probabilities $p_1$ of single errors and $p_2$ of double errors, the DSC method dynamically estimates those values from the CUT structure during the computation process. The values of $p_1$ and $p_2$ are determined based on the number of single lines and shared lines connected to an output.
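The masking behavior of parity trees noted above can be checked directly: an error pattern that flips an odd number of compactor inputs always changes the parity output, while an even number of flips is masked. The sketch below is purely illustrative.

```python
# Parity-tree masking property: odd-weight errors propagate, even-weight
# errors alias to the fault-free output. Illustrative values only.
from functools import reduce

def parity_tree(bits):
    """f = x1 XOR x2 XOR ... XOR xn, the function realized by a parity tree."""
    return reduce(lambda a, b: a ^ b, bits)

inputs = [1, 0, 1, 1]
golden = parity_tree(inputs)

odd_error = [1, 0, 0, 1]    # one input flipped  -> error propagates
even_error = [0, 0, 0, 1]   # two inputs flipped -> error is masked

print(parity_tree(odd_error) != golden)   # True: detected
print(parity_tree(even_error) != golden)  # False: aliasing
```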
A general theory to predict the performance of the space compression techniques was also developed. Experimental results show that the information loss, with syndrome counting used as the time compactor, is between 0% and 12.7%. DSC was later improved by using some circuit-specific information to calculate the probabilities [42]. However, neither HSC nor DSC provides an adequate measure of fault coverage, because they both rely on estimates of error detection probabilities.

TABLE VI. FAULT COVERAGE FOR ISCAS 89 BENCHMARK CIRCUITS USING HOPE (RANDOM TESTING; INITIAL RANDOM NUMBER GENERATOR SEED = 999)

Quadratic functions compaction (QFC) uses quadratic functions to construct the space compaction circuits and has been shown to reduce aliasing errors [40]. In QFC, the observed output responses of the CUT are partitioned into fixed-length blocks and are processed and compressed in a serial fashion by a quadratic function of successive blocks. A new approach termed programmable space compaction (PSC) has recently been proposed for designing low-cost space compactors that provide high fault coverage [37]. In PSC, circuit-specific space compactors are designed to increase the likelihood of error propagation. However, PSC does not guarantee zero aliasing. A compaction circuit that minimizes aliasing and has the lowest cost can only be found by exhaustively enumerating all $2^{2^m}$ Boolean functions of $m$ inputs, where $m$ represents the number of primary outputs of the CUT. A new class of space compactors based on parity tree circuits was recently proposed by Chakrabarty and Hayes [56]. The method is based on multiplexed parity trees (MPTs) and achieves zero aliasing. Multiplexed parity trees perform space compaction of test responses by combining the error propagation properties of multiplexers and parity trees through multiple time-steps. The authors show that the associated hardware overhead is moderate and that very high fault coverage is obtained for faults in the CUT, including even those in the compactor. Quite recently, a new space compaction approach for IP cores based on the use of orthogonal transmission functions was suggested in [47], which provides zero aliasing for all errors with an optimal compaction ratio. Other approaches, given in [52] and [53], are intended to reduce test time and test data volume and to improve testability with high compaction ratios, and could be applicable to several industrial circuits. We now briefly examine some time compaction methods, like ones counting, syndrome testing, transition counting, signature analysis, and others. Ones counting [24] uses as its signature the number of ones in the binary circuit response stream. The hardware that represents the compaction unit consists of a simple counter and is independent of the CUT; it only depends on the nature of the test response. Signature values do not depend on the order in which the input test patterns are applied to the CUT. In syndrome counting [27], all $2^n$ input patterns are exhaustively applied to an $n$-input combinational circuit.
The syndrome, which is given by the normalized number of ones in the response stream, is defined as $S = K/2^n$, with $K$ being the number of minterms of the function being implemented by the single-output CUT. Any switching function can be so realized that all its single stuck-line faults are syndrome-testable. Transition counting [25] counts the number of times the output bit stream changes from one to zero and vice versa. In transition counting, the signature length is at most $\lceil \log_2 m \rceil$ bits, with $m$ being the length of the response stream. The error masking probability takes high values when the transition count is close to $m/2$ and low values when it is close to zero or $m$.

TABLE VII. FAULT COVERAGE FOR ISCAS 89 BENCHMARK CIRCUITS USING HOPE (COMPACTED INPUT TEST SETS; * INDICATES FAULTS WERE INJECTED INTO CUT + COMPACTOR)

In Walsh spectral analysis [28], [29], switching functions are represented by their spectral coefficients, which are compared to known correct coefficient values. In a sense, in this method, the truth table of the given switching function is basically verified. The process of collecting and comparing a subset of the complete set of Walsh functions is described as a mechanism for data compaction. The use of spectral coefficients promises a higher percentage of error coverage, whereas the resulting higher area overhead for generating them is deemed a disadvantage. In parity checking [31], the response bit stream coming from a circuit under test is reduced from a multitude of output data to a signature of length 1 bit. The single-bit signature has a value that equals the parity of the test response sequence. Parity checking detects all errors involving an odd number of bits, while faults that give rise to an even number of error bits are not detected. This method is relatively ineffective, since a large number of possible response bit streams from a faulty circuit will result in the same parity as that of the correct bit stream. All single stuck-line faults in fanout-free circuits are detected by the parity check technique. These obvious shortcomings of parity checking are eliminated in methods devised based on the single-output parity bit signature [31] and the multiple-output parity bit signature [32], [34].
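The stream-based signatures just surveyed are simple enough to state directly in code. The sketch below, over illustrative data, computes the ones count [24], the syndrome $S = K/2^n$ [27], and the transition count [25] of a response stream.

```python
# Three simple time-compaction signatures (illustrative data).

def ones_count(stream):
    """Signature = number of ones in the response stream [24]."""
    return sum(stream)

def syndrome(stream, n_inputs):
    """S = K / 2^n for an exhaustively tested n-input circuit, where K is
    the number of minterms (ones) in the response [27]."""
    assert len(stream) == 2 ** n_inputs
    return sum(stream) / 2 ** n_inputs

def transition_count(stream):
    """Signature = number of 0->1 and 1->0 transitions in the stream [25]."""
    return sum(a != b for a, b in zip(stream, stream[1:]))

resp = [0, 1, 1, 0, 1, 0, 0, 1]  # response of a 3-input CUT to all 2^3 patterns
print(ones_count(resp))          # 4
print(syndrome(resp, 3))         # 0.5
print(transition_count(resp))    # 5
```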
TABLE VIII. FAULT COVERAGE FOR ISCAS 89 BENCHMARK CIRCUITS USING HOPE (RANDOM TESTING; INITIAL RANDOM NUMBER GENERATOR SEED = 999)

Signature analysis is probably the most popular time compaction technique currently available [26]. It uses linear feedback shift registers (LFSRs) consisting of flip-flops and exclusive-OR (XOR) gates. The signature analysis technique is based on the concept of cyclic redundancy checking (CRC). LFSRs are used for generating pseudorandom input test patterns, and for response compaction as well. The nature of the generated sequence patterns is determined by the LFSR's characteristic polynomial, as defined by its interconnection structure. A test-response sequence, represented as a polynomial $R(x)$, is fed into the signature analyzer and divided by the characteristic polynomial $G(x)$ of the signature analyzer's LFSR over the Galois field GF(2), such that $R(x) = Q(x)G(x) + S(x)$. The remainder $S(x)$ represents the final state of the LFSR, with $Q(x)$ being the corresponding quotient; in other words, $S(x)$ represents the observed signature (a behavioral sketch is given at the end of this section). Signature analysis involves comparing the observed signature to a known fault-free signature. An error is detected if these two signatures differ. Suppose that $R(x)$ is the correct response and $R'(x) = R(x) + E(x)$ is the faulty one, where $E(x)$ is an error polynomial; it can be shown that aliasing occurs whenever $E(x)$ is a multiple of $G(x)$. The masking probability in this case is estimated as $1/2^k$, where $k$ is the number of flip-flop stages in the LFSR. When $k$ is larger than 16, the aliasing probability is negligible. Many commercial applications have reported good success with LFSR-implemented signature analysis. Different methods for computing and reducing the aliasing probability in signature analysis have been proposed, e.g., the signature analysis model proposed by Williams et al. [30], which uses Markov chains and derives an upper bound on the aliasing probability in terms of the test length and the probability of an error's occurring at the output of the CUT. Another approach to the computation of aliasing probability is presented in [33]. An error pattern in signature analysis causes aliasing if and only if it is a codeword in the cyclic code generated by the LFSR's characteristic polynomial. Unlike in other methods, the fault coverage in signature analysis may be improved without changing the test set. This can be done by varying the length of the LFSR or by using a different characteristic polynomial. As demonstrated in [35], for short test lengths, signature analysis detects all single-bit errors. However, there is no known theory that characterizes fault detection in signature analysis. Testing using two different compaction schemes in parallel has also been extensively investigated. The combination of signature analysis and transition counting has been analyzed [49]; using both techniques simultaneously leads to a very small overlap in their error masking. As a result of using two different compaction schemes in parallel, the fault coverage is improved, while the fault signature size and hardware overhead are greatly increased.
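As the behavioral sketch promised above, the code below shifts a response stream through a software model of an internal-XOR LFSR; the final register contents are the signature, i.e., the remainder of the polynomial division over GF(2). The 4-stage register and the polynomial $x^4 + x + 1$ are arbitrary illustrative choices, not parameters prescribed by the paper, and the tap convention is one of several equivalent hardware mappings.

```python
# Software model of serial-input signature analysis (illustrative only).

def lfsr_signature(stream, taps, k):
    """Return the k-bit signature left in an internal-XOR LFSR.

    taps: stage indices (0 = leftmost stage) into which the feedback,
    i.e., the serial input XORed with the last stage, is injected.
    """
    state = [0] * k
    for bit in stream:
        feedback = state[-1] ^ bit   # serial input XORed with output stage
        state = [0] + state[:-1]     # shift the register by one stage
        for t in taps:
            state[t] ^= feedback     # feedback into the tapped stages
    return state

# Taps chosen to mirror the primitive polynomial x^4 + x + 1.
golden = lfsr_signature([1, 0, 1, 1, 0, 0, 1, 0], taps=[0, 1], k=4)
faulty = lfsr_signature([1, 0, 1, 1, 0, 1, 1, 0], taps=[0, 1], k=4)
print(golden != faulty)  # True here; by linearity, aliasing occurs exactly
                         # when the error polynomial E(x) is a multiple of G(x).
```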
III. DESIGNING COMPACTION TREES BASED ON SEQUENCE CHARACTERIZATION AND STOCHASTIC INDEPENDENCE AND DEPENDENCE OF LINE ERRORS

TABLE IX. FAULT COVERAGE FOR ISCAS 89 BENCHMARK CIRCUITS USING HOPE (COMPACTED INPUT TEST SETS)

The principal idea in space compaction is to compress the functional test outputs of the CUT, possibly into one single test output line, to derive the CUT signature without sacrificing too much information in the process. Generally, space compression has been accomplished using XOR gates in cascade or in a tree structure. We adopt a combination of both cascade and tree structures (cascade-tree) for our framework, with AND (NAND), OR (NOR), and XOR (XNOR) operators. The logic function selected to build the compaction tree is determined solely by the characteristics of the sequences that are inputs to the gates, based on some optimal mergeability criteria developed earlier by the authors [46], [49], [57]. The basic theme of the proposed approaches is to select appropriate logic gates to merge two candidate output lines of the CUT under conditions of stochastic independence and dependence of single and double line errors, using sequence characterization and other concepts introduced by the authors. However, criteria for selecting a number of CUT output lines for optimal generalized sequence mergeability were also developed and utilized in the design of space compression networks, based on stochastic independence of multiple line errors, and also on stochastic dependence of multiple line errors using the concept of generalized detectable or missed error probability estimates [46], [50], [57], with extensive simulations conducted with ATALANTA, FSIM, and COMPACTEST [59], [61], [62]; however, optimal generalized sequence mergeability is not the concern of this paper and as such is not discussed. In the following, the mathematical basis of the approaches is briefly given, with the introduction of appropriate notations and terminologies.

A. Hamming Distance, Sequence Weights, and Derived Sequences

Let $(Z_1, Z_2)$ represent a pair of output sequences of a CUT of length $m$ each, where the length is the number of bit positions in $Z_1$ and $Z_2$. Let $d(Z_1, Z_2)$ represent the Hamming distance between $Z_1$ and $Z_2$ (the number of bit positions in which $Z_1$ and $Z_2$ differ).

Definition: The first-order one-weight, denoted by $w_1(Z)$, of a sequence $Z$ is the number of ones in the sequence. Similarly, the first-order zero-weight, denoted by $w_0(Z)$, of a sequence $Z$ is the number of zeroes in the sequence.

Example: Consider an output sequence pair with $Z_1 = 10110100$ and $Z_2 = 11010101$. The length of both output streams is eight ($m = 8$). The Hamming distance between $Z_1$ and $Z_2$ is $d(Z_1, Z_2) = 3$. The first-order
one-weights and zero-weights of $Z_1$ and $Z_2$ are $w_1(Z_1) = 4$, $w_0(Z_1) = 4$, $w_1(Z_2) = 5$, and $w_0(Z_2) = 3$, respectively.

TABLE X. FAULT COVERAGE FOR ISCAS 89 BENCHMARK CIRCUITS USING HOPE (RANDOM TESTING; INITIAL RANDOM NUMBER GENERATOR SEED = 999)

Property: For any sequence $Z$ of length $m$, it can be shown that $w_1(Z) + w_0(Z) = m$. For the output sequences given in the example above, $w_1(Z_1) + w_0(Z_1) = 4 + 4 = 8$ and $w_1(Z_2) + w_0(Z_2) = 5 + 3 = 8$.

Definition: Consider an output sequence pair $(Z_1, Z_2)$ of equal length. Then the sequence pair derived by discarding the bit positions in which the two sequences differ is called the second-order derived sequence pair. In this paper, we will denote the derived sequence of a sequence $Z$ by $Z^d$, its first-order one-weight by $w_1(Z^d)$, and its first-order zero-weight by $w_0(Z^d)$.

Example: For the pair $(Z_1, Z_2)$ given in the example above, $(Z_1^d, Z_2^d) = (11010, 11010)$. The first-order one-weights and zero-weights of the derived pair are $w_1(Z_1^d) = 3$, $w_0(Z_1^d) = 2$, $w_1(Z_2^d) = 3$, and $w_0(Z_2^d) = 2$.

Property: For any second-order derived sequence pair, $w_1(Z_1^d) = w_1(Z_2^d)$ and $w_0(Z_1^d) = w_0(Z_2^d)$, since the two derived sequences agree in every retained bit position, as shown in the example above. By this property, when no ambiguity arises, we will denote the one- and zero-weights of the derived sequence pair simply by $w_1$ and $w_0$, respectively. The length of the derived sequence pair will be denoted by $m^d$, where $m^d = m - d(Z_1, Z_2)$. Also, since the Hamming distance of a derived pair is always zero, the pair is characterized by $w_1$ and $w_0$ alone. That is, for the example above, $m^d = 8 - 3 = 5$, $w_1 = 3$, and $w_0 = 2$.

Property: For every distinct pair of output sequences at the output of a CUT, the corresponding derived pair of equal length is distinct. Two derived sequence pairs may have the same length, but they are still distinct and not identical.

Property: Two derived sequence pairs of original output stream pairs having the same length are identical if and only if their retained bits agree position by position; equal length alone does not imply identity.
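The quantities defined above are easily computed; the sketch below reproduces the values of the running example ($Z_1 = 10110100$, $Z_2 = 11010101$).

```python
# Sequence characterization: Hamming distance d, first-order weights
# w1 and w0, and the second-order derived sequence.

def hamming(z1, z2):
    """Number of bit positions in which the two sequences differ."""
    return sum(a != b for a, b in zip(z1, z2))

def w1(z):
    """First-order one-weight: number of ones in the sequence."""
    return z.count('1')

def w0(z):
    """First-order zero-weight: number of zeroes in the sequence."""
    return z.count('0')

def derived(z1, z2):
    """Keep only the bit positions in which the two sequences agree."""
    return ''.join(a for a, b in zip(z1, z2) if a == b)

Z1, Z2 = '10110100', '11010101'        # the running example, m = 8
print(hamming(Z1, Z2))                 # d = 3
print(w1(Z1), w0(Z1), w1(Z2), w0(Z2))  # 4 4 5 3; note w1 + w0 = m
D = derived(Z1, Z2)                    # both derived sequences are identical
print(D, w1(D), w0(D))                 # 11010, length m - d = 5, weights 3 and 2
```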
TABLE XI. FAULT COVERAGE FOR ISCAS 89 BENCHMARK CIRCUITS USING HOPE (COMPACTED INPUT TEST SETS)

Consider $Z_1$, $Z_2$, $Z_3$, and $Z_4$ as four output sequence streams of a CUT. Let $(Z_1, Z_2)$ and $(Z_3, Z_4)$ be two distinct output pairs, and $(Z_1^d, Z_2^d)$ and $(Z_3^d, Z_4^d)$ their corresponding derived sequence pairs, respectively, such that both derived pairs have the same length. The two derived sequence pairs then have the same length, but they need not be identical. In general, it is not expected that any two distinct pairs of sequences at the output of a CUT will be identical, and hence the possibility of the corresponding derived pairs being identical is also remote. The concepts of one-weight and zero-weight can readily be extended to more than two sequences, but since those extensions are not used in this paper, their discussion is omitted.

B. Optimal Pairwise Mergeability and Gate Selection

In this section, we briefly summarize the key results concerning optimal pairwise mergeability of response data at the CUT output in the design of space compactors. These are provided in the form of certain theorems without proofs, under conditions of stochastic dependence of line errors, the details of which can be found in [49] and [57]. In the case of stochastic dependence of line errors at the CUT output, we can assign distinct probabilities of error occurrence to different lines. The gate selection was primarily based on optimal mergeability criteria established utilizing the properties of Hamming distance, sequence weights, and derived sequences, together with the concept of the detectable error probability estimate [38] for a two-input logic function, under conditions of stochastic dependence of single and double line errors at the output of a CUT.

C. Effects of Error Probabilities in Selection of Gates for Optimal Merger

Li and Robinson [38] defined the detectable error probability estimate for a two-input logic function $G$, given two input sequences of length $m$, essentially as the weighted sum $P(G) = p_1 n_1(G) + p_2 n_2(G)$, where $p_1$ is the probability of a single error effect felt at the output of the CUT; $p_2$ is the probability of a double error effect felt at the output of the CUT; $n_1(G)$ is the number of single line errors detectable at the output of gate $G$ if gate $G$ is used for merger; and $n_2(G)$ is the number of double line errors detectable at the output of gate $G$ if gate $G$ is used for merger.
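The following sketch shows how such an estimate can drive gate selection when merging two output streams. It assumes, as a stand-in for the exact expression of Li and Robinson [38], that the estimate is the weighted sum $p_1 n_1(G) + p_2 n_2(G)$ and that the gate maximizing it is preferred; the sequences and probability values are illustrative.

```python
# Gate selection by (assumed) detectable error probability estimate.

GATES = {
    'AND': lambda a, b: a & b,
    'OR': lambda a, b: a | b,
    'XOR': lambda a, b: a ^ b,
}

def errors_propagated(gate, z1, z2):
    """Count single (n1) and double (n2) line errors visible at the gate output."""
    n1 = n2 = 0
    for a, b in zip(z1, z2):
        ref = gate(a, b)
        if gate(a ^ 1, b) != ref: n1 += 1      # error on line 1 propagates
        if gate(a, b ^ 1) != ref: n1 += 1      # error on line 2 propagates
        if gate(a ^ 1, b ^ 1) != ref: n2 += 1  # simultaneous double error
    return n1, n2

def best_gate(z1, z2, p1, p2):
    """Pick the merger gate maximizing the assumed estimate p1*n1 + p2*n2."""
    def score(name):
        n1, n2 = errors_propagated(GATES[name], z1, z2)
        return p1 * n1 + p2 * n2
    return max(GATES, key=score)

Z1 = [1, 0, 1, 1, 0, 1, 0, 0]
Z2 = [1, 1, 0, 1, 0, 1, 0, 1]
print(best_gate(Z1, Z2, p1=0.66, p2=0.33))  # XOR: it propagates every single error
```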
TABLE XII. FAULT COVERAGE FOR ISCAS 89 BENCHMARK CIRCUITS USING HOPE (RANDOM TESTING; INITIAL RANDOM NUMBER GENERATOR SEED = 999)

Based on the computation of the detectable error probability estimates of Li and Robinson as given above, we deduce the following results, which profoundly influence the selection of gates for optimal merger; the explicit threshold inequalities, expressed in terms of $p_1$, $p_2$, $m$, $d$, and the sequence weights, are derived in [49] and [57].

Theorem: For an output sequence pair of length $m$ and Hamming distance $d$, an AND (NAND) gate is preferable to an XOR (XNOR) gate for optimal merger if the corresponding threshold inequality holds.

Theorem: For an output sequence pair of length $m$ and Hamming distance $d$, an OR (NOR) gate is preferable to an XOR (XNOR) gate for optimal merger if the corresponding threshold inequality holds.

Theorem: For an output sequence pair of length $m$ and Hamming distance $d$, an AND (NAND) gate is preferable to an OR (NOR) gate for optimal merger if the corresponding threshold inequality holds.

Theorem: For two gates $G_1$ and $G_2$, $G_1$ is preferable to $G_2$ for optimal merger if and only if the detectable error probability estimate of $G_1$ exceeds that of $G_2$.

IV. EXPERIMENTAL RESULTS

To demonstrate the feasibility of the proposed space compression schemes, independent simulations were conducted on various ISCAS 89 full scan sequential benchmark circuits using HOPE [60], a fault simulation program developed at the Virginia Polytechnic Institute and State University, to generate the fault-free output sequences needed to construct our space compactor circuits and to test the benchmark circuits using deterministic compacted input test sets, accompanied by random test sessions that generate pseudorandom test sets with different values of random number generator seeds. For each circuit, we determined the number of injected faults, the number of applied test vectors (after compaction), the CPU simulation time, and the fault coverage (without and with space compactors), by considering only the combinational part of the circuit (full scan version) or using the complete sequential circuit, assuming either stochastic independence of single and double line errors or different values of their failure probabilities $(p_1, p_2)$ (under conditions of stochastic dependence of single and double line errors), by running the program on a SUN SPARC 5 workstation. The extensive simulation results are presented in the graphs and tables that follow. The CPU times needed for simulations of all the different ISCAS 89 full scan sequential benchmark circuits on the SUN SPARC 5 workstation were in the range of 300 ms to 23 457 s, though for some of the largest circuits, simulations could not be completed since memory, CPU time, and disk usage limits did not permit.
TABLE XIII. FAULT COVERAGE FOR ISCAS 89 BENCHMARK CIRCUITS USING HOPE (COMPACTED INPUT TEST SETS)

The hardware overhead for the designed space compactors is not given in this paper, though for all the circuits the overhead was well within acceptable limits. Figs. 2 and 4 show fault coverage for all the benchmark circuits without compactors when stochastic independence of single and double line errors was assumed, using deterministic compacted input test sets and pseudorandom testing, respectively, whereas Figs. 3 and 5 show CPU simulation times for these circuits under identical conditions. The benchmark circuits were actually tested here as combinational circuits. On the other hand, Figs. 6 and 8 show fault coverage for all the benchmark circuits, using deterministic compacted input test sets and pseudorandom testing, respectively, for the probability values of single and double line errors $(p_1, p_2)$ equal to (0.66, 0.33), while Figs. 7 and 9 show their CPU simulation times under identical situations. Here the benchmark circuits were tested as sequential circuits. Tables I–VI show the different simulation results for ISCAS 89 benchmark circuits under stochastic dependence of single and double line errors with values of $(p_1, p_2)$ equal to (0.33, 0.66), (0.90, 0.10), and (0.10, 0.90) on application of compacted test sets and under pseudorandom testing for the combinational parts of the circuits, while Tables IX–XIV show the same for the different circuits with all the above values of the line error probabilities $(p_1, p_2)$, treating them as sequential circuits. Finally, Tables VII and VIII show similar results for deterministic compacted testing of all the sequential benchmark circuits.

V. CONCLUDING REMARKS

The design of space-efficient BIST support hardware in the synthesis of digital integrated circuits is of great significance. This paper reports compression techniques for test data outputs of full scan digital sequential circuits that facilitate the design of such space-efficient support hardware. The proposed techniques use AND (NAND), OR (NOR), and XOR (XNOR) gates as appropriate to construct an output compaction tree that compresses the functional outputs of the CUT to a single line. The compaction tree is generated based on sequence characterization, utilizing the concepts of Hamming distance, sequence weights, and derived sequences. The logic functions selected to build the compaction tree are determined primarily by the characteristics of the sequences that are inputs to the logic gates. The optimal mergeability criteria were obtained on the assumption of stochastic independence as well as dependence
of single and double line errors. In the case of stochastic dependence, output bit stream selection is based on calculating the detectable error probability estimates using an empirical formula developed by Li and Robinson [38]. It should be recalled that the effectiveness of the proposed approaches is critically dependent on the probabilities of error occurrence in different lines of the CUT, and this dependence may be affected by the circuit structure, partitioning, etc., e.g., by the number of inputs, outputs, and internal lines, the types of gates from which the circuit is designed, and the way it is partitioned. In actual situations, the probability values for error occurrence in particular circuits have to be experimentally determined; that is, these are a posteriori probabilities rather than a priori probabilities. If the circuit structure changes, these probability values change, and evidently the corresponding compression networks that have to be designed based on pairwise optimal mergeability criteria change as well.

TABLE XIV. FAULT COVERAGE FOR ISCAS 89 BENCHMARK CIRCUITS USING HOPE (RANDOM TESTING; INITIAL RANDOM NUMBER GENERATOR SEED = 999)

Another point should be considered here as well. Since the empirical formula used for computing the detectable error probability estimates in the gate selection process requires exact values of these a posteriori probabilities of error occurrence, interval estimates of the probability values cannot be used as such in the gate selection process unless the formula is modified, except to provide the two extremes of selection consistent with the probability intervals. From the analytical viewpoint, the major issue involves the computation of the detectable error probability estimates, which is rather simple in the present case because of two-line mergers, compared to the case of generalized mergeability, where the computation is really intensive. Since the major emphasis of the paper is on synthesizing compaction networks that provide improved fault coverage for fixed complexity, realizing a better tradeoff between coverage and complexity (storage) than conventional techniques, the complexity issues were not addressed in depth. Also, zero-aliasing compaction [45], [48] was not emphasized in the present study; rather, an attempt was made simply to reinforce the connection between the input test sets and their lengths and the recommended algorithms for the design of space-efficient compaction networks.

ACKNOWLEDGMENT

The authors are extremely grateful to the anonymous reviewers for their constructive comments that greatly helped in the preparation of the revised version of the manuscript. The authors are also thankful to the Associate Editor of this TRANSACTIONS for his helpful suggestions and kind encouragement.
REFERENCES

[1] A. Miczo, Digital Logic Testing and Simulation. New York: Harper and Row, 1986.
[2] P. H. Bardell, W. H. McAnney, and J. Savir, Built-In Test for VLSI: Pseudorandom Techniques. New York: Wiley, 1987.
[3] R. Rajsuman, System-on-a-Chip: Design and Test. Boston, MA: Artech House, 2000.
[4] S. Mourad and Y. Zorian, Principles of Testing Electronic Systems. New York: Wiley, 2000.
[5] K. Chakrabarty, V. Iyengar, and A. Chandra, Test Resource Partitioning for System-on-a-Chip. Boston, MA: Kluwer, 2002.
[6] K. Chakrabarty, Ed., SOC (System-on-a-Chip) Testing for Plug and Play Test Automation. Boston, MA: Kluwer, 2002.
[7] P. K. Lala, Fault Tolerant and Fault Testable Hardware Design. London, U.K.: Prentice-Hall, 1985.
[8] T. W. Williams and K. P. Parker, "Testing logic networks and design for testability," Computer, vol. 21, pp. 9–21, Oct. 1979.
[9] S. M. Thatte and J. A. Abraham, "Test generation for microprocessors," IEEE Trans. Comput., vol. C-29, pp. 429–441, Jun. 1980.
[10] J. R. Kuban and W. C. Bruce, "Self-testing the Motorola MC6804P2," IEEE Des. Test Comput., vol. 1, pp. 33–41, Oct. 1984.
[11] E. J. McCluskey, "Built-in self-test techniques," IEEE Des. Test Comput., vol. 2, pp. 21–28, Apr. 1985.
[12] R. G. Daniels and W. B. Bruce, "Built-in self-test trends in Motorola microprocessors," IEEE Des. Test Comput., vol. 2, pp. 64–71, Apr. 1985.
[13] S. R. Das, "Built-in self-testing of VLSI circuits," IEEE Potentials, vol. 10, pp. 23–26, Oct. 1991.
[14] V. D. Agrawal, C. R. Kime, and K. K. Saluja, "A tutorial on built-in self-test, Part II," IEEE Des. Test Comput., vol. 10, pp. 69–77, Jun. 1993.
[15] Y. Zorian, "A distributed BIST control scheme for complex VLSI devices," in Proc. VLSI Test Symp., 1993, pp. 6–11.
[16] H. J. Wunderlich and G. Kiefer, "Scan-based BIST with complete fault coverage and low hardware overhead," in Proc. Euro. Test Workshop, 1996, pp. 60–64.
[17] Y. Zorian, "Test requirements for embedded core-based systems and IEEE P-1500," in Proc. Int. Test Conf., 1997, pp. 191–199.
[18] P. Varma and S. Bhatia, "A structured test reuse methodology for core-based system chip," in Proc. Int. Test Conf., 1997, pp. 294–302.
[19] J. Brauch and J. Fleischman, "Design of cache test hardware on the HP PA8500," IEEE Des. Test Comput., vol. 15, pp. 58–63, Jun. 1998.
[20] R. Fetherston, "Testability features of the AMD-K6 microprocessor," IEEE Des. Test Comput., vol. 15, pp. 64–69, Jun. 1998.
[21] A. Carbine, "Pentium Pro processor design for test and debug," IEEE Des. Test Comput., vol. 15, pp. 77–82, Jun. 1998.
[22] K. Chakrabarty and S. R. Das, "Test-set embedding based on width compression for mixed-mode BIST," IEEE Trans. Instrum. Meas., vol. 49, pp. 671–678, Jun. 2000.
[23] S. R. Das, "Self-testing of embedded cores-based systems with built-in hardware," in Proc. IEE, Circuits Devices Syst., vol. 152, Oct. 2005, pp. 539–546.
[24] J. P. Hayes, "Check sum methods for test data compression," J. Design Automat. Fault-Tolerant Comput., vol. 1, pp. 3–7, Jan. 1976.
[25] J. P. Hayes, "Transition count testing of combinational logic circuits," IEEE Trans. Comput., vol. C-25, pp. 613–620, Jun. 1976.
[26] R. A. Frohwerk, "Signature analysis: A new digital field service method," Hewlett-Packard J., vol. 28, pp. 2–8, May 1977.
[27] J. Savir, "Syndrome-testable design of combinational circuits," IEEE Trans. Comput., vol. C-29, pp. 442–451, Jun. 1980.
[28] A. K. Susskind, "Testing by verifying Walsh coefficients," IEEE Trans. Comput., vol. C-32, pp. 198–201, Feb.
1983.
[29] T.-C. Hsiao and S. C. Seth, "An analysis of the use of Rademacher-Walsh spectrum in compact testing," IEEE Trans. Comput., vol. C-33, pp. 934–937, Oct. 1984.
[30] T. W. Williams, W. Daehn, M. Gruetzner, and C. W. Starke, "Aliasing errors in signature analysis registers," IEEE Des. Test Comput., vol. 4, pp. 39–45, Apr. 1987.
[31] S. B. Akers, "A parity bit signature for exhaustive testing," IEEE Trans. Computer-Aided Design Integr. Circuits Syst., vol. CAD-7, pp. 333–338, Mar. 1988.
[32] W.-B. Jone and S. R. Das, "Multiple-output parity bit signature for exhaustive testing," J. Electron. Test. Theory Applicat., vol. 1, pp. 175–178, Jun. 1990.
[33] D. K. Pradhan and S. K. Gupta, "A new framework for designing and analyzing BIST techniques and zero aliasing compression," IEEE Trans. Comput., vol. C-40, pp. 743–763, Jun. 1991.
[34] S. R. Das, M. Sudarma, M. H. Assaf, E. M. Petriu, W.-B. Jone, K. Chakrabarty, and M. Sahinoglu, "Parity bit signature in response data compaction and built-in self-testing of VLSI circuits with nonexhaustive test sets," IEEE Trans. Instrum. Meas., vol. 52, pp. 1363–1380, Oct. 2003.
[35] N. R. Saxena and J. P. Robinson, "A unified view of test response compression methods," IEEE Trans. Comput., vol. C-36, pp. 94–99, Jan. 1987.
[36] K. K. Saluja and M. Karpovsky, "Testing computer hardware through compression in space and time," in Proc. Int. Test Conf., 1983, pp. 83–88.
[37] Y. Zorian and V. K. Agarwal, "A general scheme to optimize error masking in built-in self testing," in Proc. Int. Symp. Fault-Tolerant Computing, 1986, pp. 410–415.
[38] Y. K. Li and J. P. Robinson, "Space compression method with output data modification," IEEE Trans. Computer-Aided Design Integr. Circuits Syst., vol. CAD-6, pp. 290–294, Mar. 1987.
[39] S. M. Reddy, K. K. Saluja, and M. G. Karpovsky, "Data compression technique for test responses," IEEE Trans. Comput., vol. C-37, pp. 1151–1157, Sep. 1988.
[40] M. Karpovsky and P. Nagvajara, "Optimal robust compression of test responses," IEEE Trans. Comput., vol. C-39, pp. 138–141, Jan. 1990.
[41] W.-B. Jone and S. R. Das, "Space compression method for built-in self-testing of VLSI circuits," Int. J. Comput. Aided VLSI Design, vol. 3, pp. 309–322, Sep. 1991.
[42] S. R. Das, H. T. Ho, W.-B. Jone, and A. R. Nayak, "An improved output compaction technique for built-in self-test in VLSI circuits," in Proc. Int. Conf. VLSI Design, 1994, pp. 403–407.
[43] K. Chakrabarty and J. P. Hayes, "Efficient test response compression for multiple-output circuits," in Proc. Int. Test Conf., 1994, pp. 501–510.
[44] S. R. Das, M. Assaf, and A. R. Nayak, "On the design of space compressor for VLSI circuits in BIST using nonexhaustive test sets," Trans. SDPS, vol. 2, pp. 1–12, Dec. 1998.
[45] B. Pouya and N. A. Touba, "Synthesis of zero-aliasing elementary-tree space compactors," in Proc. VLSI Test Symp., 1998, pp. 70–77.
[46] S. R. Das, T. F. Barakat, E. M. Petriu, M. H. Assaf, and K. Chakrabarty, "Space compression revisited," IEEE Trans. Instrum. Meas., vol. 49, pp. 690–705, Jun. 2000.
[47] M. Seuring and K. Chakrabarty, "Space compaction of test responses for IP cores using orthogonal transmission functions," in Proc. VLSI Test Symp., 2000, pp. 1–7.
[48] S. R. Das, M. H. Assaf, E. M. Petriu, W.-B. Jone, and K. Chakrabarty, "A novel approach to designing aliasing-free space compactors based on switching theory formulation," in Proc. IEEE Instrum. Meas. Tech. Conf., vol. 1, 2001, pp. 198–201.
[49] S. R. Das, C. V. Ramamoorthy, M. H. Assaf, E. M. Petriu, and W.-B.
Jone, "Fault tolerance in systems design in VLSI using data compression under constraints of failure probabilities," IEEE Trans. Instrum. Meas., vol. 50, pp. 1725–1747, Dec. 2001.
[50] S. R. Das, J. Y. Liang, E. M. Petriu, W.-B. Jone, and K. Chakrabarty, "Data compression in space under generalized mergeability based on concepts of cover table and frequency ordering," IEEE Trans. Instrum. Meas., vol. 51, pp. 150–172, Feb. 2002.
[51] S. R. Das, M. H. Assaf, E. M. Petriu, and W.-B. Jone, "Fault simulation and response compaction in full scan circuits using HOPE," in Proc. IEEE Instrum. Meas. Tech. Conf., vol. 1, 2002, pp. 607–612.
[52] S. Mitra and K. S. Kim, "X-compact: An efficient response compaction technique for test cost reduction," in Proc. Int. Test Conf., 2002, pp. 311–320.
[53] M. B. Tahoori, S. Mitra, S. Toutounchi, and E. J. McCluskey, "Fault grading FPGA test configuration," in Proc. Int. Test Conf., 2002, pp. 608–617.
[54] J. Rajski, J. Tyszer, C. Wang, and S. M. Reddy, "Convolutional compaction of test responses," in Proc. Int. Test Conf., 2003, pp. 745–754.
[55] S. R. Das, M. H. Assaf, E. M. Petriu, and M. Sahinoglu, "Aliasing-free compaction in testing cores-based system-on-chip (SOC) using compatibility of response data outputs," Trans. SDPS, vol. 8, pp. 1–17, Mar. 2004.
[56] K. Chakrabarty, "Test response compaction for built-in self testing," Ph.D. dissertation, Dept. of Computer Science and Engineering, Univ. of Michigan, Ann Arbor, MI, 1995.
[57] M. H. Assaf, "Digital core output test data compression architecture based on switching theory concepts," Ph.D. dissertation, School of Information Technology and Engineering, Univ. of Ottawa, Ottawa, ON, Canada, 2003.
[58] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness. New York: W. H. Freeman, 1979.
[59] H. K. Lee and D. S. Ha, "On the generation of test patterns for combinational circuits," Dept. of Electrical Engineering, Virginia Polytechnic Inst. and State Univ., Blacksburg, VA, Tech. Rep. 12-93, 1993.
[60] H. K. Lee and D. S. Ha, "HOPE: An efficient parallel fault simulator for synchronous sequential circuits," in Proc. Design Automation Conf., 1992, pp. 336–340.
[61] H. K. Lee and D. S. Ha, "An efficient forward fault simulation algorithm based on the parallel pattern single fault propagation," in Proc. Int. Test Conf., 1991, pp. 946–955.
[62] I. Pomeranz, L. N. Reddy, and S. M. Reddy, "COMPACTEST: A method to generate compact test sets for combinational circuits," in Proc. Int. Test Conf., 1991, pp. 194–203.

Sunil R. Das (M'70-SM'90-F'94-LF'04) received the B.Sc. degree (honors) in physics and the M.Sc. (Tech.) and Ph.D. degrees in radiophysics and electronics from the University of Calcutta, Calcutta, West Bengal, India. He is an Emeritus Professor of Electrical and Computer Engineering at the School of Information Technology and Engineering, University of Ottawa, Ottawa, ON, Canada, and a Professor of Computer and Information Science at Troy State University-Montgomery, Montgomery, AL. He previously held academic and research positions with the University of California, Berkeley; Stanford University, Stanford, CA (on sabbatical leave); National Chiao Tung University, Hsinchu, Taiwan, R.O.C.; and the University of Calcutta. He has published around 300 papers in the areas of switching and automata theory, digital logic design, threshold logic, fault-tolerant computing, built-in self-test with emphasis on embedded cores-based systems-on-chip, microprogramming and microarchitecture, microcode optimization, applied theory of graphs, and combinatorics. He is an Associate Editor of the International Journal of Computers and Applications, a Regional Editor for Information Technology Journal, and a member of the Editorial Board and a Regional Editor for Canada of VLSI Design: An International Journal of Custom-Chip Design, Simulation and Testing. He is a former Associate Editor of SIGDA Newsletter, International Journal of Computer Aided VLSI Design, and International Journal of Parallel and Distributed Systems and Networks. He is a Coeditor (with P. K. Srimani) of Distributed Mutual Exclusion Algorithms (Los Alamitos, CA: IEEE Computer Society Press, 1992) and the Coauthor (with C. L. Sheng) of Digital Logic Design (Norwood, NJ: Ablex, to be published). Dr. Das is a Fellow of the Society for Design and Process Science and of the Canadian Academy of Engineering. He is a Member of the IEEE Computer Society, the IEEE Systems, Man, and Cybernetics Society, the IEEE Circuits and Systems Society, and the IEEE Instrumentation and Measurement Society, and a Member of the Association for Computing Machinery. He received the IEEE Computer Society's Technical Achievement Award in 1996 and its Meritorious Service Award in 1997. He became a Golden Core Member of the IEEE Computer Society in 1998. He has received many Certificates of Appreciation from the IEEE Circuits and Systems Society. He was on the Technical Program Committees and Organizing Committees of many IEEE and non-IEEE international conferences, symposia, and workshops, and also acted as Session Organizer, Session Chair, and Panelist. He became a Delegate of Good People, Good Deeds of the Republic of China in 1981.
He was listed in the Marquis Who's Who biographical directory of the computer graphics industry in 1984. He has been Managing Editor of the IEEE VLSI TECHNICAL BULLETIN since its inception. He was an Executive Committee Member of the IEEE Computer Society Technical Committee on VLSI. He was an Associate Editor of the IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS from 1991 until very recently. He is currently an Associate Editor of the IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT. He is a former Administrative Committee Member of the IEEE Systems, Man, and Cybernetics Society and a former Associate Editor of the IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION SYSTEMS (for two consecutive terms). He was Cochair of the IEEE Computer Society Student Activities Committee for Region 7 (Canada). He was Associate Guest Editor of the IEEE JOURNAL OF SOLID-STATE CIRCUITS Special Issues on Microelectronic Systems. With R. Rajsuman, he was a Co-Guest Editor of the IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT Special Sections on Innovations in VLSI Test Equipments (October 2003) and the Future of Semiconductor Test (October 2005). He was a corecipient of the Rudolph Christian Karl Diesel Best Paper Award of the Society for Design and Process Science for a paper presented at the Fifth Biennial World Conference on Integrated Design and Process Technology, Dallas, TX, 2000, and of the IEEE 2003 Donald G. Fink Prize Paper Award for a paper published in the December 2001 issue of the IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT.

Chittoor V. Ramamoorthy (M'57-SM'76-F'78-LF'93) received two undergraduate degrees in physics and technology from the University of Madras, Madras, India, two graduate degrees in mechanical engineering from the University of California, Berkeley, and the M.S. and Ph.D. degrees in applied mathematics (computer science) from Harvard University, Cambridge, MA, in 1964. His education was supported by the Computer Division of Honeywell Inc., Waltham, MA, a company he was associated with until 1967, ultimately as a Senior Staff Scientist. He later joined the University of Texas, Austin, as a Professor in the Department of Electrical Engineering and Computer Science. After serving as Chairman of the Department, he joined the University of California, Berkeley, in 1972 as a Professor of Electrical Engineering and Computer Sciences, Computer Science Division, a position that he still holds as Professor Emeritus. He supervised more than 70 doctoral students in his career. He has held the Control Data Distinguished Professorship at the University of Minnesota, Minneapolis, and the Grace Hopper Chair at the U.S. Naval Postgraduate School, Monterey, CA. He was also a Visiting Professor at Northwestern University, Evanston, IL, and a Visiting Research Professor at the University of Illinois, Urbana-Champaign. He is a Senior Research Fellow at the ICC Institute of the University of Texas, Austin. He served as Editor-in-Chief of the IEEE TRANSACTIONS ON SOFTWARE ENGINEERING. He is the founding Editor-in-Chief of the IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, which recently published a Special Issue in his honor. He is also the founding Co-Editor-in-Chief of the International Journal of Systems Integration, published by Elsevier North-Holland, NY, and of the Journal for Design and Process Science, published by SDPS, TX. He served in various capacities in the IEEE Computer Society, including as its First Vice President and as a Governing Board Member.
He has served on several advisory boards of the Federal Government and academia, including those of the United States Army, Navy, and Air Force, DOE's Los Alamos National Laboratory, the University of Texas, and the State University System of Florida. He is one of the founding Directors of the International Institute of Systems Integration, Campinas, Brazil, supported by the Federal Government of Brazil, and for several years was a Member of the International Board of Advisors of the Institute of Systems Science, National University of Singapore.

Dr. Ramamoorthy received the Group Award and the Taylor Booth Award for education, the Richard Merwin Award for outstanding professional contributions, and Golden Core recognition from the IEEE Computer Society. He is a recipient of the IEEE Centennial Medal and the IEEE Millennium Medal. He also received the Computer Society's 2000 Kanai-Hitachi Award for pioneering and fundamental contributions in parallel and distributed computing. He is a Fellow of the Society for Design and Process Science, from which he received the R. T. Yeh Distinguished Achievement Award in 1997, and he received the Best Paper Award from the IEEE Computer Society in 1987. Three international conferences have been organized in his honor, and a UC Berkeley Graduate Student Research Award has been established in his name.

Mansour H. Assaf (M'02) received the Honors degree in applied physics from the Lebanese University, Beirut, Lebanon, in 1989 and the B.A.Sc., M.A.Sc., and Ph.D. degrees in electrical engineering from the University of Ottawa, Ottawa, ON, Canada, in 1994, 1996, and 2003, respectively.

From 1994 to 1996, he was a Researcher with the Fault-Tolerant Computing Group of the University of Ottawa. After working with Applications Technology, a subsidiary of Lernout and Hauspie Speech, McLean, VA, in the area of software localization and natural language processing, he joined the Sensing and Modeling Research Laboratory of the University of Ottawa, where he currently works on projects in the fields of human-computer interaction, three-dimensional modeling, and virtual environments. His research interests are in the areas of human-computer interaction and perceptual user interfaces, and in fault diagnosis in digital systems.

Dr. Assaf is a corecipient of the IEEE 2003 Donald G. Fink Prize Paper Award for a paper published in the December 2001 issue of the IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT.
Emil M. Petriu (M'86–SM'88–F'01) is a Professor and University Research Chair in the School of Information Technology and Engineering, University of Ottawa, Ottawa, ON, Canada. His research interests include robot sensing and perception, intelligent sensors, interactive virtual environments, soft computing, and digital integrated circuit testing. He has published more than 200 technical papers, authored two books, edited two other books, and received two patents.

Dr. Petriu is a Fellow of the Canadian Academy of Engineering and of the Engineering Institute of Canada. He is a corecipient of the IEEE 2003 Donald G. Fink Prize Paper Award for a paper published in the December 2001 issue of the IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT and a recipient of the 2003 IEEE Instrumentation and Measurement Society Award. He is Chair of TC-15 Virtual Systems and Co-Chair of TC-28 Instrumentation and Measurement for Robotics and Automation and of TC-30 Security and Contraband Detection of the IEEE Instrumentation and Measurement Society. He is an Associate Editor of the IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT and a Member of the Editorial Board of the IEEE INSTRUMENTATION AND MEASUREMENT MAGAZINE.

Wen-Ben Jone (S'85–M'88–SM'01) was born in Taipei, Taiwan, R.O.C. He received the B.S. degree in computer science and the M.S. degree in computer engineering from National Chiao-Tung University, Hsinchu, Taiwan, in 1979 and 1981, respectively, and the Ph.D. degree in computer engineering and science from Case Western Reserve University, Cleveland, OH, in 1987.

In 1987, he joined the Department of Computer Science, New Mexico Institute of Mining and Technology, Socorro, where he became an Associate Professor in 1992. From 1993 to 2000, he was with the Department of Computer Engineering and Information Science, National Chung-Cheng University, Chiayi, Taiwan. He was a Visiting Research Fellow with the Department of Computer Science and Engineering, Chinese University of Hong Kong, in summer 1997. Since 2001, he has been with the Department of Electrical and Computer Engineering and Computer Science, University of Cincinnati, Cincinnati, OH. He was a Visiting Scholar with the Institute of Information Science, Academia Sinica, Taiwan, in summer 2002. His research interests include VLSI design for testability, built-in self-testing, memory testing, high-performance circuit testing, MEMS testing and repair, and low-power circuit design. He has published more than 100 papers and has received one U.S. patent. He has served as a reviewer in his research areas for various technical journals and conferences. He served on the Program Committee of the VLSI Design/CAD Symposium (1993–1997, Taiwan), was General Chair of the 1998 VLSI Design/CAD Symposium, and served on the Program Committees of the 1995, 1996, and 2000 Asian Test Conference, the 1995–1998 Asia and South Pacific Design Automation Conference, the 1998 International Conference on Chip Technology, the 2000 International Symposium on Defect and Fault Tolerance in VLSI Systems, and the 2002 and 2003 Great Lakes Symposium on VLSI.

Dr. Jone is a member of the IEEE Computer Society Test Technology Technical Committee. He is a corecipient of the IEEE 2003 Donald G. Fink Prize Paper Award for a paper published in the December 2001 issue of the IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT. He is listed in Marquis Who's Who in the World (1998, 2001).
He received the Best Thesis Award from the Chinese Institute of Electrical Engineering in 1981.

Mehmet Sahinoglu (S'78–M'81–SM'93) received the B.S. degree from METU, Ankara, Turkey, and the M.S. degree from UMIST, U.K., both in electrical and computer engineering, and the Ph.D. degree in electrical engineering and statistics from Texas A&M University, College Station.

He is the Eminent Scholar for the Endowed Chair of the Alabama Commission of Higher Education and has been Chairman of the Computer and Information Science Department at TSUM since 1999. Following 20 years with METU, he served as the first Dean and founding Department Chair of the College of Arts and Sciences, DEU, Izmir, Turkey (1992–1997). He was Chief Reliability Consultant to the Turkish Electricity Authority from 1982 to 1997 and became an Emeritus Professor of METU and DEU in 2000. He has taught at Purdue University, West Lafayette, IN, and Case Western Reserve University, Cleveland, OH, as a Fulbright and a NATO scholar, respectively. He is credited with the Compound Poisson Software Reliability Model, which accounts for multiple (clumped) failures in predicting the total number of failures at the end of a mission time, and with MESAT, a Compound Poisson stopping-rule algorithm for cost-effective digital software testing. He is jointly responsible (with D. Libby) for the original derivation of the G3B (Generalized Three-Parameter Beta) pdf in 1981, known since 1999 as the Sahinoglu-Libby pdf.

Dr. Sahinoglu is a Fellow of the Society for Design and Process Science, a member of the ACM, AFCEA, and ASA, and an elected member of the ISI.