IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, VOL. 54, NO. 6, DECEMBER 2005

Fault Simulation and Response Compaction in Full Scan Circuits Using HOPE

Sunil R. Das, Life Fellow, IEEE, Chittoor V. Ramamoorthy, Life Fellow, IEEE, Mansour H. Assaf, Member, IEEE, Emil M. Petriu, Fellow, IEEE, Wen-Ben Jone, Senior Member, IEEE, and Mehmet Sahinoglu, Senior Member, IEEE

Abstract: This paper presents results on fault simulation and response compaction for the ISCAS 89 full scan sequential benchmark circuits using HOPE, a fault simulator developed for synchronous sequential circuits that employs parallel fault simulation with heuristics to reduce simulation time, in the context of designing space-efficient support hardware for built-in self-testing of very large-scale integrated circuits. The techniques realized in this paper take advantage of the basic ideas of sequence characterization previously developed and utilized by the authors for response data compaction in the case of the ISCAS 85 combinational benchmark circuits, using the simulation programs ATALANTA, FSIM, and COMPACTEST, under conditions of both stochastic independence and dependence of single and double line errors in the selection of specific gates for the merger of a pair of output bit streams from a circuit under test (CUT). These concepts are then applied to designing efficient space compression networks for full scan sequential benchmark circuits using the fault simulator HOPE.

Index Terms: Built-in self-test (BIST), circuit under test (CUT), detectable error probability estimates, fault simulation using HOPE, Hamming distance, optimal sequence mergeability, response compaction, sequence weights, single stuck-line faults, space compactor.

Manuscript received November 11, 2003; revised December 7. This work was supported in part by the Natural Sciences and Engineering Research Council of Canada under Grant A.
S. R. Das is with the School of Information Technology and Engineering, University of Ottawa, Ottawa, ON K1N 6N5, Canada, and with the Department of Computer and Information Science, Troy State University-Montgomery, Montgomery, AL USA.
C. V. Ramamoorthy is with the Department of Electrical Engineering and Computer Sciences, Computer Science Division, University of California, Berkeley, CA USA.
M. H. Assaf and E. M. Petriu are with the School of Information Technology and Engineering, University of Ottawa, Ottawa, ON K1N 6N5, Canada.
W.-B. Jone is with the Department of Electrical and Computer Engineering and Computer Science, University of Cincinnati, Cincinnati, OH USA.
M. Sahinoglu is with the Department of Computer and Information Science, Troy State University-Montgomery, Montgomery, AL USA.

Fig. 1. Block diagram of the BIST environment.

Fig. 2. Simulation results of the ISCAS 89 full scan sequential benchmark circuits using HOPE under stochastic independence of single and double line errors using compacted input test sets.

I. INTRODUCTION

WITH the continued growth of the semiconductor industry and the development of extremely complex systems with ever higher levels of integration density, finding better and more efficient methods of testing that ensure the reliable operation of chips, a mainstay of many of today's sophisticated digital systems, has become the single most pressing issue for design and test engineers.
The very concept of testing has broad applicability, and finding highly effective test techniques that guarantee correct system performance has been gaining importance [1]-[57]. Consider, for example, medical test and diagnostic instruments, airplane controllers, and other safety-critical systems that have to be tested before use (off-line testing) and during use (on-line testing). Another application where failure can have severe economic consequences is real-time transaction processing. The testing process in all these circumstances must be fast and effective to make sure that such systems operate correctly. In general, the cost of testing integrated circuits (ICs) is rather prohibitive; it ranges from 35% to 55% of their total manufacturing cost [7]. Testing a chip is also time consuming, taking up to about one-half of the total design cycle time [8]. The amount of time available for manufacturing, testing, and marketing a product, on the other hand, continues to decrease. Moreover, as a result of global competition, customers demand lower cost and better quality products. Therefore, in order to achieve this superior quality at lower cost, testing techniques need to be improved.
Fig. 3. Simulation results of the ISCAS 89 full scan sequential benchmark circuits using HOPE under stochastic independence of single and double line errors using compacted input test sets.

Fig. 4. Simulation results of the ISCAS 89 full scan sequential benchmark circuits using HOPE under stochastic independence of single and double line errors using pseudorandom testing.

Fig. 5. Simulation results of the ISCAS 89 full scan sequential benchmark circuits using HOPE under stochastic independence of single and double line errors using pseudorandom testing.

Fig. 6. Simulation results of the ISCAS 89 full scan sequential benchmark circuits using HOPE under stochastic dependence of single and double line errors using compacted input test sets.

Fig. 7. Simulation results of the ISCAS 89 full scan sequential benchmark circuits using HOPE under stochastic dependence of single and double line errors using compacted input test sets.

Fig. 8. Simulation results of the ISCAS 89 full scan sequential benchmark circuits using HOPE under stochastic dependence of single and double line errors using pseudorandom testing.

The conventional testing techniques of digital circuits require the application of test patterns generated by a test pattern generator (TPG) to the circuit under test (CUT) and comparison of the responses produced with known correct circuit responses. However, for large circuits, because of the higher storage requirements for the fault-free responses, the test procedures become very expensive, and hence alternative approaches are sought to minimize the amount of needed storage. Built-in self-testing (BIST) is a design methodology that provides the capability of solving many of the problems otherwise encountered in conventional testing of digital systems.
TABLE I FAULT COVERAGE FOR ISCAS 89 BENCHMARK CIRCUITS USING HOPE (COMPACTED INPUT TEST SETS)

Fig. 9. Simulation results of the ISCAS 89 full scan sequential benchmark circuits using HOPE under stochastic dependence of single and double line errors using pseudorandom testing.

It combines the concepts of both built-in test (BIT) and self-test (ST) in one. In BIST, test generation, test application, and response verification are all accomplished through built-in hardware, which allows different parts of a chip to be tested in parallel, thereby reducing the required testing time, besides eliminating the need for external test equipment. As the cost of testing is becoming the major component of the manufacturing cost of a new product, BIST thus tends to reduce manufacturing, test, and maintenance costs through improved diagnosis. Several companies such as Motorola, AT&T, IBM, AMD, and Intel have incorporated BIST in many of their products [10], [12], [19]-[21]. AT&T, for example, has incorporated BIST into more than 200 of their chips. The three large programmable logic arrays and the microcode read-only memory (ROM) in the Intel microprocessor were built-in self-tested [56]. The general-purpose microprocessor chip Alpha AXP21164 and a Motorola microprocessor were also tested using BIST techniques [12], [56]. More recently, Intel, for its Pentium Pro architecture microprocessor, with its unique requirements of meeting very high production goals, superior performance standards, and impeccable test quality, put strong emphasis on its design-for-test (DFT) direction [21]. A set of constraints, however, limits Intel's ability to tenaciously explore DFT and test generation techniques, i.e., full or partial scan or scan-based BIST [4]. AMD's K6 processor is a reduced instruction set computer (RISC) core named the enhanced RISC86 microarchitecture [20]. The K6 processor incorporates BIST into its DFT process.
TABLE II FAULT COVERAGE FOR ISCAS 89 BENCHMARK CIRCUITS USING HOPE (RANDOM TESTING; INITIAL RANDOM NUMBER GENERATOR SEED = 999)

Each RAM array of the K6 processor has its own BIST controller. BIST executes simultaneously on all of the arrays for a predefined number of clock cycles that ensures completion for the largest array. Hence, BIST execution time depends on the size of the largest array [4]. AMD uses a commercial automatic test pattern generation tool to create scan test patterns for stuck-at faults in the processor. The DFT framework for a 500-MHz IBM S/390 microprocessor utilizes a wide range of tests and techniques to ensure superb reliability of components within a system [4]. Register arrays are tested through the scan chain, while nonregister memories are tested with programmable RAM BIST. Hewlett-Packard's PA8500 is a superscalar processor that achieves fast but thorough test through its cache test hardware's ability to perform March tests, which are an effective way to detect several kinds of functional faults [19]. Digital's Alpha processor combines both structured and ad hoc DFT solutions, for which a combination of hardware and software BIST was adopted [4]. Sun Microsystems' UltraSparc processor incorporates several DFT constructs as well; achieving its quality and performance goals with reduced chip area conflicts with the design requirement of being easy to debug, test, and manufacture [4]. BIST is also widely used to test embedded regular structures that exhibit a high degree of periodicity, such as memory arrays (SRAMs, ROMs, FIFOs, and registers). These types of circuits do not require complex extra hardware for test generation and response compaction. Also, including BIST in these circuits can guarantee high fault coverage with zero aliasing. Unlike regular circuits, random-logic circuits cannot be adequately tested with BIST techniques alone, since generating adequate on-chip test sets using simple hardware is a difficult task to accomplish. Moreover, since test responses generated by random-logic circuits seldom exhibit regularity, it is extremely difficult to ensure zero aliasing compaction. Therefore, random-logic circuits are most usually tested using a combination of BIST, scan design techniques, and external test equipment. A typical BIST environment, as shown in the block diagram representation of Fig. 1, uses a test pattern generator (TPG) that sends its outputs to a circuit under test (CUT), and the output streams from the CUT are fed into a test data analyzer. A fault is detected if the observed response differs from the response of the fault-free circuit. The test data analyzer is comprised of a response compaction unit (RCU), storage for the fault-free responses of the CUT, and a comparator.
TABLE III FAULT COVERAGE FOR ISCAS 89 BENCHMARK CIRCUITS USING HOPE (COMPACTED INPUT TEST SETS)

In order to reduce the amount of data represented by the fault-free and faulty CUT responses, data compression is used to create signatures (short binary sequences) from the CUT and its corresponding fault-free circuit. Signatures are compared, and a fault is detected if a match does not occur. BIST techniques may be used during normal functional operating conditions of the unit under test (on-line testing), as well as when a system is not carrying out its normal functions (off-line testing). In cases where detecting real-time errors is not that important, systems, boards, and chips can be tested in off-line BIST mode. BIST techniques use pseudorandom or pseudoexhaustive TPGs, or on-chip storage of reduced test sets. These days, exhaustive testing of logic circuits is seldom used, since only a few test patterns are needed to ensure full fault coverage for single stuck-line faults [12]. Reduced-pattern test sets can be generated using existing algorithms such as FAN and others. Built-in test generators can often generate such reduced test sets at low cost, making BIST techniques suitable for on-chip self-testing. The primary concern of the current paper is the general response compaction process of built-in self-testing techniques, which translates into a process of reducing the test response from the CUT to a signature. Instead of comparing bit by bit the fault-free responses to the observed outputs of the CUT, as in conventional testing methods, the observed signature is compared to the correct one, thereby reducing the storage needed for the correct circuit responses. The response compaction in BIST is carried out through a space compaction unit followed by time compaction. In general, m input sequences coming from a CUT are fed into a space compactor, providing n output streams of bits such that n < m; most often, the test responses are compressed into only one sequence (n = 1). Space compaction brings a solution to the problem of achieving high-quality built-in self-testing of complex chips without the necessity of monitoring a large number of internal test points, thereby reducing both testing time and area overhead by merging the test sequences coming from these internal test points into a single stream of bits. This single bit stream of length L is eventually fed into a time compactor, and finally a shorter sequence of length S (S << L) is obtained at the output. The extra logic representing the compaction circuit, however, must be as simple as possible, so as to be easily embedded within the CUT, and should not introduce signal delays that affect either the test execution time or the normal functionality of the circuit being tested. Moreover, the length of the signature must be as short as it can be in order to minimize the amount of memory needed to store the fault-free response signatures.
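As a point of reference for the compaction flow just described, the short Python sketch below (not taken from the paper; all bit streams and the choice of XOR merging plus ones counting are purely illustrative assumptions) merges the m output bits observed for each test vector into a single stream and then reduces that stream to a short signature that is compared with a stored fault-free signature.

```python
# Minimal sketch of a BIST test data analyzer, under illustrative assumptions:
# m CUT output bits per test vector are space-compacted into one bit by an XOR
# merge, and the resulting stream is time-compacted by a ones count; the final
# signature is compared with the stored fault-free signature.
from functools import reduce

def space_compact(responses):
    """Merge the m output bits observed for each test vector into a single bit."""
    return [reduce(lambda a, b: a ^ b, bits) for bits in responses]

def time_compact(stream):
    """Reduce the single bit stream of length L to a short signature (ones count)."""
    return sum(stream)

# Hypothetical responses: one tuple of m = 3 output bits per applied test vector.
fault_free = [(0, 1, 1), (1, 1, 0), (0, 0, 1), (1, 0, 1)]
observed   = [(0, 1, 1), (1, 0, 0), (0, 0, 1), (1, 0, 1)]   # one output bit in error

reference = time_compact(space_compact(fault_free))
signature = time_compact(space_compact(observed))
print("fault detected" if signature != reference else "no error observed")
```

Any real space or time compactor (a parity tree, a counter, or an LFSR-based signature register, as discussed in Section II) would take the place of the two toy functions above.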
TABLE IV FAULT COVERAGE FOR ISCAS 89 BENCHMARK CIRCUITS USING HOPE (RANDOM TESTING; INITIAL RANDOM NUMBER GENERATOR SEED = 999)

Also, signatures derived from faulty output responses and their corresponding fault-free signatures should not be the same, which unfortunately is not always the case. A fundamental problem with compaction techniques is error masking or aliasing [7], [49], [56], which occurs when the signatures from faulty output responses map into the fault-free signatures; the fault-free signatures are usually calculated by identifying a good circuit, applying the test patterns to it, and then having the compaction unit generate the fault-free references. Aliasing causes a loss of information, which affects the test quality of BIST and reduces the fault coverage (the number of faults detected, after compaction, over the total number of faults injected). Several methods have been suggested in the literature for computing the aliasing probability. The exact computation of this aliasing probability is known to be an NP-hard problem [58]. In practice, high fault coverage, over 99%, is generally required, and thus any space compression technique that preserves a larger percentage of the error coverage information is considered worthy of investigation. This paper specifically deals with the general problem of designing space-efficient support hardware for BIST of full scan sequential circuits using the fault simulation program HOPE [60]. HOPE is a fault simulator for synchronous sequential circuits developed at the Virginia Polytechnic Institute and State University; it employs parallel fault simulation with several heuristics to reduce fault simulation time, besides providing many advantages over existing simulators. The compaction techniques used in this paper with the simulator HOPE take advantage of certain inherent properties of the test responses of the CUT, together with knowledge of their failure probabilities. A major objective in space compaction is to provide methods that are simple, suitable for on-chip self-testing, require low area overhead, and have little adverse impact on the overall CUT performance. With that objective in perspective, compaction techniques were developed in the paper that take advantage of some well-known concepts, i.e., those of Hamming distance, sequence weights, and derived sequences as utilized by the authors earlier in sequence characterization [46], [49], [57], in conjunction with the probabilities of error occurrence, for optimal mergeability of a pair of output bit streams from the CUT. The proposed techniques guarantee simple design and achieve a high measure of fault coverage for single stuck-line faults with low CPU simulation time and acceptable area overhead, as is evident from extensive simulation runs on the ISCAS 89 full scan sequential benchmark circuits with the simulator HOPE, under conditions of both stochastic independence and dependence of single and double line output errors.
TABLE V FAULT COVERAGE FOR ISCAS 89 BENCHMARK CIRCUITS USING HOPE (COMPACTED INPUT TEST SETS)

II. BRIEF OVERVIEW OF TEST COMPACTION TECHNIQUES

The choice of a compression technique is mainly influenced by hardware considerations and by the loss of effective fault coverage due to fault masking or aliasing. In this section, we first briefly review some of the important test compaction techniques in space for BIST that have been proposed in the literature. We describe these while concentrating only on some of their relevant properties, such as area overhead, fault coverage, and error masking probability. There also exist a number of efficient time compaction schemes, including ones counting, syndrome testing, transition counting, signature analysis, and others, which are also considered. Some of the common space compression techniques include parity tree space compaction, hybrid space compression, dynamic space compression, quadratic functions compaction, programmable space compaction, and cumulative balance testing. The parity tree compactor circuits [36], [39], [46], [49], [50] are composed of only XOR gates. An XOR gate has very good signal-to-error propagation properties that are quite desirable for space compression. Functions realized by parity tree compactors are of the form f = x1 ⊕ x2 ⊕ ... ⊕ xn. The parity tree space compactor propagates all errors that appear on an odd number of its inputs; thereby, errors that appear on an even number of parity tree circuit inputs are masked. As experimentally demonstrated, most single stuck-line faults are detected in parity tree space compaction using pseudorandom input test patterns and deterministic reduced test sets [49], [57]. The hybrid space compression (HSC) technique, originally proposed by Li and Robinson [38], uses AND, OR, and XOR logic gates as output compaction tools to compress the multiple outputs of a CUT into a single line. The compaction tree is constructed based on the detectable error probability estimates. A modified version of the HSC method, called dynamic space compression (DSC), was subsequently proposed by Jone and Das [41]. Instead of assigning static values to the probabilities of single errors and double errors, the DSC method dynamically estimates those values based on the CUT structure during the computation process. The values of p1 and p2 are determined based on the number of single lines and shared lines connected to an output.
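The odd/even error propagation behavior of the parity tree compactor described above can be seen in a few lines; the bit patterns below are hypothetical and serve only to show that an error on an odd number of input lines changes the compacted output, while an error on an even number of lines is masked.

```python
# Illustrative sketch (hypothetical bit patterns): a parity tree computes the XOR
# of all CUT outputs, so errors on an odd number of lines propagate and errors on
# an even number of lines are masked (aliasing).
from functools import reduce

def parity_tree(bits):
    """XOR of all CUT output bits observed for one test vector."""
    return reduce(lambda a, b: a ^ b, bits)

fault_free = (1, 0, 1, 0)
one_error  = (1, 1, 1, 0)   # one line in error  -> odd  -> propagated
two_errors = (1, 1, 0, 0)   # two lines in error -> even -> masked

print(parity_tree(fault_free))   # 0
print(parity_tree(one_error))    # 1, differs from the fault-free parity
print(parity_tree(two_errors))   # 0, same as the fault-free parity (error masked)
```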
TABLE VI FAULT COVERAGE FOR ISCAS 89 BENCHMARK CIRCUITS USING HOPE (RANDOM TESTING; INITIAL RANDOM NUMBER GENERATOR SEED = 999)

A general theory to predict the performance of the space compression techniques was also developed. Experimental results show that the information loss, with syndrome counting used as the time compactor, is between 0% and 12.7%. DSC was later improved by using some circuit-specific information to calculate the probabilities [42]. However, neither HSC nor DSC provides an adequate measure of fault coverage, because they both rely on estimates of the error detection probabilities. Quadratic functions compaction (QFC) uses quadratic functions to construct the space compaction circuits and has been shown to reduce aliasing errors [40]. In QFC, the observed output responses of the CUT are processed and compressed in a serial fashion based on a quadratic function evaluated over successive blocks of the response stream. A new approach termed programmable space compaction (PSC) has recently been proposed for designing low-cost space compactors that provide high fault coverage [37]. In PSC, circuit-specific space compactors are designed to increase the likelihood of error propagation. However, PSC does not guarantee zero aliasing. A compaction circuit that minimizes aliasing and has the lowest cost can only be found by exhaustively enumerating all m-input Boolean functions, where m represents the number of primary outputs of the CUT. A new class of space compactors based on parity tree circuits was recently proposed by Chakrabarty and Hayes [56]. The method is based on multiplexed parity trees (MPTs) and introduces zero aliasing. Multiplexed parity trees perform space compaction of test responses by combining the error propagation properties of multiplexers and parity trees through multiple time-steps. The authors show that the associated hardware overhead is moderate and that very high fault coverage is obtained for faults in the CUT, including even those in the compactor. Quite recently, a new space compaction approach for IP cores based on the use of orthogonal transmission functions was suggested in [47], which provides zero aliasing for all errors with an optimal compaction ratio. Other approaches were given in [52] and [53] that are intended to reduce test time and test data volume and to improve testability with high compaction ratios, and they could be applicable to several industrial circuits. We now briefly examine some time compaction methods, such as ones counting, syndrome testing, transition counting, signature analysis, and others. Ones counting [24] uses as its signature the number of ones in the binary circuit response stream. The hardware that represents the compaction unit consists of a simple counter and is independent of the CUT; it only depends on the nature of the test response. Signature values do not depend on the order in which the input test patterns are applied to the CUT. In syndrome counting [27], all 2^n input patterns are exhaustively applied to an n-input combinational circuit.
TABLE VII FAULT COVERAGE FOR ISCAS 89 BENCHMARK CIRCUITS USING HOPE (COMPACTED INPUT TEST SETS; 3 INDICATES FAULTS WERE INJECTED INTO CUT + COMPACTOR)

The syndrome, which is the normalized number of ones in the response stream, is defined as S = K/2^n, with K being the number of minterms of the function implemented by the single-output CUT and n the number of its inputs. Any switching function can be so realized that all of its single stuck-line faults are syndrome-testable. Transition counting [25] counts the number of times the output bit stream changes from one to zero and vice versa. In transition counting, the signature (the transition count) is at most L - 1, with L being the length of the response stream. The error masking probability takes high values when the signature value is close to L/2 and low values when it is close to zero or L. In Walsh spectral analysis [28], [29], switching functions are represented by their spectral coefficients, which are compared to the known correct coefficient values. In a sense, in this method, the truth table of the given switching function is basically verified. The process of collecting and comparing a subset of the complete set of Walsh functions is described as a mechanism for data compaction. The use of spectral coefficients promises a higher percentage of error coverage, whereas the higher area overhead needed to generate them is deemed a disadvantage. In parity checking [31], the response bit stream coming from a circuit under test is reduced from a multitude of output data to a signature of length 1 bit. The single-bit signature has a value that equals the parity of the test response sequence. Parity checking detects all errors involving an odd number of bits, while faults that give rise to an even number of error bits are not detected. This method is relatively ineffective, since a large number of possible response bit streams from a faulty circuit will result in the same parity as that of the correct bit stream. All single stuck-line faults in fanout-free circuits are detected by the parity check technique. These obvious shortcomings of parity checking are eliminated in methods devised based on the single-output parity bit signature [31] and the multiple-output parity bit signature [32], [34].
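As a hedged illustration of the simpler time compaction signatures surveyed above (ones counting, transition counting, parity checking, and the syndrome), the sketch below computes each of them for one hypothetical response stream; the syndrome value is meaningful only when the stream corresponds to an exhaustive application of all 2^n input patterns, which is assumed here with n = 3.

```python
# Time compaction signatures for a single hypothetical response stream R of length L.
def ones_count(stream):            # ones counting [24]
    return sum(stream)

def transition_count(stream):      # transition counting [25]
    return sum(1 for a, b in zip(stream, stream[1:]) if a != b)

def parity_bit(stream):            # parity checking [31]: 1-bit signature
    return sum(stream) % 2

def syndrome(stream):              # syndrome [27]: assumes all 2**n patterns were applied
    return sum(stream) / len(stream)

R = [0, 1, 1, 0, 1, 0, 0, 1]       # hypothetical response stream, L = 8 = 2**3
print(ones_count(R), transition_count(R), parity_bit(R), syndrome(R))
# -> 4 5 0 0.5
```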
TABLE VIII FAULT COVERAGE FOR ISCAS 89 BENCHMARK CIRCUITS USING HOPE (RANDOM TESTING; INITIAL RANDOM NUMBER GENERATOR SEED = 999)

Signature analysis is probably the most popular time compaction technique currently available [26]. It uses linear feedback shift registers (LFSRs) consisting of flip-flops and exclusive-OR (XOR) gates. The signature analysis technique is based on the concept of cyclic redundancy checking (CRC). LFSRs are used for generating pseudorandom input test patterns and for response compaction as well. The nature of the generated sequence patterns is determined by the LFSR's characteristic polynomial, as defined by its interconnection structure. A test-input sequence, represented as a polynomial P(x), is fed into the signature analyzer and is divided by the characteristic polynomial G(x) of the signature analyzer's LFSR. The remainder R(x) obtained by dividing P(x) by G(x) over the Galois field GF(2), so that P(x) = Q(x)G(x) + R(x), represents the state of the LFSR, with Q(x) being the corresponding quotient. In other words, R(x) represents the observed signature. Signature analysis involves comparing the observed signature to a known fault-free signature. An error is detected if these two signatures differ. Suppose that P(x) is the correct response and P'(x) = P(x) + E(x) is the faulty one, where E(x) is an error polynomial; it can be shown that aliasing occurs whenever E(x) is a multiple of G(x). The masking probability in this case is estimated as 1/2^k, where k is the number of flip-flop stages in the LFSR. When k is larger than 16, the aliasing probability is negligible. Many commercial applications have reported good success with LFSR-implemented signature analysis. Different methods for computing and reducing the aliasing probability in signature analysis have been proposed, e.g., the signature analysis model proposed by Williams et al. [30], which uses Markov chains and derives an upper bound on the aliasing probability in terms of the test length and the probability of an error occurring at the output of the CUT. Another approach to the computation of the aliasing probability is presented in [33]. An error pattern in signature analysis causes aliasing if and only if it is a codeword in the cyclic code generated by the LFSR's characteristic polynomial. Unlike other methods, the fault coverage in signature analysis may be improved without changing the test set; this can be done by varying the length of the LFSR or by using a different characteristic polynomial. As demonstrated in [35], for short test lengths, signature analysis detects all single-bit errors. However, there is no known theory that characterizes fault detection in signature analysis. Testing using two different compaction schemes in parallel has also been extensively investigated. The combination of signature analysis and transition counting has been analyzed [49], and it is shown that using both techniques simultaneously leads to a very small overlap in their error masking. As a result of using two different compaction schemes in parallel, the fault coverage is improved, while the fault signature size and hardware overhead are greatly increased.
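The polynomial division that underlies signature analysis can also be sketched directly. In the example below, the characteristic polynomial G(x) = x^4 + x + 1 and the three response streams are hypothetical; the second stream differs from the fault-free one by an error polynomial that is not a multiple of G(x) and is therefore detected, while the third differs by E(x) = G(x) itself and hence aliases to the fault-free signature.

```python
# Minimal sketch of signature analysis: the response stream (most significant bit
# first) is divided by the characteristic polynomial G(x) over GF(2); the k-bit
# remainder is the LFSR state, i.e., the observed signature.
def lfsr_signature(stream, g_poly=0b10011, k=4):   # G(x) = x^4 + x + 1 (hypothetical)
    reg = 0
    for bit in stream:
        reg = (reg << 1) | bit
        if (reg >> k) & 1:
            reg ^= g_poly          # subtract (XOR) the characteristic polynomial
    return reg

fault_free = [1, 0, 1, 1, 0, 1, 0, 0]
detected   = [1, 0, 1, 1, 0, 1, 0, 1]   # E(x) is not a multiple of G(x)
aliased    = [1, 0, 1, 0, 0, 1, 1, 1]   # E(x) = G(x), a multiple of G(x)

print(lfsr_signature(fault_free))   # 10
print(lfsr_signature(detected))     # 11 -> signature differs, fault detected
print(lfsr_signature(aliased))      # 10 -> same signature as fault-free, fault masked
```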
TABLE IX FAULT COVERAGE FOR ISCAS 89 BENCHMARK CIRCUITS USING HOPE (COMPACTED INPUT TEST SETS)

III. DESIGNING COMPACTION TREES BASED ON SEQUENCE CHARACTERIZATION AND STOCHASTIC INDEPENDENCE AND DEPENDENCE OF LINE ERRORS

The principal idea in space compaction is to compress the functional test outputs of the CUT, possibly into one single test output line, to derive the CUT signature without sacrificing too much information in the process. Generally, space compression has been accomplished using XOR gates in cascade or in a tree structure. We adopt a combination of both cascade and tree structures (cascade-tree) for our framework, with AND (NAND), OR (NOR), and XOR (XNOR) operators. The logic function selected to build the compaction tree is determined solely by the characteristics of the sequences that are inputs to the gates, based on optimal mergeability criteria developed earlier by the authors [46], [49], [57]. The basic theme of the proposed approaches is to select appropriate logic gates to merge two candidate output lines of the CUT under conditions of stochastic independence and dependence of single and double line errors, using sequence characterization and other concepts introduced by the authors. Criteria for selecting a number of CUT output lines for optimal generalized sequence mergeability were also developed and utilized in the design of space compression networks, based on stochastic independence of multiple line errors as well as on stochastic dependence of multiple line errors using the concept of generalized detectable or missed error probability estimates [46], [50], [57], with extensive simulations conducted with ATALANTA, FSIM, and COMPACTEST [59], [61], [62]; however, optimal generalized sequence mergeability is not the concern of this paper and as such is not discussed. In the following, the mathematical basis of the approaches is briefly given, with the introduction of appropriate notations and terminologies.

A. Hamming Distance, Sequence Weights, and Derived Sequences

Let (Ri, Rj) represent a pair of output sequences of a CUT, each of length L, where the length L is the number of bit positions in Ri and Rj. Let d(Ri, Rj) represent the Hamming distance between Ri and Rj (the number of bit positions in which Ri and Rj differ).
Definition: The first-order one-weight, denoted by w1(R), of a sequence R is the number of ones in the sequence. Similarly, the first-order zero-weight, denoted by w0(R), of a sequence R is the number of zeroes in the sequence.
Example: Consider an output sequence pair (Ri, Rj) in which the length of both output streams is eight (L = 8). The Hamming distance d(Ri, Rj) and the first-order one-weights and zero-weights of Ri and Rj are then obtained by direct counting over the eight bit positions.
TABLE X FAULT COVERAGE FOR ISCAS 89 BENCHMARK CIRCUITS USING HOPE (RANDOM TESTING; INITIAL RANDOM NUMBER GENERATOR SEED = 999)

Property: For any sequence R of length L, it can be shown that w1(R) + w0(R) = L; for the length-eight streams of the example above, the one-weight and zero-weight of each sequence therefore sum to eight.
Definition: Consider an output sequence pair (Ri, Rj) of equal length. Then the sequence pair derived by discarding the bit positions in which the two sequences differ (indicating the discarded positions by a dash) is called the second-order derived sequence pair. In this paper, we will denote the derived sequence of a sequence R by D, its first-order one-weight by w1(D), and its first-order zero-weight by w0(D).
Example: For the pair (Ri, Rj) given in the example above, the derived pair (Di, Dj) is obtained by discarding the d(Ri, Rj) bit positions in which Ri and Rj differ, and the first-order one-weights and zero-weights of the derived pair are obtained by counting over the remaining positions.
Property: For any second-order derived sequence pair, we have w1(Di) = w1(Dj) and w0(Di) = w0(Dj), since the bit positions retained in Di and Dj are, by construction, identical. By this property, when no ambiguity arises, we will denote the one- and zero-weights of the derived sequence pair simply by w1 and w0, respectively. The length of the derived sequence pair will be denoted by Ld, where Ld = L - d(Ri, Rj). Also, since the Hamming distance between Di and Dj is always zero, we will simply use D to denote the derived pair.
Property: For every distinct pair of output sequences at the output of a CUT, the corresponding derived pair of equal length is, in general, distinct. Two derived sequence pairs may have the same length but still be distinct and not identical.
Property: Two derived sequence pairs Dij and Dkl of original output stream pairs (Ri, Rj) and (Rk, Rl) having the same length are identical only when they agree bit for bit; equality of length alone does not imply that Dij and Dkl are identical.
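Since the concrete bit streams of the paper's own example are not reproduced above, the following small sketch uses two hypothetical length-8 streams to illustrate the Hamming distance, the first-order weights, the derived pair, and the properties w1(R) + w0(R) = L and Ld = L - d stated in this subsection.

```python
# Worked illustration with hypothetical length-8 output streams Ri and Rj.
def hamming(ri, rj):
    return sum(a != b for a, b in zip(ri, rj))

def one_weight(r):                 # first-order one-weight w1(R)
    return sum(r)

def zero_weight(r):                # first-order zero-weight w0(R)
    return len(r) - sum(r)

def derived_pair(ri, rj):
    """Second-order derived pair: drop the bit positions where Ri and Rj differ."""
    kept = [(a, b) for a, b in zip(ri, rj) if a == b]
    return [a for a, _ in kept], [b for _, b in kept]

Ri = [1, 0, 1, 1, 0, 0, 1, 0]
Rj = [1, 1, 1, 0, 0, 0, 1, 1]
d = hamming(Ri, Rj)                                     # d(Ri, Rj) = 3
Di, Dj = derived_pair(Ri, Rj)                           # identical sequences of length Ld = L - d = 5
assert one_weight(Ri) + zero_weight(Ri) == len(Ri)      # w1(R) + w0(R) = L
assert Di == Dj and len(Di) == len(Ri) - d              # Ld = L - d, and w1(Di) = w1(Dj)
print(d, one_weight(Ri), one_weight(Rj), one_weight(Di), zero_weight(Di))   # 3 4 5 3 2
```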
TABLE XI FAULT COVERAGE FOR ISCAS 89 BENCHMARK CIRCUITS USING HOPE (COMPACTED INPUT TEST SETS)

Consider R1, R2, R3, and R4 as four output sequence streams of a CUT. Let (R1, R2) and (R3, R4) be two distinct output pairs and D12 and D34 their corresponding derived sequence pairs, respectively, such that the two derived pairs are of the same length. Both of the derived sequence pairs have the same length, but they are not identical. However, in general, it is not expected that any two distinct pairs of sequences at the output of a CUT will be identical, and hence the possibility of the corresponding derived pairs being identical is also remote. The concepts of one-weight and zero-weight extend readily to more than two sequences, but since those extensions are not used in this paper, their discussion is omitted.

B. Optimal Pairwise Mergeability and Gate Selection

In this section, we briefly summarize the key results concerning optimal pairwise mergeability of response data at the CUT output in the design of space compactors. These are provided in the form of certain theorems without proofs, under conditions of stochastic dependence of line errors; the details can be found in [49] and [57]. In the case of stochastic dependence of line errors at the CUT output, we can assign distinct probabilities of error occurrence to different lines. The gate selection is primarily based on optimal mergeability criteria established utilizing the properties of Hamming distance, sequence weights, and derived sequences, together with the concept of the detectable error probability estimate [38] for a two-input logic function, under the condition of stochastic dependence of single and double line errors at the output of a CUT.

C. Effects of Error Probabilities in Selection of Gates for Optimal Merger

Li and Robinson [38] defined the detectable error probability estimate d(G) for a two-input logic function G, given two input sequences of length L, as d(G) = p1 n1(G) + p2 n2(G), where p1 is the probability of a single error effect being felt at the output of the CUT, p2 is the probability of a double error effect being felt at the output of the CUT, n1(G) is the number of single line errors felt at the output of gate G if gate G is used for merger, and n2(G) is the number of double line errors felt at the output of gate G if gate G is used for merger.
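One plausible reading of this estimate, sketched below with hypothetical streams and probabilities, is to count, for each candidate two-input gate, the single-line and double-line error patterns that would change the merged output, weight the counts by p1 and p2, and keep the gate with the larger estimate; the exact expressions, and the threshold conditions of the theorems that follow, are those of [38], [49], and [57].

```python
# Hedged sketch of pairwise gate selection based on detectable error probability
# estimates; the estimate used here, d(G) = p1*n1(G) + p2*n2(G), is one reading of
# the Li-Robinson formulation, and the streams and probabilities are hypothetical.
GATES = {
    "AND": lambda a, b: a & b,
    "OR":  lambda a, b: a | b,
    "XOR": lambda a, b: a ^ b,
}

def detectable_estimate(gate, ri, rj, p1, p2):
    n1 = n2 = 0
    for a, b in zip(ri, rj):
        good = gate(a, b)
        n1 += (gate(a ^ 1, b) != good) + (gate(a, b ^ 1) != good)   # single line errors felt at the gate output
        n2 += gate(a ^ 1, b ^ 1) != good                            # double line errors felt at the gate output
    return p1 * n1 + p2 * n2

def best_gate(ri, rj, p1, p2):
    """Pick the merger gate with the largest detectable error probability estimate."""
    return max(GATES, key=lambda name: detectable_estimate(GATES[name], ri, rj, p1, p2))

Ri = [1, 0, 1, 1, 0, 0, 1, 0]      # hypothetical fault-free output streams
Rj = [1, 1, 1, 0, 0, 0, 1, 1]
print(best_gate(Ri, Rj, p1=0.66, p2=0.33))
```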
TABLE XII FAULT COVERAGE FOR ISCAS 89 BENCHMARK CIRCUITS USING HOPE (RANDOM TESTING; INITIAL RANDOM NUMBER GENERATOR SEED = 999)

Based on the computation of the detectable error probability estimates of Li and Robinson as given above, we deduce the following results, which profoundly influence the selection of gates for optimal merger.
Theorem: For an output sequence pair of length L and Hamming distance d, an AND (NAND) gate is preferable to an XOR (XNOR) gate for optimal merger if the corresponding condition on L, d, and the sequence weights established in [49] and [57] is satisfied.
Theorem: For an output sequence pair of length L and Hamming distance d, an OR (NOR) gate is preferable to an XOR (XNOR) gate for optimal merger under the analogous condition of [49] and [57].
Theorem: For an output sequence pair of length L and Hamming distance d, an AND (NAND) gate is preferable to an OR (NOR) gate for optimal merger under the corresponding condition of [49] and [57].
Theorem: For two gates G1 and G2, G1 is preferable to G2 for optimal merger if and only if the detectable error probability estimate of G1 is not smaller than that of G2.

IV. EXPERIMENTAL RESULTS

To demonstrate the feasibility of the proposed space compression schemes, independent simulations were conducted on various ISCAS 89 full scan sequential benchmark circuits using HOPE [60], a fault simulation program developed at the Virginia Polytechnic Institute and State University, to generate the fault-free output sequences needed to construct our space compactor circuits and to test the benchmark circuits using deterministic compacted input test sets, accompanied by random test sessions that generate pseudorandom test sets with different values of the random number generator seed. For each circuit, we determined the number of injected faults, the number of applied test vectors (after compaction), the CPU simulation time, and the fault coverage (without space compactors and with space compactors), considering either only the combinational part of the circuit (full scan version) or the complete sequential circuit, and assuming either stochastic independence of single and double line errors or different values of their failure probabilities (under the condition of stochastic dependence of single and double line errors), by running the program on a SUN SPARC 5 workstation. The extensive simulation results are presented in the graphs and tables that follow. The CPU times needed for simulations of all the different ISCAS 89 full scan sequential benchmark circuits on the SUN SPARC 5 workstation ranged upward from about 300 ms, though for some of the largest circuits, simulations could not be completed because memory, CPU time, and disk usage limits did not permit it.
TABLE XIII FAULT COVERAGE FOR ISCAS 89 BENCHMARK CIRCUITS USING HOPE (COMPACTED INPUT TEST SETS)

The hardware overhead for the designed space compactors is not given in this paper, though for all the circuits, the overhead was well within acceptable limits. Figs. 2 and 4 show fault coverage for all the benchmark circuits without compactors when stochastic independence of single and double line errors was assumed, using deterministic compacted input test sets and pseudorandom testing, respectively, whereas Figs. 3 and 5 show CPU simulation times for these circuits under identical conditions. The benchmark circuits were actually tested here as combinational circuits. On the other hand, Figs. 6 and 8 show fault coverage for all the benchmark circuits, using deterministic compacted input test sets and pseudorandom testing, respectively, for the probability values of single and double line errors (p1, p2) being equal to (0.66, 0.33), while Figs. 7 and 9 show their CPU simulation times under identical situations. Here the benchmark circuits were tested as sequential circuits. Tables I-VI show the different simulation results for ISCAS 89 benchmark circuits under stochastic dependence of single and double line errors with values of (p1, p2) being equal to (0.33, 0.66), (0.90, 0.10), and (0.10, 0.90) on application of compacted test sets and under pseudorandom testing for combinational parts of the circuits, while Tables IX-XIV show the same for the different circuits with all the above values of line errors (p1, p2), treating them as sequential circuits. Finally, Tables VII and VIII show similar results for deterministic compacted testing of all the sequential benchmark circuits.

V. CONCLUDING REMARKS

The design of space-efficient BIST support hardware in the synthesis of digital integrated circuits is of great significance. This paper reports compression techniques of test data outputs for full scan digital sequential circuits that facilitate the design of these kinds of space-efficient support hardware. The proposed techniques use AND (NAND), OR (NOR), and XOR (XNOR) gates as appropriate to construct an output compaction tree that compresses the functional outputs of the CUT to a single line. The compaction tree is generated based on sequence characterization and utilizing the concepts of Hamming distance, sequence weights, and derived sequences. The logic functions selected to build the compaction tree are determined primarily by the characteristics of the sequences that are inputs to the logic gates.
TABLE XIV FAULT COVERAGE FOR ISCAS 89 BENCHMARK CIRCUITS USING HOPE (RANDOM TESTING; INITIAL RANDOM NUMBER GENERATOR SEED = 999)

The optimal mergeability criteria were obtained on the assumption of stochastic independence as well as dependence of single and double line errors. In the case of stochastic dependence, output bit stream selection is based on calculating the detectable error probability estimates using an empirical formula developed by Li and Robinson [38]. It should be recalled that the effectiveness of the proposed approaches is critically dependent on the probabilities of error occurrence in different lines of the CUT, and this dependence may be affected by the circuit structure, partitioning, etc., e.g., by the number of inputs, outputs, and internal lines, the types of gates the circuit is designed with, and the way it is partitioned. In actual situations, the probability values for error occurrence in particular circuits have to be experimentally determined; that is, these are a posteriori rather than a priori probabilities. If the circuit structure changes, these probability values change, and evidently the corresponding compression networks, which have to be designed based on the pairwise optimal mergeability criteria, change as well. Another point should be considered here as well. Since the empirical formula used for computing the detectable error probability estimates in the gate selection process uses exact values of these a posteriori probabilities of error occurrence, whenever only intervals on the probability values are given rather than their exact values, they cannot be used as such in the gate selection process unless the formula is modified, except to provide the two extremes of selection consistent with the probability intervals. From the analytical viewpoint, the major issue involves the computation of the detectable error probability estimates, which is rather simple in the present case because of two-line mergers, compared to the case of generalized mergeability, where the computation is really intensive. Since the major emphasis of the paper is on synthesizing compaction networks that provide improved fault coverage for fixed complexity, realizing a better tradeoff between coverage and complexity (storage) than conventional techniques, the complexity issues were not addressed in depth. Also, zero aliasing compaction [45], [48] was not emphasized in the present study; rather, an attempt was made simply to reinforce the connection between the input test sets and their lengths and their reduction into recommended algorithms for the design of space-efficient compaction networks.

ACKNOWLEDGMENT

The authors are extremely grateful to the anonymous reviewers for their constructive comments, which greatly helped in the preparation of the revised version of the manuscript. The authors are also thankful to the Associate Editor of this TRANSACTIONS for his helpful suggestions and kind encouragement.
17 2326 IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, VOL. 54, NO. 6, DECEMBER 2005 REFERENCES [1] A. Miczo, Digital Logic Testing and Simulation. New York: Harper and Row, [2] P. H. Bardell, W. H. McAnney, and J. Savir, Built-In Test for VLSI: Pseudorandom Technique. New York: Wiley, [3] R. Rajsuman, System-on-a-Chip: Design and Test. Boston, MA: Artech House, [4] S. Mourad and Y. Zorian, Principles of Testing Electronic Systems. New York: Wiley, [5] K. Chakrabarty, V. Iyengar, and A. Chandra, Test Resource Partitioning for System-on-a-Chip. Boston, MA: Kluwer, [6] SOC (System-on-a-Chip) Testing for Plug and Play Test Automation,K. Chakrabarty, Ed., Kluwer, Boston, MA, [7] P. K. Lala, Fault Tolerant and Fault Testable Hardware Design. London, U.K.: Prentice-Hall, [8] T. W. Williams and K. P. Parker, Testing logic networks and design for testability, Computer, vol. 21, pp. 9 21, Oct [9] S. M. Thatte and J. A. Abraham, Test generation for microprocessors, IEEE Trans. Comput., vol. C-29, pp , Jun [10] J. R. Kuban and W. C. Bruce, Self-testing the Motorola MC6804P2, IEEE Des. Test Comput., vol. 1, pp , Oct [11] E. J. McCluskey, Built-in self-test techniques, IEEE Des. Test Comput., vol. 2, pp , Apr [12] R. G. Daniels and W. B. Bruce, Built-in self-test trends in Motorola microprocessors, IEEE Des. Test Comput., vol. 2, pp , Apr [13] S. R. Das, Built-in self-testing of VLSI circuits, IEEE Potentials, vol. 10, pp , Oct [14] V. D. Agrawal, C. R. Kime, and K. K. Saluja, A tutorial on built-in self-test Part II, IEEE Des. Test Comput., vol. 10, pp , Jun [15] Y. Zorian, A distributed BIST control scheme for complex VLSI devices, in Proc. VLSI Test Symp., 1993, pp [16] H. J. Wunderlich and G. Kiefer, Scan-based BIST with complete fault coverage and low hardware overhead, in Proc. Euro. Test Workshop, 1996, pp [17] Y. Zorian, Test requirements for embedded core-based systems and IEEE P-1500, in Proc. Int. Test Conf., 1997, pp [18] P. Varma and S. Bhatia, A structured test reuse methodology for corebased system chip, in Proc. Int. Test Conf., 1997, pp [19] J. Brauch and J. Fleischman, Design of cache test hardware on the HP PA8500, IEEE Des. Test Comput., vol. 15, pp , Jun [20] R. Fetherston, Testability features of the AMD-K6 microprocessor, IEEE Des. Test Comput., vol. 15, pp , Jun [21] A. Carbine, Pentium Pro processor design for test and debug, IEEE Des. Test Comput., vol. 15, pp , Jun [22] K. Chakrabarty and S. R. Das, Test-set embedding based on width compression for mixed-mode BIST, IEEE Trans. Instrum. Meas., vol. 49, pp , Jun [23] S. R. Das, Self-testing of embedded cores-based systems with built-in hardware, in Proc. IEE, Cir. Dev. Syst., vol. 152, Oct. 2005, pp [24] J. P. Hayes, Check sum methods for test data compression, J. Design Automat. Fault-Tolerant Comput., vol. 1, pp. 3 7, Jan [25], Transition count testing of combinational logic circuits, IEEE Trans. Comput., vol. C-25, pp , Jun [26] R. A. Frohwerk, Signature analysis A new digital field service method, Hewlett-Packard J., vol. 28, pp. 2 8, May [27] J. Savir, Syndrome-testable design of combinational circuits, IEEE Trans. Comput., vol. C-29, pp , Jun [28] A. K. Susskind, Testing by verifying Walsh coefficients, IEEE Trans. Comput., vol. C-32, pp , Feb [29] T.-C. Hsiao and S. C. Seth, An analysis of the use of Rademacher- Walsh spectrum in compact testing, IEEE Trans. Comput., vol. C-33, pp , Oct [30] T. W. Williams, W. Daehn, M. Gruetzner, and C. W. Starke, Aliasing errors in signature analysis registers, IEEE Des. 
Test Comput., vol. 4, pp , Apr [31] S. B. Akers, A parity bit signature for exhaustive testing, IEEE Trans. Computer-Aided Design Integr. Circuits Syst., vol. CAD-7, pp , Mar [32] W. -B. Jone and S. R. Das, Multiple-output parity bit signature for exhaustive testing, J. Electron. Test. Theory Applicat., vol. 1, pp , Jun [33] D. K. Pradhan and S. K. Gupta, A new framework for designing and analyzing BIST techniques and zero aliasing compression, IEEE Trans. Comput., vol. C-40, pp , Jun [34] S. R. Das, M. Sudarma, M. H. Assaf, E. M. Petriu, W. -B. Jone, K. Chakrabarty, and M. Sahinoglu, Parity bit signature in response data compaction and built-in self-testing of VLSI circuits with nonexhaustive test sets, IEEE Trans. Instrum. Meas., vol. 52, pp , Oct [35] N. R. Saxena and J. P. Robinson, A unified view of test response compression methods, IEEE Trans. Comput., vol. C-36, pp , Jan [36] K. K. Saluja and M. Karpovsky, Testing computer hardware through compression in space and time, in Proc. Int. Test Conf., 1983, pp [37] Y. Zorian and V. K. Agarwal, A general scheme to optimize error masking in built-in self testing, in Proc. Int. Symp. Fault-Tolerant Computing, 1986, pp [38] Y. K. Li and J. P. Robinson, Space compression method with output data modification, IEEE Trans. Computer-Aided Design Integr. Circuits Syst., vol. CAD-6, pp , Mar [39] S. M. Reddy, K. K. Saluja, and M. G. Karpovsky, Data compression technique for test responses, IEEE Trans. Comput., vol. C-37, pp , Sep [40] M. Karpovsky and P. Nagvajara, Optimal robust compression of test responses, IEEE Trans. Comput., vol. C-39, pp , Jan [41] W. -B. Jone and S. R. Das, Space compression method for built-in selftesting of VLSI circuits, Int. J. Comput. Aided VLSI Design, vol. 3, pp , Sep [42] S. R. Das, H. T. Ho, W. -B. Jone, and A. R. Nayak, An improved output compaction technique for built-in self-test in VLSI circuits, in Proc. Int. Conf. VLSI Design, 1994, pp [43] K. Chakrabarty and J. P. Hayes, Efficient test response compression for multiple-output circuits, in Proc. Int. Test Conf., 1994, pp [44] S. R. Das, M. Assaf, and A. R. Nayak, On the design of space compressor for VLSI circuits in BIST using nonexhaustive test sets, Trans. SDPS, vol. 2, pp. 1 12, Dec [45] B. Pouya and N. A. Touba, Synthesis of zero-aliasing elementary-tree space compactors, in Proc. VLSI Test Symp., 1998, pp [46] S. R. Das, T. F. Barakat, E. M. Petriu, M. H. Assaf, and K. Chakrabarty, Space compression revisited, IEEE Trans. Instrum. Meas., vol. 49, pp , Jun [47] M. Seuring and K. Chakrabarty, Space compaction of test responses for IP cores using orthogonal transmission functions, in Proc. VLSI Test Symp., 2000, pp [48] S. R. Das, M. H. Assaf, E. M. Petriu, W. -B. Jone, and K. Chakrabarty, A novel approach to designing aliasing-free space compactors based on switching theory formulation, in Proc. IEEE Instrum. Meas. Tech. Conf., vol. 1, 2001, pp [49] S. R. Das, C. V. Ramamoorthy, M. H. Assaf, E. M. Petriu, and W. -B. Jone, Fault tolerance in systems design in VLSI using data compression under constraints of failure probabilities, IEEE Trans. Instrum. Meas., vol. 50, pp , Dec [50] S. R. Das, J. Y. Liang, E. M. Petriu, W. -B. Jone, and K. Chakrabarty, Data compression in space under generalized mergeability based on concepts of cover table and frequency ordering, IEEE Trans. Instrum. Meas., vol. 51, pp , Feb [51] S. R. Das, M. H. Assaf, E. M. Petriu, and W. -B. Jone, Fault simulation and response compaction in full scan circuits using HOPE, in IEEE Instrum. 
Meas. Tech. Conf., vol. 1, 2002, pp [52] S. Mitra and K. S. Kim, X-compact: An efficient response compaction technique for test cost reduction, in Proc. Int. Test Conf., 2002, pp [53] M. B. Tahoori, S. Mitra, S. Toutounchi, and E. J. McCluskey, Fault grading FPGA test configuration, in Proc. Int. Test Conf., 2002, pp [54] J. Rajski, J. Tyszer, C. Wang, and S. M. Reddy, Convolutional compaction of test responses, in Proc. Int. Test Conf., 2003, pp [55] S. R. Das, M. H. Assaf, E. M. Petriu, and M. Sahinoglu, Aliasing-free compaction in testing cores-based system-on-chip (SOC) using compatibility of response data outputs, Trans. SDPS, vol. 8, pp. 1 17, Mar [56] K. Chakrabarty, Test response compaction for built-in self testing, Ph.D. dissertation, Dept. of Computer Science and Engineering, Univ. of Michigan, Ann Arbor, MI, [57] M. H. Assaf, Digital core output test data compression architecture based on switching theory concepts, Ph.D. dissertation, School of Information Technology and Engineering, Univ. of Ottawa, Ottawa, ON, Canada, [58] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness. New York: W. H. Freeman, 1979.
18 DAS et al.: FAULT SIMULATION AND RESPONSE COMPACTION IN FULL SCAN CIRCUITS USING HOPE 2327 [59] H. K. Lee and D. S. Ha, On the generation of test patterns for combinational circuits, Dept. of Electrical Engineering, Virginia Polytechnic Inst. and State Univ., Blacksburg, VA, Tech. Rep , [60], HOPE: An efficient parallel fault simulator for synchronous sequential circuits, in Proc. Des. Automation Conf., 1992, pp [61], An efficient forward fault simulation algorithm based on the parallel pattern single fault propagation, in Proc. Int. Test Conf., 1991, pp [62] I. Pomeranz, L. N. Reddy, and S. M. Reddy, COMPACTEST: A method to generate compact test sets for combinational circuits, in Proc. Int. Test Conf., 1991, pp Sunil R. Das (M 70 SM 90 F 94 LF 04) received the B.Sc. degree (honors) in physics, the M.Sc. (Tech.) degree, and the Ph.D. degree in radiophysics and electronics from the University of Calcutta, Calcutta, West Bengal, India. He is an Emeritus Professor of Electrical and Computer Engineering at the School of Information Technology and Engineering, University of Ottawa, Ottawa, ON, Canada, and a Professor of Computer and Information Science, Troy State University-Montgomery, Montgomery, AL. He previously held academic and research positions with the University of California, Berkeley; Stanford University, Stanford, CA (on sabbatical leave); National Chiao Tung University, Hsinchu, Taiwan, R.O.C.; and the University of Calcutta. He has published around 300 papers in the areas of switching and automata theory, digital logic design, threshold logic, fault-tolerant computing, built-in self-test with emphasis on embedded cores-based systems-on-chip, microprogramming and microarchitecture, microcode optimization, applied theory of graphs, and combinatorics. He is an Associate Editor of the International Journal of Computers and Applications, a Regional Editor for Information Technology Journal, and a member of the Editorial Board and a Regional Editor for Canada of VLSI Design: An International Journal of Custom-Chip Design, Simulation and Testing. He is a former Associate Editor of SIGDA Newsletter, International Journal of Computer Aided VLSI Design, and International Journal of Parallel and Distributed Systems and Networks. He is a Coeditor (with P. K. Srimani) of Distributed Mutual Exclusion Algorithms (Los Alamitos, CA: IEEE Computer Society Press, 1992) and the Coauthor (with C. L. Sheng) of Digital Logic Design (Norwood, NJ: Ablex, to be published). Dr. Das is a Fellow of the Society for Design and Process Science and the Canadian Academy of Engineering. He is a Member of the IEEE Computer Society, IEEE Systems, Man, and Cybernetics Society, IEEE Circuits and Systems Society, and IEEE Instrumentation and Measurement Society. He is a Member of the Association for Computing Machinery. He received the IEEE Computer Society s Technical Achievement Award in 1996 and its Meritorious Service Award in He became a Golden Core Member of the IEEE Computer Society in He has received many Certificates of Appreciation from the IEEE Circuit and Systems Society. He was on the Technical Program Committees and Organizing Committees of many IEEE and non-ieee international conferences, symposia, and workshops, and also acted as Session Organizer, Session Chair, and Panelist. 
He became a Delegate of Good People, Good Deeds of the Republic of China in He was listed in Marquis Who s Who biographical directory of the computer graphics industry in He was Managing Editor of the IEEE VLSI TECHNICAL BULLETIN since its inception. He was an Executive Committee Member of the IEEE Computer Society Technical Committee on VLSI. He was an Associate Editor of the IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS from 1991 until very recently. He is currently an Associate Editor of the IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT. He is a former Administrative Committee Member of the IEEE Systems, Man, and Cybernetics Society, and a former Associate Editor of the IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION SYSTEMS (for two consecutive terms). He was Cochair of the IEEE Computer Society Student Activities Committee from Region 7 (Canada). He was the Associate Guest Editor of the IEEE Journal of Solid-State Circuits Special Issues on Microelectronic Systems. With R. Rajsuman, he was a Co-Guest Editor of the IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT Special Section on Innovations in VLSI Test Equipments (October 2003) and Future of Semiconductor Test (October 2005). He was a corecipient of the Rudolph Christian Karl Diesel Best Paper Award of the Society for Design and Process Science for a paper presented at the Fifth Biennial World Conference on Integrated Design and Process Technology, Dallas, TX, 2000, and the IEEE 2003 Donald G. Fink Prize Paper Award for a paper published in the December 2001 issue of the IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT. Chittoor V. Ramamoorthy (M 57 SM 76 F 78 LF 93) received two undergraduate degrees in physics and technology from the University of Madras, Madras, India, two graduate degrees in mechanical engineering from the University of California, Berkeley, and M.S. and Ph.D. degrees from Harvard University, Cambridge, MA, in applied mathematics (computer science), in His education was supported by the Computer Division of the Honeywell Inc., Waltham, MA, a company he was associated with till 1967, last as a Senior Staff Scientist. He later joined the University of Texas, Austin as a Professor in the Department of Electrical Engineering and Computer Science. After serving as a Chairman of the Department, he joined the University of California, Berkeley, in 1972 as a Professor of Electrical Engineering and Computer Sciences, Computer Science Division, a position that he still holds as Professor Emeritus. He supervised more than 70 doctoral students in his career. He has held the Control Data Distinguished Professorship at the University of Minnesota, Minneapolis, and Grace Hopper Chair at the U. S. Naval Postgraduate School, Monterey, CA. He was also a Visiting Professor at the Northwestern University, Evanston, IL, and a Visiting Research Professor at the University of Illinois, Urbana-Champaign. He is a Senior Research Fellow at the ICC Institute of the University of Texas, Austin. He served as the Editor-in-Chief of the IEEE TRANSACTIONS ON SOFTWARE ENGINEERING. He is the founding Editor-in- Chief of the IEEE TRANSACTIONS ON KNOWLEDGEAND DATA ENGINEERING, which recently published a Special Issue in his honor. He is also the founding Co-Editor-in-Chief of the International Journal of Systems Integration published by Elsevier North- Holland, NY and of the Journal for Design and Process Science published by SDPS, TX. 
He served in various capacities in the IEEE Computer Society, including as its First Vice President and as a Governing Board Member. He has served on several advisory boards of the Federal Government and of academia, including those of the United States Army, Navy, and Air Force, DOE's Los Alamos Laboratory, the University of Texas, and the State University System of Florida. He is one of the founding Directors of the International Institute of Systems Integration in Campinas, Brazil, supported by the Federal Government of Brazil, and for several years was a Member of the International Board of Advisors of the Institute of Systems Science of the National University of Singapore.

Dr. Ramamoorthy received the Group Award and the Taylor Booth Award for education, the Richard Merwin Award for outstanding professional contributions, and Golden Core Recognition from the IEEE Computer Society. He is a recipient of the IEEE Centennial Medal and the IEEE Millennium Medal. He also received the Computer Society's 2000 Kanai-Hitachi Award for pioneering and fundamental contributions in parallel and distributed computing. He is a Fellow of the Society for Design and Process Science, from which he received the R. T. Yeh Distinguished Achievement Award. He also received a Best Paper Award from the IEEE Computer Society. Three international conferences have been organized and a UC Berkeley Graduate Student Research Award established in his honor.

Mansour H. Assaf (M'02) received the Honors degree in applied physics from the Lebanese University in Beirut in 1989 and the B.A.Sc., M.A.Sc., and Ph.D. degrees in electrical engineering from the University of Ottawa, Ottawa, ON, Canada, in 1994, 1996, and 2003, respectively. From 1994 to 1996, he was with the Fault-Tolerant Computing Group of the University of Ottawa, where he studied and worked as a Researcher. After working with Applications Technology, a subsidiary of Lernout and Hauspie Speech, McLean, VA, in the area of software localization and natural language processing, he joined the Sensing and Modeling Research Laboratory of the University of Ottawa, where he currently works on projects in the fields of human-computer interaction, three-dimensional modeling, and virtual environments. His research interests are in the areas of human-computer interaction and perceptual user interfaces, and in fault diagnosis in digital systems.

Dr. Assaf is a corecipient of the IEEE 2003 Donald G. Fink Prize Paper Award for a paper published in the December 2001 issue of the IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT.
Emil M. Petriu (M'86-SM'88-F'01) is a Professor and University Research Chair in the School of Information Technology and Engineering, University of Ottawa, Ottawa, ON, Canada. His research interests include robot sensing and perception, intelligent sensors, interactive virtual environments, soft computing, and digital integrated circuit testing. He has published more than 200 technical papers, authored two books, edited two other books, and received two patents.

Dr. Petriu is a Fellow of the Canadian Academy of Engineering and of the Engineering Institute of Canada. He is a corecipient of the IEEE 2003 Donald G. Fink Prize Paper Award for a paper published in the December 2001 issue of the IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT and a recipient of the 2003 IEEE Instrumentation and Measurement Society Award. He is Chair of TC-15 Virtual Systems and Co-Chair of TC-28 Instrumentation and Measurement for Robotics and Automation and of TC-30 Security and Contraband Detection of the IEEE Instrumentation and Measurement Society. He is an Associate Editor of the IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT and a Member of the Editorial Board of the IEEE INSTRUMENTATION AND MEASUREMENT MAGAZINE.

Wen-Ben Jone (S'85-M'88-SM'01) was born in Taipei, Taiwan, R.O.C. He received the B.S. degree in computer science and the M.S. degree in computer engineering from National Chiao-Tung University, Hsinchu, Taiwan, in 1979 and 1981, respectively, and the Ph.D. degree in computer engineering and science from Case Western Reserve University, Cleveland, OH. In 1987, he joined the Department of Computer Science, New Mexico Institute of Mining and Technology, Socorro, where he became an Associate Professor. From 1993 to 2000, he was with the Department of Computer Engineering and Information Science, National Chung-Cheng University, Chiayi, Taiwan. He was a Visiting Research Fellow with the Department of Computer Science and Engineering, Chinese University of Hong Kong, for a summer. Since 2001, he has been with the Department of Electrical and Computer Engineering and Computer Science, University of Cincinnati, Cincinnati, OH. He was a Visiting Scholar with the Institute of Information Science, Academia Sinica, Taiwan, for a summer. His research interests include VLSI design for testability, built-in self-testing, memory testing, high-performance circuit testing, MEMS testing and repair, and low-power circuit design. He has published more than 100 papers and holds one U.S. patent. He has served as a reviewer for various technical journals and conferences in his research areas. He served on the Program Committee of the VLSI Design/CAD Symposium in Taiwan, was General Chair of the 1998 VLSI Design/CAD Symposium, and served on the Program Committees of the 1995, 1996, and 2000 Asian Test Conference, the Asia and South Pacific Design Automation Conference, the 1998 International Conference on Chip Technology, the 2000 International Symposium on Defect and Fault Tolerance in VLSI Systems, and the 2002 and 2003 Great Lakes Symposium on VLSI.

Dr. Jone is a member of the IEEE Computer Society Test Technology Technical Committee. He is a corecipient of the IEEE 2003 Donald G. Fink Prize Paper Award for a paper published in the December 2001 issue of the IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT. He is listed in Marquis Who's Who in the World (1998, 2001).
He received the Best Thesis Award from the Chinese Institute of Electrical Engineering.

Mehmet Sahinoglu (S'78-M'81-SM'93) received the B.S. degree from METU, Ankara, Turkey, and the M.S. degree from UMIST, U.K., both in electrical and computer engineering, and the Ph.D. degree in electrical engineering and statistics from Texas A&M University, College Station. He is the Eminent Scholar for the Endowed Chair of the Alabama Commission on Higher Education and is Chairman of the Computer and Information Science Department at TSUM. Following 20 years with METU, he served as the first Dean and founding Department Chair of the College of Arts and Sciences, DEU, Izmir, Turkey. He was a Chief Reliability Consultant to the Turkish Electricity Authority beginning in 1982. He is an Emeritus Professor of METU and DEU. He has taught at Purdue University, West Lafayette, IN, and Case Western Reserve University, Cleveland, OH, as a Fulbright and a NATO scholar, respectively. He is credited with the Compound Poisson Software Reliability Model, which accounts for multiple (clumped) failures when predicting the total number of failures at the end of a mission time, and with MESAT, a Compound Poisson stopping-rule algorithm for cost-effective digital software testing. He is jointly responsible (with D. Libby) for the original derivation of the G3B (Generalized Three-Parameter Beta) pdf in 1981, also known as the Sahinoglu and Libby pdf.

Dr. Sahinoglu is a Fellow of the Society for Design and Process Science, a member of the ACM, AFCEA, and ASA, and an elected member of the ISI.
