A Binary-Constraint Search Algorithm for Minimizing Hardware during Hardware/Software Partitioning
Frank Vahid†, Jie Gong, and Daniel D. Gajski
Department of Information and Computer Science, University of California, Irvine, CA

Abstract

Partitioning a system's functionality among interacting hardware and software components is an important part of system design. We introduce a new partitioning approach that caters to the main objective of the hardware/software partitioning problem, i.e., minimizing hardware for given performance constraints. We demonstrate results superior to those of previously published algorithms intended for hardware/software partitioning. The approach may be generalizable to problems in which one metric must be minimized while other metrics must merely satisfy constraints.

1 Introduction

Combined hardware/software implementations are common in embedded systems. Software running on an existing processor is less expensive, more easily modifiable, and more quickly designable than an equivalent application-specific hardware implementation. However, hardware may provide better performance. A system designer's goal is to implement a system using a minimal amount of application-specific hardware, if any at all, to satisfy required performance. In other words, the designer attempts to implement as much functionality as possible in software.

Deficiencies of the much-practiced ad-hoc approach to partitioning have led to research into more formal, algorithmic approaches. In the ad-hoc approach, a designer starts with an informal functional description of the desired system, such as an English description. Based on previous experience and mental estimations, the designer partitions the functionality among hardware and software components, and the components are then designed and integrated. This approach has two key limitations.
First, due to limited time, the designer can only consider a small number of possible partitionings, so many good solutions will never be considered. Second, the effects that partitioning has on performance are far too complex for a designer to accurately estimate mentally. As a result of these limitations, designers often use more hardware than necessary to ensure performance constraints are met.

† F. Vahid is presently with the Department of Computer Science, University of California, Riverside, CA.

In formal approaches, one starts with a functional description of the system in a machine-readable language, such as VHDL. After verifying, usually through simulation, that the description is correct, the functionality is decomposed into functional portions of some granularity. These portions, along with additional information such as data shared between portions, make up an internal model of the system. Each portion is mapped to either hardware or software by partitioning algorithms that search large numbers of solutions. Such algorithms are guided by automated estimators that evaluate cost functions for each partitioning. The output is a set of functional portions to be mapped to software and another set to be mapped to hardware. Simulation of the designed hardware and compiled software can then be performed to observe the effects of partitioning. Figure 1 shows a typical configuration of a hardware/software partitioning system.

Figure 1: Basic parts of a hw/sw partitioning system

The partitioning algorithm is a crucial part of the formal approach because it is the algorithm that actually minimizes the expensive hardware. However, current research into algorithms for hardware/software partitioning is at an early stage. In [1], the essential criteria to consider during partitioning are described, but no particular algorithm is given. In [2], certain partitioning issues are also described, but no algorithm is given.
In [3], an approach based on a multi-stage clustering algorithm is described, where the closeness metrics include hardware sharing and concurrency, and where particular clusterings are evaluated based on factors including an area/performance cost function. In [4], an algorithm is described which starts with most functionality in hardware, and which then moves portions into software as long as such a move improves the design. In [5], an approach is described which starts with all functionality in software, and which then moves portions into hardware using an iterative-improvement algorithm such as simulated annealing. The authors hypothesize that starting with an all-software partitioning will result in less final hardware than starting with an all-hardware partitioning (our results support this hypothesis). In [6], simulated annealing is also used. The focus is on placing highly utilized functional portions in hardware. However, there is no direct performance measurement.

The common shortcoming of all previous approaches is the lack of advanced methods to minimize hardware. We propose a new partitioning approach that specifically addresses the need to minimize hardware while meeting performance constraints. The novelty of the approach is in how it frees a partitioning algorithm, such as simulated annealing, from simultaneously trying to satisfy performance constraints and to minimize hardware, since trying to solve both problems simultaneously yields a poor solution to both. Instead, in our approach, we use a relaxed cost function that enables the algorithm to focus on satisfying performance, and we use an efficiently-controlled outer loop (based on binary search) to handle the hardware minimization. As such, our approach can be considered a meta-algorithm that combines the binary-search algorithm with any given partitioning algorithm. The approach results in substantially less hardware, while using only a small constant factor of additional computation and still satisfying performance constraints (whereas starting with an all-software partition and applying iterative-improvement algorithms often fails to satisfy performance). Such a reduction can significantly decrease implementation costs.
Moving one of the subproblems to an outer loop may in fact be a general solution to other problems in which one metric must be minimized while others must merely satisfy constraints.

The paper is organized as follows. Section 2 gives a definition of the hardware/software partitioning problem. Section 3 describes previous hardware/software partitioning algorithms, an extension we have made to one previous algorithm to reduce hardware, and our new hardware-minimizing algorithm based on constraint-search. Section 4 summarizes our experimental results on several examples.

2 Problem Definition

While the partitioning subproblem interacts with other subproblems in hardware/software codesign, it is distinct, i.e., it is orthogonal to the choice of specification language, the level of granularity of functional decomposition, and the specific estimation models employed.

We are given a set of functional objects O = {o1, o2, ..., on} which compose the functionality of the system under design. The functions may be at any of various levels of granularity, such as tasks (e.g., processes, procedures, or code groupings) or arithmetic operations. We are also given a set of performance constraints Cons = {C1, C2, ..., Cm}, where Cj = {G, timecon}, G ⊆ O, and timecon is a positive number. timecon is a constraint on the maximum execution time of all the functions in group G. It is simple to extend the problem to allow other performance constraints, such as those on bitrates or inter-operation delays, but we have not included such constraints in order to simplify the notation.

A hardware/software partition is defined as two sets H and S, where H ⊆ O, S ⊆ O, H ∪ S = O, and H ∩ S = ∅. This definition does not prevent further partition of hardware or software. Hardware can be partitioned into several chips, while software can be executed on more than one processor. The hardware size of H, or Hsize(H), is defined as the size (e.g., transistors) of the hardware needed to implement the functions in H.
The performance of G, or Performance(G), is defined as the total execution time for the group of functions in G for a given partition H, S. A performance-satisfying partition is one for which Performance(Cj.G) ≤ Cj.timecon for all j = 1...m.

Definition 1: Given O and Cons, the Hardware/Software Partitioning Problem is to find a performance-satisfying partition H and S such that Hsize(H) is minimal.

In other words, the problem is to map all the functions to either hardware or software in such a way that we find the minimal hardware for which all performance constraints can still be met. Note that the hardware/software partitioning problem, like other partitioning problems, is NP-complete. The all-hardware size of O is defined as the size of an all-hardware partition, or in other words as Hsize(O). Note that if an all-hardware partition does not satisfy performance constraints, no solution exists.

To compare any two partitions, a cost function is required. A cost function is a function Cost(H, S, Cons, I) which returns a natural number that summarizes the overall goodness of a given partition, the smaller the better. I contains any additional information that is not contained in H, S, or Cons. We define an iterative-improvement partitioning algorithm as a procedure PartAlg(H, S, Cons, I, Cost()) which returns a partition H', S' such that Cost(H', S', Cons, I) ≤ Cost(H, S, Cons, I). Examples of such algorithms include group migration [7] and simulated annealing [8]. Since it is not feasible to implement the hardware and software components in order to determine a cost for each possible partition generated by an algorithm, we assume that fast estimators are available [9, 10].

3 Partitioning solution

3.1 Basic algorithms

One simple and fast algorithm starts with an initial partition, and moves objects as long as improvement occurs.
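The definitions above can be sketched in a few lines of code. This is a minimal illustrative model, not the paper's implementation: the object names, sizes, and the dictionary representation are all assumptions made for the example.

```python
# Hardware sizes of hypothetical objects o1..o4 (e.g., in gates) -- all
# values are invented for illustration.
hsize_of = {"o1": 500, "o2": 300, "o3": 200, "o4": 100}

def Hsize(H):
    """Size of the hardware needed to implement the functions in H."""
    return sum(hsize_of[o] for o in H)

# A partition assigns every object to hardware (H) or software (S):
# H and S must cover O and be disjoint.
H = {"o1", "o3"}
S = {"o2", "o4"}
assert H | S == set(hsize_of) and not (H & S)  # H union S = O, H intersect S = empty

print(Hsize(H))  # prints: 700
```

The all-hardware size of this toy system is `Hsize(set(hsize_of))`, i.e., 1100, which bounds the search range used later by the constraint-search algorithm.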
The algorithm presented in [4], due to Gupta and DeMicheli, and abbreviated as GD, can be viewed as an extension of this algorithm which ensures performance constraints are met. The algorithm starts by creating an all-hardware partition, thus guaranteeing that a performance
satisfying partition is found if it exists (actually, certain functions which are considered unconstrainable are initially placed in software). To move a function requires not only cost improvement but also that all performance constraints still be satisfied (actually, they require that maximum interfacing constraints between hardware and software be satisfied). Once a function is moved, the algorithm tries to move closely related functions before trying others.

Greedy algorithms, such as the one described above, suffer from the limitation that they are easily trapped in a local minimum. As a simple example, consider an initial partition that is performance-satisfying, in which two heavily communicating functions o1 and o2 are initially in hardware. Suppose that moving either o1 or o2 to software results in performance violations, but moving both o1 and o2 results in a performance-satisfying partition. Neither of the above algorithms can find the latter solution, because doing so requires accepting an intermediate, seemingly negative move of a single function.

To overcome the limitation of greedy algorithms, others have proposed using an existing hill-climbing algorithm such as simulated annealing. Such an algorithm accepts some number of negative moves in a manner that overcomes many local minima. One simply creates an initial partition and applies the algorithm. In [5], such an approach is described by Ernst and Henkel that uses an all-software solution for the initial partition. A hill-climbing partitioning algorithm is then used to extract functions from software to hardware in order to meet performance. The authors reason that such extraction should result in less hardware than the approach where functions are extracted in the other direction, i.e., from hardware to software.

Cost function

We now consider devising a cost function to be used by the hill-climbing partitioning algorithm.
The difficulty lies in trying to balance the performance-satisfiability and hardware-minimization goals. The GD approach does not encounter this problem, since performance satisfiability is not part of the cost function. The cost function is only used to evaluate partitions that already satisfy the performance constraints; the algorithm simply rejects all partitions that are not performance-satisfying. We saw that this approach will become trapped in a local minimum. The Ernst/Henkel approach does not encounter this problem, since hardware size is not part of the cost function; instead, it is fixed beforehand, by allocating resources before partitioning. This approach requires the designer to manually try numerous hardware sizes, reapplying partitioning for each, to try to find the smallest hardware size that yields a performance-satisfying partition.

We propose a third solution. We use a cost function with two terms, one indicating the sum of all performance violations, the other the hardware size. The performance term is weighted very heavily to ensure that a performance-satisfying solution is found, so minimizing hardware is a secondary consideration. The cost function is:

Cost(H, S, Cons) = kperf × Σ(j=1..m) Violation(Cj) + karea × Hsize(H)

where Violation(Cj) = Performance(Cj.G) − Cj.timecon if the difference is greater than 0, else Violation(Cj) = 0. Also, kperf >> karea, but kperf should not be infinity, since then the algorithm could not distinguish a partition which almost meets constraints from one which greatly violates constraints.

We refer to this solution as the PWHC (performance-weighted hill-climbing) algorithm. We shall see that it gives excellent results as compared to the GD algorithm, but there is still room for improvement. In particular, when starting with an all-software partition, hill-climbing algorithms often fail to find a performance-satisfying solution.
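The PWHC cost function above can be sketched directly. The weight values, the function names, and the (time, constraint) pairs below are illustrative assumptions; the point is only that a finite but dominant kperf lets a violating partition always cost more than any satisfying one, while still ranking violating partitions against each other.

```python
# kperf >> karea, but kperf is finite so near-misses and gross violations
# remain distinguishable. The weights are invented for illustration.
KPERF, KAREA = 10**6, 1

def violation(perf, timecon):
    """Amount by which a group's execution time exceeds its constraint (0 if met)."""
    return max(perf - timecon, 0)

def pwhc_cost(hsize, group_perfs):
    """PWHC cost: group_perfs is a list of (Performance(Cj.G), Cj.timecon) pairs."""
    return KPERF * sum(violation(p, t) for p, t in group_perfs) + KAREA * hsize

# A partition meeting both constraints is judged purely on hardware size:
print(pwhc_cost(700, [(90, 100), (50, 60)]))   # prints: 700
# A violation of a single time unit dwarfs any hardware savings:
print(pwhc_cost(100, [(101, 100), (50, 60)]))  # prints: 1000100
```

Note that with an infinite kperf the second call could not be compared against a partition violating the constraint by, say, 50 time units, which is exactly the distinction the paper argues a hill-climber needs.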
3.2 A new constraint-search approach

While incorporating performance and hardware-size considerations in the same cost function, as in PWHC, tends to give much better results than previous approaches, we have determined a superior approach for minimizing hardware. Our approach involves decoupling, to a degree, the problem of satisfying performance from the problem of minimizing hardware.

3.2.1 Foundation

The first step is to realize that the difficulty experienced in PWHC is that the two metrics in the cost function, performance and hardware size, directly compete with each other. In other words, decreasing performance violations usually increases hardware size, while decreasing hardware size usually increases performance violations. An iterative-improvement algorithm has a hard time making significant progress towards minimizing one of the metrics, since any reduction in one metric's value yields an increase in the other metric's value, resulting in very large hills that must be climbed.

To solve this problem, we can relax the cost function goal. Rather than minimizing size, we just wish to find any size below a given constraint Csize:

Cost(H, S, Cons, Csize) = kperf × Σ(j=1..m) Violation(Cj) + karea × Violation(Hsize(H), Csize)

It is no longer required that kperf >> karea; we set kperf = karea = 1. The effect of relaxing the cost function goal is that once the hardware size drops below the constraint, decreasing performance violations does not necessarily yield a hardware-size violation; hence the iterative-improvement algorithm has more flexibility to work towards eliminating performance violations.
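A sketch of the relaxed cost function, again with invented numbers: below the size constraint Csize the area term vanishes entirely, so only performance violations contribute, and a cost of exactly zero signals a performance-satisfying partition within the size budget.

```python
def violation(actual, limit):
    """Excess of a measured value over its limit, clipped at zero."""
    return max(actual - limit, 0)

def relaxed_cost(hsize, group_perfs, csize):
    """Relaxed BCS cost with kperf = karea = 1 (illustrative model)."""
    perf_term = sum(violation(p, t) for p, t in group_perfs)
    area_term = violation(hsize, csize)
    return perf_term + area_term

# Below the size constraint, only performance violations matter, so moves
# that add hardware to fix performance are not penalized:
print(relaxed_cost(700, [(105, 100)], csize=800))  # prints: 5
print(relaxed_cost(700, [(95, 100)], csize=800))   # prints: 0
```

In contrast to PWHC, a partition with hsize = 700 and one with hsize = 790 have identical cost here when both meet performance and csize = 800, which is precisely the freedom the relaxation is meant to create.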
The hardware-minimization problem can now be stated distinctly from the partitioning problem.

Definition 2: Given O, Cons, PartAlg() and Cost(), the Minimal Hardware-Constraint Problem is to determine the smallest Csize such that Cost(PartAlg(H, S, Cons, Csize, Cost()), Cons, Csize) = 0.

In other words, we must choose the smallest size constraint for which a performance-satisfying solution can be found by the partitioning algorithm.

Theorem 1: Let PartAlg() be such that it always finds a zero-cost solution if one exists. Then Cost(PartAlg(H, S, Cons, Csize, Cost()), Cons, Csize) = 0 implies that Cost(PartAlg(H, S, Cons, Csize + 1, Cost()), Cons, Csize + 1) = 0.

Proof: We can create a (hypothetical) algorithm which subtracts 1 from its hardware-size constraint if a zero-cost solution is not found. Given Csize + 1 as the constraint, then if a zero-cost solution is not found, the algorithm will try Csize as the constraint. Thus the algorithm can always find a zero-cost solution for Csize + 1 if one exists for Csize.

The above theorem states that if a zero-cost solution is found for a given Csize, then zero-cost solutions will be found for all larger values of Csize also. From this theorem we see that the sequence of cost numbers obtained for Csize = 0, 1, ..., AllHardwareSize consists of z non-zero numbers followed by zeros, where z ∈ {0..AllHardwareSize}. Let CostSequence equal this sequence of cost numbers. Figure 2 depicts an example of a CostSequence graphically. (It is important to note that CostSequence is conceptual; it describes the solution space, but we do not actually need to generate this sequence to find a solution.) We can now restate the minimal hardware-constraint problem as a search problem:

Figure 2: An example cost sequence

Definition 3: Given H, S, Cons, PartAlg() and Cost(), the Minimal Hardware-Constraint Search Problem is to find the first zero in CostSequence.
Given this definition, we see that the problem can be easily mapped to the well-known problem of sorted-array search, i.e., of finding the first occurrence of a key in an ordered array of items. The main difference between the two problems is that whereas in sorted-array search the items exist in the array beforehand, in our constraint-search problem an item is added (i.e., partitioning applied and a cost determined) only if the array location is visited during search. In either case, we wish to visit as few array items as possible, so the difference does not affect the solution. A second difference is that the first z items in CostSequence are not necessarily in increasing or decreasing order. Since we are looking for a zero-cost solution, we don't care what those non-zero values are, so we can convert CostSequence to an ordered sequence by mapping each non-zero cost to 1. The constraint corresponding to the first zero cost represents the minimal hardware, or optimal solution to the partitioning problem.

Due to the NP-completeness of partitioning, it should come as no surprise that we cannot actually guarantee an optimal solution. Note that we assumed in the above theorem that PartAlg() finds a zero-cost solution if one exists for the given size constraint. Since partitioning is NP-complete, such an algorithm is impractical. Thus PartAlg() may not find a zero-cost solution although one may exist for a given size constraint. The result is that the first zero in CostSequence may be for a constraint which is larger than the optimal, or that non-zero costs may appear for constraints larger than that yielding the first zero cost, meaning the sequence of zeros contains spikes. However, the first zero cost should correspond to a constraint near the optimal if a good algorithm is used. In addition, any spikes that occur should also only appear near the optimal. Thus the algorithm should yield near-optimal results.
It is well known that binary search is a good solution to the sorted-array search problem, since its worst-case behavior is log(N) for an array of N items. We therefore incorporate binary search into our algorithm.

3.2.2 Algorithm

We now describe our hardware-minimizing partitioning algorithm based on binary search of the sequence of costs for the range of possible hardware constraints, which we refer to as the BCS (binary constraint-search) algorithm. The algorithm uses variables low and high, which indicate the current window of possible constraints in which a zero-cost constraint lies, and variable mid, which represents the middle of that window. Variables Hzero and Szero store the zero-cost partition which has the smallest hardware constraint so far encountered.

Algorithm 3.1: BCS hw/sw partitioning
  low = 0, high = AllHardwareSize
  while low < high loop
    mid = (low + high + 1) / 2
    H, S = PartAlg(H, S, Cons, mid, Cost())
    if Cost(H, S, Cons, mid) = 0 then
      high = mid - 1
      Hzero, Szero = H, S
    else
      low = mid
    end if
  end loop
  return Hzero, Szero

The algorithm performs a binary search through the range of possible constraints, applying partitioning and
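The binary constraint-search can be made runnable with a toy stand-in for PartAlg(). Everything below the function is an invented test harness, not the paper's estimator-driven partitioner: a "partition" is reduced to its hardware size, and the toy partitioner always succeeds whenever the constraint is feasible (the idealized assumption of Theorem 1).

```python
def bcs(all_hw_size, part_alg, cost):
    """Binary constraint-search (a sketch of Algorithm 3.1): return the
    zero-cost partition found at the smallest visited hardware constraint."""
    low, high = 0, all_hw_size
    best = None                      # zero-cost partition with smallest constraint so far
    while low < high:
        mid = (low + high + 1) // 2  # ceiling, so `low = mid` always shrinks the window
        partition = part_alg(mid)    # stands in for PartAlg(H, S, Cons, mid, Cost())
        if cost(partition, mid) == 0:
            best, high = partition, mid - 1
        else:
            low = mid
    return best

# Toy instance: the smallest feasible hardware constraint is 613 (invented),
# and the "partitioner" returns a partition of exactly that size when it fits.
SIZE_BEST = 613
toy_part_alg = lambda csize: max(csize, SIZE_BEST)
toy_cost = lambda hsize, csize: 0 if hsize <= csize else 1

print(bcs(1000, toy_part_alg, toy_cost))  # prints: 613
```

Because the idealized partitioner makes CostSequence a clean run of ones followed by zeros, the search here lands exactly on the first zero; with a real heuristic PartAlg() the result is only near-optimal, as the text notes.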
then the cost function as each constraint is visited. The algorithm looks very much like a standard binary-search algorithm, with two modifications. First, mid is used as a hardware constraint for partitioning, whose result is then used to determine a cost, in contrast to using mid as an index to an array item. Second, the cost is compared to 0, in contrast to an array item being compared to a key.

3.2.3 Reducing runtime in practice

After experimentation, we developed a simple modification of the constraint-search algorithm to reduce its runtime in practice. Let Sizebest be the smallest zero-cost hardware constraint. If a Csize constraint much larger than Sizebest is provided to PartAlg(), the algorithm usually finds a solution very quickly; the functions causing a performance violation are simply moved to hardware. If a Csize constraint much smaller than Sizebest is provided, the algorithm also stops fairly quickly, since it is unable to find a sequence of moves that improves the cost. However, if Csize is slightly smaller or larger than Sizebest, the algorithm usually makes a large number of moves, gradually inching its way towards a cost of zero. This situation is very different from traditional binary search, where a comparison of the key with an item takes the same time for any item.

Near the end of binary search, the window of possible constraint values is very small, with Sizebest somewhere inside this window. Much of the constraint-search algorithm's runtime is spent reducing the window size by minute amounts and reapplying lengthy partitioning. In practice, we need not find the smallest hardware size to such a degree of precision. We thus terminate the binary search when the window size (i.e., high − low) is less than a certain percentage of AllHardwareSize. This percentage is called a precision factor. We have found that a precision factor of 1% achieves a speedup of roughly 2.5; we allow the user to select any factor.
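The precision-factor modification changes only the loop condition. The sketch below reuses the same toy stand-ins as before (invented for illustration); it also counts partitioning runs, since each run is the expensive operation the early termination is meant to save.

```python
def bcs_with_precision(all_hw_size, part_alg, cost, precision=0.05):
    """BCS with early termination: stop once the window (high - low) is no
    larger than `precision` * AllHardwareSize (a sketch, toy cost model)."""
    low, high = 0, all_hw_size
    best, runs = None, 0
    while high - low > precision * all_hw_size:
        mid = (low + high + 1) // 2
        partition = part_alg(mid)    # the expensive call we want fewer of
        runs += 1
        if cost(partition, mid) == 0:
            best, high = partition, mid - 1
        else:
            low = mid
    return best, runs

# Same invented instance as before: true minimal constraint is 613.
SIZE_BEST = 613
part_alg = lambda csize: max(csize, SIZE_BEST)
cost = lambda hsize, csize: 0 if hsize <= csize else 1

best, runs = bcs_with_precision(1000, part_alg, cost)
print(best, runs)  # prints: 625 5  (within 5% of the true minimum, 5 runs)
```

With a 5% precision factor the loop stops after 5 partitioning runs at a constraint of 625, i.e., within 0.05 × 1000 of the true minimum 613, instead of spending further runs shaving the window down to the exact value.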
3.2.4 Complexity

The worst-case runtime complexity of the constraint-search algorithm equals the complexity of the chosen partitioning algorithm PartAlg() multiplied by the complexity of our binary constraint-search. While the complexity of the binary search of a sequence with AllHardwareSize items is log2(AllHardwareSize), the precision factor reduces this to a constant.

Theorem 2: The complexity of the binary search of an N-element CostSequence with a precision factor a is log2(1/a).

Proof: We start with a window size of N, and repeatedly divide the window size by 2 until the window size equals a × N. Let w be the number of windows generated; w will thus give us the complexity. An equivalent value for w is obtained by starting with a window size of a × N, and multiplying the size by 2 until the size is N. Hence we obtain the following equation: (a × N) × 2^w = N. Solving for w yields w = log2(N / (a × N)) = log2(1/a). The complexity is therefore log2(1/a).

We see that binary constraint-search partitioning with a precision factor has the same theoretical complexity as the partitioning algorithm PartAlg(). In practice, the binary constraint-search contributes a small constant factor. For example, a precision factor of 5% results in a constant factor of log2(20) ≈ 4.3.

4 Experiments

We briefly describe the environment used to compare the various algorithms on real examples. It is important to note that most environment issues are orthogonal to the issue of algorithm design. Our algorithms should perform well in any of the environments discussed in other work such as [1, 2, 3, 4, 5]. It should also be noted that any partitioning algorithm can be used within the BCS algorithm, not just simulated annealing.

We take a VHDL behavioral description as input. The description is decomposed to the granularity of tasks, i.e., processes, procedures, and optionally statement blocks such as loops. Large data items (variables) are also treated as functions.
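Theorem 2 is easy to check numerically: halving a window of size N until it is no larger than a × N takes about log2(1/a) steps, independent of N. The small script below (illustrative, not from the paper) confirms this for a 5% precision factor, where ⌈log2(20)⌉ = 5 and log2(20) ≈ 4.32.

```python
# Count how many halvings shrink a window of size n to at most a*n.
def halvings(n, a):
    w, steps = n, 0
    while w > a * n:
        w /= 2
        steps += 1
    return steps

# The step count is the same for any n -- the constant-factor claim:
for n in (10**3, 10**6, 10**9):
    print(n, halvings(n, 0.05))  # each line ends in 5, since ceil(log2(20)) = 5
```

This matches the paper's estimate that a 5% precision factor multiplies the partitioner's runtime by a small constant near 4.3.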
Estimators of hardware size and behavior execution time for both hardware and software are available [9, 10]. These estimators are especially designed to be used in conjunction with partitioning. In particular, very fast and accurate estimations are made available through special techniques to incrementally modify an estimate when a function is moved, rather than re-estimating entirely for the new partition. The briefness of our discussion on estimation does not imply that it is trivial or simple, but instead that it is a different issue not discussed in this paper. We also note here that ILP solutions are inadequate for use here because the metrics involved are non-linear, since they are obtained by using sophisticated estimators based on design models, rather than by using linear but grossly inaccurate metrics.

We implemented three partitioning algorithms: GD, PWHC, and BCS. The PartAlg() used in PWHC and BCS is simulated annealing. From now on, we also refer to the PWHC algorithm as SA (simulated annealing). The precision factor used in BCS is 5%. We applied each algorithm to several examples: a real-time medical system (Volume) for measuring volume, a beamformer system (Beam), a fuzzy-logic control system (Fuzzy), and a microwave transmitter system (Microwave). For each example, a variety of performance constraints were input. Some examples have performance constraints on one group of tasks in the system; others have performance constraints on two groups of tasks. We tested each example using a set of performance constraints that reside between the time required for an all-hardware partition and the time required for an all-software partition. The initial partition used in each trial by SA and BCS is an all-software partition. We also ran SA and BCS starting with an all-hardware partition, but found that the results were inferior to those starting with an all-software partition. Figure 3 summarizes the results.
Figure 3: Partitioning results on industry examples

The Average run time is the measured CPU time running on a Sparc2. The Average hardware decrease percentage is the percentage by which each algorithm reduces the hardware, relative to the size of an all-hardware implementation. The Best cases is the number of times the given algorithm results in a hardware size smaller than those of both other algorithms. The Worst cases is the number of times the algorithm results in a hardware size that is larger than those of both other algorithms. No solution cases is the number of times the algorithm fails to find a performance-satisfying partition.

The results demonstrate the superiority of the BCS algorithm in finding a performance-satisfying partition with minimal hardware. BCS always finds a performance-satisfying partition (in fact, it is guaranteed to do so), whereas SA failed in 5 cases, which is 13.2% of all cases. While GD also always finds a performance-satisfying partition, BCS results in a 9.5% savings in hardware, corresponding to an average savings of approximately gates [12]. An additional positive note that can be seen from the experiments is that the increase in computation time of BCS over simulated annealing is only 4.1, slightly better than the theoretical expectation of 4.3.

To further evaluate the BCS algorithm, we developed a general formulation of the hardware/software partitioning problem. This formulation enables us to generate problems that imitate real examples, without spending the many months required to develop a real example. In addition, the formulation enables a comparison of algorithms without requiring use of sophisticated software and hardware estimators, which makes algorithm comparison possible for others who do not have such estimators. We generated 125 general hardware/software partitioning problems, and then applied BCS with a 5% precision factor, SA, and GD. The BCS algorithm again finds performance-satisfying partitions with less hardware.
SA failed to find a performance-satisfying partition in 15 cases, which is 12.5% of all cases. While GD also always found a performance-satisfying partition, BCS resulted in nearly a 10% savings in hardware. Figure 4 summarizes the comparison of BCS to the other algorithms. Details of the formulation, the pseudorandom hardware/software problem-generation algorithm, and the partitioning results are provided in [13].

Figure 4: BCS compared to other algorithms

5 Conclusion

The BCS algorithm excels over previous hardware/software partitioning algorithms in its ability to minimize hardware while satisfying performance constraints. The computation time required by the algorithm is well within reason, especially when one considers the great benefits obtained from the reduction in hardware achieved by the algorithm, including reduced system cost, faster design time, and more easily modifiable final designs. The BCS approach is essentially a meta-algorithm that removes from a cost function one metric that must be minimized, and thus the approach may be applicable to a wide range of problems involving non-linear, competing metrics.

References

[1] D. Thomas, J. Adams, and H. Schmit, "A model and methodology for hardware/software codesign," IEEE Design & Test of Computers, pp. 6-15.
[2] A. Kalavade and E. Lee, "A hardware/software codesign methodology for DSP applications," IEEE Design & Test of Computers.
[3] X. Xiong, E. Barros, and W. Rosenstiel, "A method for partitioning UNITY language in hardware and software," in EuroDAC.
[4] R. Gupta and G. DeMicheli, "System-level synthesis using re-programmable components," in EDAC, pp. 2-7.
[5] R. Ernst and J. Henkel, "Hardware-software codesign of embedded controllers based on hardware extraction," in International Workshop on Hardware-Software Co-Design.
[6] Z. Peng and K. Kuchcinski, "An algorithm for partitioning of application specific systems," in EDAC.
[7] B. Preas and M. Lorenzetti, Physical Design Automation of VLSI Systems. California: Benjamin/Cummings.
[8] S. Kirkpatrick, C. Gelatt, and M. P. Vecchi, "Optimization by simulated annealing," Science, vol. 220, no. 4598.
[9] J. Gong, D. Gajski, and S. Narayan, "Software estimation from executable specifications," in Journal of Computer and Software Engineering.
[10] S. Narayan and D. Gajski, "Area and performance estimation from system-level specifications." UC Irvine, Dept. of ICS, Technical Report 92-16, 1992.
[11] W. Ye, R. Ernst, T. Benner, and J. Henkel, "Fast timing analysis for hardware-software co-synthesis," in ICCD.
[12] VDP100 1.5 Micron CMOS Datapath Cell Library.
[13] F. Vahid, J. Gong, and D. Gajski, "A hardware-software partitioning algorithm for minimizing hardware." UC Irvine, Dept. of ICS, Technical Report 93-38.
A Hardware-Software Cosynthesis Technique Based on Heterogeneous Multiprocessor Scheduling ABSTRACT Hyunok Oh cosynthesis problem targeting the system-on-chip (SOC) design. The proposed algorithm covers
Solution of Linear Systems
Chapter 3 Solution of Linear Systems In this chapter we study algorithms for possibly the most commonly occurring problem in scientific computing, the solution of linear systems of equations. We start
Force/position control of a robotic system for transcranial magnetic stimulation
Force/position control of a robotic system for transcranial magnetic stimulation W.N. Wan Zakaria School of Mechanical and System Engineering Newcastle University Abstract To develop a force control scheme
Algorithm Design and Analysis
Algorithm Design and Analysis LECTURE 27 Approximation Algorithms Load Balancing Weighted Vertex Cover Reminder: Fill out SRTEs online Don t forget to click submit Sofya Raskhodnikova 12/6/2011 S. Raskhodnikova;
CSE373: Data Structures and Algorithms Lecture 3: Math Review; Algorithm Analysis. Linda Shapiro Winter 2015
CSE373: Data Structures and Algorithms Lecture 3: Math Review; Algorithm Analysis Linda Shapiro Today Registration should be done. Homework 1 due 11:59 pm next Wednesday, January 14 Review math essential
RN-Codings: New Insights and Some Applications
RN-Codings: New Insights and Some Applications Abstract During any composite computation there is a constant need for rounding intermediate results before they can participate in further processing. Recently
Research Statement Immanuel Trummer www.itrummer.org
Research Statement Immanuel Trummer www.itrummer.org We are collecting data at unprecedented rates. This data contains valuable insights, but we need complex analytics to extract them. My research focuses
Applied Algorithm Design Lecture 5
Applied Algorithm Design Lecture 5 Pietro Michiardi Eurecom Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 1 / 86 Approximation Algorithms Pietro Michiardi (Eurecom) Applied Algorithm Design
An Efficient RNS to Binary Converter Using the Moduli Set {2n + 1, 2n, 2n 1}
An Efficient RNS to Binary Converter Using the oduli Set {n + 1, n, n 1} Kazeem Alagbe Gbolagade 1,, ember, IEEE and Sorin Dan Cotofana 1, Senior ember IEEE, 1. Computer Engineering Laboratory, Delft University
Architectures and Platforms
Hardware/Software Codesign Arch&Platf. - 1 Architectures and Platforms 1. Architecture Selection: The Basic Trade-Offs 2. General Purpose vs. Application-Specific Processors 3. Processor Specialisation
OPC COMMUNICATION IN REAL TIME
OPC COMMUNICATION IN REAL TIME M. Mrosko, L. Mrafko Slovak University of Technology, Faculty of Electrical Engineering and Information Technology Ilkovičova 3, 812 19 Bratislava, Slovak Republic Abstract
High-Level Synthesis for FPGA Designs
High-Level Synthesis for FPGA Designs BRINGING BRINGING YOU YOU THE THE NEXT NEXT LEVEL LEVEL IN IN EMBEDDED EMBEDDED DEVELOPMENT DEVELOPMENT Frank de Bont Trainer consultant Cereslaan 10b 5384 VT Heesch
! Solve problem to optimality. ! Solve problem in poly-time. ! Solve arbitrary instances of the problem. #-approximation algorithm.
Approximation Algorithms 11 Approximation Algorithms Q Suppose I need to solve an NP-hard problem What should I do? A Theory says you're unlikely to find a poly-time algorithm Must sacrifice one of three
! Solve problem to optimality. ! Solve problem in poly-time. ! Solve arbitrary instances of the problem. !-approximation algorithm.
Approximation Algorithms Chapter Approximation Algorithms Q Suppose I need to solve an NP-hard problem What should I do? A Theory says you're unlikely to find a poly-time algorithm Must sacrifice one of
Undergraduate Major in Computer Science and Engineering
University of California, Irvine 2015-2016 1 Undergraduate Major in Computer Science and Engineering On This Page: Overview Admissions Requirements for the B.S. in Computer Science and Engineering Sample
Agenda. Michele Taliercio, Il circuito Integrato, Novembre 2001
Agenda Introduzione Il mercato Dal circuito integrato al System on a Chip (SoC) La progettazione di un SoC La tecnologia Una fabbrica di circuiti integrati 28 How to handle complexity G The engineering
ON SUITABILITY OF FPGA BASED EVOLVABLE HARDWARE SYSTEMS TO INTEGRATE RECONFIGURABLE CIRCUITS WITH HOST PROCESSING UNIT
216 ON SUITABILITY OF FPGA BASED EVOLVABLE HARDWARE SYSTEMS TO INTEGRATE RECONFIGURABLE CIRCUITS WITH HOST PROCESSING UNIT *P.Nirmalkumar, **J.Raja Paul Perinbam, @S.Ravi and #B.Rajan *Research Scholar,
Scheduling Algorithm with Optimization of Employee Satisfaction
Washington University in St. Louis Scheduling Algorithm with Optimization of Employee Satisfaction by Philip I. Thomas Senior Design Project http : //students.cec.wustl.edu/ pit1/ Advised By Associate
Modeling a GPS Receiver Using SystemC
Modeling a GPS Receiver using SystemC Modeling a GPS Receiver Using SystemC Bernhard Niemann Reiner Büttner Martin Speitel http://www.iis.fhg.de http://www.iis.fhg.de/kursbuch/kurse/systemc.html The e
Chapter 11. 11.1 Load Balancing. Approximation Algorithms. Load Balancing. Load Balancing on 2 Machines. Load Balancing: Greedy Scheduling
Approximation Algorithms Chapter Approximation Algorithms Q. Suppose I need to solve an NP-hard problem. What should I do? A. Theory says you're unlikely to find a poly-time algorithm. Must sacrifice one
How To Develop Software
Software Engineering Prof. N.L. Sarda Computer Science & Engineering Indian Institute of Technology, Bombay Lecture-4 Overview of Phases (Part - II) We studied the problem definition phase, with which
Reconfigurable Architecture Requirements for Co-Designed Virtual Machines
Reconfigurable Architecture Requirements for Co-Designed Virtual Machines Kenneth B. Kent University of New Brunswick Faculty of Computer Science Fredericton, New Brunswick, Canada [email protected] Micaela Serra
CSC 180 H1F Algorithm Runtime Analysis Lecture Notes Fall 2015
1 Introduction These notes introduce basic runtime analysis of algorithms. We would like to be able to tell if a given algorithm is time-efficient, and to be able to compare different algorithms. 2 Linear
GEDAE TM - A Graphical Programming and Autocode Generation Tool for Signal Processor Applications
GEDAE TM - A Graphical Programming and Autocode Generation Tool for Signal Processor Applications Harris Z. Zebrowitz Lockheed Martin Advanced Technology Laboratories 1 Federal Street Camden, NJ 08102
This Unit: Floating Point Arithmetic. CIS 371 Computer Organization and Design. Readings. Floating Point (FP) Numbers
This Unit: Floating Point Arithmetic CIS 371 Computer Organization and Design Unit 7: Floating Point App App App System software Mem CPU I/O Formats Precision and range IEEE 754 standard Operations Addition
A Framework for Software Product Line Engineering
Günter Böckle Klaus Pohl Frank van der Linden 2 A Framework for Software Product Line Engineering In this chapter you will learn: o The principles of software product line subsumed by our software product
System-on. on-chip Design Flow. Prof. Jouni Tomberg Tampere University of Technology Institute of Digital and Computer Systems. jouni.tomberg@tut.
System-on on-chip Design Flow Prof. Jouni Tomberg Tampere University of Technology Institute of Digital and Computer Systems [email protected] 26.03.2003 Jouni Tomberg / TUT 1 SoC - How and with whom?
Why? A central concept in Computer Science. Algorithms are ubiquitous.
Analysis of Algorithms: A Brief Introduction Why? A central concept in Computer Science. Algorithms are ubiquitous. Using the Internet (sending email, transferring files, use of search engines, online
MapReduce and Distributed Data Analysis. Sergei Vassilvitskii Google Research
MapReduce and Distributed Data Analysis Google Research 1 Dealing With Massive Data 2 2 Dealing With Massive Data Polynomial Memory Sublinear RAM Sketches External Memory Property Testing 3 3 Dealing With
Binary Division. Decimal Division. Hardware for Binary Division. Simple 16-bit Divider Circuit
Decimal Division Remember 4th grade long division? 43 // quotient 12 521 // divisor dividend -480 41-36 5 // remainder Shift divisor left (multiply by 10) until MSB lines up with dividend s Repeat until
Design Compiler Graphical Create a Better Starting Point for Faster Physical Implementation
Datasheet Create a Better Starting Point for Faster Physical Implementation Overview Continuing the trend of delivering innovative synthesis technology, Design Compiler Graphical delivers superior quality
Digital Systems Design! Lecture 1 - Introduction!!
ECE 3401! Digital Systems Design! Lecture 1 - Introduction!! Course Basics Classes: Tu/Th 11-12:15, ITE 127 Instructor Mohammad Tehranipoor Office hours: T 1-2pm, or upon appointments @ ITE 441 Email:
Hardware/Software Co-Design of a Java Virtual Machine
Hardware/Software Co-Design of a Java Virtual Machine Kenneth B. Kent University of Victoria Dept. of Computer Science Victoria, British Columbia, Canada [email protected] Micaela Serra University of Victoria
The Trip Scheduling Problem
The Trip Scheduling Problem Claudia Archetti Department of Quantitative Methods, University of Brescia Contrada Santa Chiara 50, 25122 Brescia, Italy Martin Savelsbergh School of Industrial and Systems
A Visualization System and Monitoring Tool to Measure Concurrency in MPICH Programs
A Visualization System and Monitoring Tool to Measure Concurrency in MPICH Programs Michael Scherger Department of Computer Science Texas Christian University Email: [email protected] Zakir Hussain Syed
Systems on Chip Design
Systems on Chip Design College: Engineering Department: Electrical First: Course Definition, a Summary: 1 Course Code: EE 19 Units: 3 credit hrs 3 Level: 3 rd 4 Prerequisite: Basic knowledge of microprocessor/microcontroller
Lecture 3: Finding integer solutions to systems of linear equations
Lecture 3: Finding integer solutions to systems of linear equations Algorithmic Number Theory (Fall 2014) Rutgers University Swastik Kopparty Scribe: Abhishek Bhrushundi 1 Overview The goal of this lecture
Contents. System Development Models and Methods. Design Abstraction and Views. Synthesis. Control/Data-Flow Models. System Synthesis Models
System Development Models and Methods Dipl.-Inf. Mirko Caspar Version: 10.02.L.r-1.0-100929 Contents HW/SW Codesign Process Design Abstraction and Views Synthesis Control/Data-Flow Models System Synthesis
Multi-objective Design Space Exploration based on UML
Multi-objective Design Space Exploration based on UML Marcio F. da S. Oliveira, Eduardo W. Brião, Francisco A. Nascimento, Instituto de Informática, Universidade Federal do Rio Grande do Sul (UFRGS), Brazil
Layered Approach to Development of OO War Game Models Using DEVS Framework
Layered Approach to Development of OO War Game Models Using DEVS Framework Chang Ho Sung*, Su-Youn Hong**, and Tag Gon Kim*** Department of EECS KAIST 373-1 Kusong-dong, Yusong-gu Taejeon, Korea 305-701
We can express this in decimal notation (in contrast to the underline notation we have been using) as follows: 9081 + 900b + 90c = 9001 + 100c + 10b
In this session, we ll learn how to solve problems related to place value. This is one of the fundamental concepts in arithmetic, something every elementary and middle school mathematics teacher should
Long-term monitoring of apparent latency in PREEMPT RT Linux real-time systems
Long-term monitoring of apparent latency in PREEMPT RT Linux real-time systems Carsten Emde Open Source Automation Development Lab (OSADL) eg Aichhalder Str. 39, 78713 Schramberg, Germany [email protected]
Embedded Systems Engineering Certificate Program
Engineering Programs Embedded Systems Engineering Certificate Program Accelerate Your Career extension.uci.edu/embedded University of California, Irvine Extension s professional certificate and specialized
Resource Allocation Schemes for Gang Scheduling
Resource Allocation Schemes for Gang Scheduling B. B. Zhou School of Computing and Mathematics Deakin University Geelong, VIC 327, Australia D. Walsh R. P. Brent Department of Computer Science Australian
Operation Count; Numerical Linear Algebra
10 Operation Count; Numerical Linear Algebra 10.1 Introduction Many computations are limited simply by the sheer number of required additions, multiplications, or function evaluations. If floating-point
Implementation of emulated digital CNN-UM architecture on programmable logic devices and its applications
Implementation of emulated digital CNN-UM architecture on programmable logic devices and its applications Theses of the Ph.D. dissertation Zoltán Nagy Scientific adviser: Dr. Péter Szolgay Doctoral School
Adaptive Online Gradient Descent
Adaptive Online Gradient Descent Peter L Bartlett Division of Computer Science Department of Statistics UC Berkeley Berkeley, CA 94709 bartlett@csberkeleyedu Elad Hazan IBM Almaden Research Center 650
9th Max-Planck Advanced Course on the Foundations of Computer Science (ADFOCS) Primal-Dual Algorithms for Online Optimization: Lecture 1
9th Max-Planck Advanced Course on the Foundations of Computer Science (ADFOCS) Primal-Dual Algorithms for Online Optimization: Lecture 1 Seffi Naor Computer Science Dept. Technion Haifa, Israel Introduction
Binary search algorithm
Binary search algorithm Definition Search a sorted array by repeatedly dividing the search interval in half. Begin with an interval covering the whole array. If the value of the search key is less than
Attaining EDF Task Scheduling with O(1) Time Complexity
Attaining EDF Task Scheduling with O(1) Time Complexity Verber Domen University of Maribor, Faculty of Electrical Engineering and Computer Sciences, Maribor, Slovenia (e-mail: [email protected]) Abstract:
A Reactive Tabu Search for Service Restoration in Electric Power Distribution Systems
IEEE International Conference on Evolutionary Computation May 4-11 1998, Anchorage, Alaska A Reactive Tabu Search for Service Restoration in Electric Power Distribution Systems Sakae Toune, Hiroyuki Fudo,
Factoring & Primality
Factoring & Primality Lecturer: Dimitris Papadopoulos In this lecture we will discuss the problem of integer factorization and primality testing, two problems that have been the focus of a great amount
Cost Effective Automated Scaling of Web Applications for Multi Cloud Services
Cost Effective Automated Scaling of Web Applications for Multi Cloud Services SANTHOSH.A 1, D.VINOTHA 2, BOOPATHY.P 3 1,2,3 Computer Science and Engineering PRIST University India Abstract - Resource allocation
Revised Version of Chapter 23. We learned long ago how to solve linear congruences. ax c (mod m)
Chapter 23 Squares Modulo p Revised Version of Chapter 23 We learned long ago how to solve linear congruences ax c (mod m) (see Chapter 8). It s now time to take the plunge and move on to quadratic equations.
Control 2004, University of Bath, UK, September 2004
Control, University of Bath, UK, September ID- IMPACT OF DEPENDENCY AND LOAD BALANCING IN MULTITHREADING REAL-TIME CONTROL ALGORITHMS M A Hossain and M O Tokhi Department of Computing, The University of
Linear Codes. Chapter 3. 3.1 Basics
Chapter 3 Linear Codes In order to define codes that we can encode and decode efficiently, we add more structure to the codespace. We shall be mainly interested in linear codes. A linear code of length
A Direct Numerical Method for Observability Analysis
IEEE TRANSACTIONS ON POWER SYSTEMS, VOL 15, NO 2, MAY 2000 625 A Direct Numerical Method for Observability Analysis Bei Gou and Ali Abur, Senior Member, IEEE Abstract This paper presents an algebraic method
A Comparison of General Approaches to Multiprocessor Scheduling
A Comparison of General Approaches to Multiprocessor Scheduling Jing-Chiou Liou AT&T Laboratories Middletown, NJ 0778, USA [email protected] Michael A. Palis Department of Computer Science Rutgers University
Lecture 8: Binary Multiplication & Division
Lecture 8: Binary Multiplication & Division Today s topics: Addition/Subtraction Multiplication Division Reminder: get started early on assignment 3 1 2 s Complement Signed Numbers two = 0 ten 0001 two
MODEL DRIVEN DEVELOPMENT OF BUSINESS PROCESS MONITORING AND CONTROL SYSTEMS
MODEL DRIVEN DEVELOPMENT OF BUSINESS PROCESS MONITORING AND CONTROL SYSTEMS Tao Yu Department of Computer Science, University of California at Irvine, USA Email: [email protected] Jun-Jang Jeng IBM T.J. Watson
Introduction to Digital System Design
Introduction to Digital System Design Chapter 1 1 Outline 1. Why Digital? 2. Device Technologies 3. System Representation 4. Abstraction 5. Development Tasks 6. Development Flow Chapter 1 2 1. Why Digital
Contributions to Gang Scheduling
CHAPTER 7 Contributions to Gang Scheduling In this Chapter, we present two techniques to improve Gang Scheduling policies by adopting the ideas of this Thesis. The first one, Performance- Driven Gang Scheduling,
Implementation and Design of AES S-Box on FPGA
International Journal of Research in Engineering and Science (IJRES) ISSN (Online): 232-9364, ISSN (Print): 232-9356 Volume 3 Issue ǁ Jan. 25 ǁ PP.9-4 Implementation and Design of AES S-Box on FPGA Chandrasekhar
RN-coding of Numbers: New Insights and Some Applications
RN-coding of Numbers: New Insights and Some Applications Peter Kornerup Dept. of Mathematics and Computer Science SDU, Odense, Denmark & Jean-Michel Muller LIP/Arénaire (CRNS-ENS Lyon-INRIA-UCBL) Lyon,
Introduction to Parallel Programming and MapReduce
Introduction to Parallel Programming and MapReduce Audience and Pre-Requisites This tutorial covers the basics of parallel programming and the MapReduce programming model. The pre-requisites are significant
Evaluation of Complexity of Some Programming Languages on the Travelling Salesman Problem
International Journal of Applied Science and Technology Vol. 3 No. 8; December 2013 Evaluation of Complexity of Some Programming Languages on the Travelling Salesman Problem D. R. Aremu O. A. Gbadamosi
A Note on Maximum Independent Sets in Rectangle Intersection Graphs
A Note on Maximum Independent Sets in Rectangle Intersection Graphs Timothy M. Chan School of Computer Science University of Waterloo Waterloo, Ontario N2L 3G1, Canada [email protected] September 12,
Cost Model: Work, Span and Parallelism. 1 The RAM model for sequential computation:
CSE341T 08/31/2015 Lecture 3 Cost Model: Work, Span and Parallelism In this lecture, we will look at how one analyze a parallel program written using Cilk Plus. When we analyze the cost of an algorithm
Real-Time Software. Basic Scheduling and Response-Time Analysis. René Rydhof Hansen. 21. september 2010
Real-Time Software Basic Scheduling and Response-Time Analysis René Rydhof Hansen 21. september 2010 TSW (2010e) (Lecture 05) Real-Time Software 21. september 2010 1 / 28 Last Time Time in a real-time
Implementation of Modified Booth Algorithm (Radix 4) and its Comparison with Booth Algorithm (Radix-2)
Advance in Electronic and Electric Engineering. ISSN 2231-1297, Volume 3, Number 6 (2013), pp. 683-690 Research India Publications http://www.ripublication.com/aeee.htm Implementation of Modified Booth
Integrating Benders decomposition within Constraint Programming
Integrating Benders decomposition within Constraint Programming Hadrien Cambazard, Narendra Jussien email: {hcambaza,jussien}@emn.fr École des Mines de Nantes, LINA CNRS FRE 2729 4 rue Alfred Kastler BP
Approximation Algorithms
Approximation Algorithms or: How I Learned to Stop Worrying and Deal with NP-Completeness Ong Jit Sheng, Jonathan (A0073924B) March, 2012 Overview Key Results (I) General techniques: Greedy algorithms
Lecture N -1- PHYS 3330. Microcontrollers
Lecture N -1- PHYS 3330 Microcontrollers If you need more than a handful of logic gates to accomplish the task at hand, you likely should use a microcontroller instead of discrete logic gates 1. Microcontrollers
24. The Branch and Bound Method
24. The Branch and Bound Method It has serious practical consequences if it is known that a combinatorial problem is NP-complete. Then one can conclude according to the present state of science that no
Special Situations in the Simplex Algorithm
Special Situations in the Simplex Algorithm Degeneracy Consider the linear program: Maximize 2x 1 +x 2 Subject to: 4x 1 +3x 2 12 (1) 4x 1 +x 2 8 (2) 4x 1 +2x 2 8 (3) x 1, x 2 0. We will first apply the
Determining the Optimal Combination of Trial Division and Fermat s Factorization Method
Determining the Optimal Combination of Trial Division and Fermat s Factorization Method Joseph C. Woodson Home School P. O. Box 55005 Tulsa, OK 74155 Abstract The process of finding the prime factorization
a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2.
Chapter 1 LINEAR EQUATIONS 1.1 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,..., a n, b are given
Logistic Regression for Spam Filtering
Logistic Regression for Spam Filtering Nikhila Arkalgud February 14, 28 Abstract The goal of the spam filtering problem is to identify an email as a spam or not spam. One of the classic techniques used
Project Management Planning
Develop Project Tasks One of the most important parts of a project planning process is the definition of activities that will be undertaken as part of the project. Activity sequencing involves dividing
Notes on Factoring. MA 206 Kurt Bryan
The General Approach Notes on Factoring MA 26 Kurt Bryan Suppose I hand you n, a 2 digit integer and tell you that n is composite, with smallest prime factor around 5 digits. Finding a nontrivial factor
