HPGAST: High Performance GA-based Sequential circuits Test generation on Beowulf PC-Cluster
Tepakorn Siriwan, Pradondet Nilagupta
Department of Computer Engineering, Kasetsart University
50 Pahonyothin Rd., Lardyao, Jatujak, Bangkok, Thailand
Phone (+662) ext. 1403, 1404, Fax (+662)

Abstract

This paper deals with high-performance automated test pattern generation (ATPG) for sequential circuits under the single stuck-at fault model. We present HPGAST, a parallel genetic algorithm running on a Beowulf PC cluster. HPGAST is a parallel version of an existing GA-based ATPG built from two tools: the PGAPack parallel genetic algorithm library, which evolves candidate test vectors, and the HOPE fault simulator, which computes the fitness of each candidate test vector. HPGAST is evaluated on the ISCAS89 benchmark circuits, running on PIRUN, a 72-node PC cluster. The experimental results show high fault coverage for mutation probabilities of 0.2 and 0.3. Speedup increases as the number of processors grows from 2 through 32, but mostly decreases when moving from 32 to 64 processors. The speedup of the larger circuits is better than that of the smaller circuits.

1. Introduction

The objective of automated test pattern generation (ATPG) is to find a test sequence that, when applied to the circuit, enables testers to distinguish between the correct circuit and any circuit containing a modeled fault. A test sequence's effectiveness is measured by the fault coverage achieved for that fault model and by the number of generated vectors, which is directly proportional to test application time. ATPG for combinational circuits is relatively easy: all inputs of the combinational part of the circuit (primary inputs and state variables) can be assigned arbitrary values, and a fault effect is observable on any output (circuit outputs and state variables). Test generation for sequential circuits is more complex because state lines cannot be directly controlled or observed.
For testing sequential circuits, simulation-based techniques handle complex component types more easily than deterministic techniques. In a simulation-based approach, the fault simulator processes only in the forward direction, no backtracking is required, and various fault models can be accommodated. The basic principles of the GA were first laid down by Holland [1]. Our goal in this work is to implement simulation-based test generation within the genetic algorithm framework described by Goldberg [2]. A GA maintains a population of individuals, each of which is a candidate solution represented as a string (chromosome) of elements (genes). A fitness value is assigned to each individual by a fitness function. The population is initialized with random strings, and the evolutionary operators of selection, crossover, and mutation are used to generate an entirely new population from the existing one.
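The evolutionary loop just described can be sketched as follows. This is a minimal illustration only, not the HPGAST implementation: the truncation selection and one-point crossover are placeholder operators, and the fitness function is a toy stand-in for fault simulation.

```python
import random

def evolve(fitness, chrom_len=8, pop_size=16, generations=60, p_mut=0.1):
    """Minimal generational GA: selection, crossover, and mutation."""
    pop = [[random.randint(0, 1) for _ in range(chrom_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        parents = ranked[:pop_size // 2]            # keep the fitter half
        pop = []
        while len(pop) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, chrom_len)    # one-point crossover
            child = a[:cut] + b[cut:]
            pop.append([g ^ (random.random() < p_mut) for g in child])
    return max(pop, key=fitness)

# Toy fitness: count of ones ("one-max"); HPGAST instead scores each
# candidate by the faults it detects under HOPE simulation.
best = evolve(fitness=sum)
print(sum(best))
```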
Parallel GAs are particularly easy to implement and promise substantial performance gains, so there has been extensive research in this field. The simplest approach to parallelizing a GA is global parallelization: a single population is kept, as in the serial GA, but the evaluation of individuals and the genetic operators are explicitly parallelized. This method is relatively easy to implement, and a significant speedup can be expected if the communication cost does not dominate the computation cost [3]. On a distributed computer, the population can be stored on one processor. This master processor is responsible for sending individuals to the other processors (the slaves) for evaluation, collecting the results, and applying the genetic operators to produce the next generation. The Beowulf PC cluster was pioneered by the Beowulf Project at NASA [4,5]. A Beowulf PC cluster is a Linux PC cluster, a supercomputing-class system built from PCs and the Linux operating system, and is now one of the most widely adopted platforms among high-performance computing research communities. In this paper, we propose a prototype named HPGAST (High Performance GA-based Sequential circuit Test generation on Beowulf PC-Cluster). Our work uses PGAPack [6], a parallel genetic algorithm library, to evolve candidate test vectors; HOPE [7], a fault simulator, to compute the fitness of each candidate test vector; and a Beowulf PC cluster to improve the speedup of the ATPG system. The next section reviews related work on sequential and parallel genetic algorithms for sequential circuit ATPG. We then describe the design and implementation of HPGAST, report experimental results for the ISCAS89 [8] sequential benchmark circuits, and present conclusions.

2. Related work

The GA was first used as a framework for simulation-based test generation in [9,10].
The CRIS test generator [9] uses a logic simulator to evaluate candidate test sequences and a heuristic crossover scheme to incorporate problem-specific knowledge. The test sets it generated often had low fault coverage. In a later version of CRIS [11], fault simulation was used to evaluate candidate tests after the easy-to-test faults were detected. Fault coverage improved for many circuits, but execution time also increased. GATEST is a genetic algorithm framework for sequential circuit test generation [12,13]. GATEST is organized in two parts: in the first, the GA generates single test vectors that increase the value of the already generated test sequence; in the second, the GA generates test sequences. Various GA parameters were studied, including alphabet size, fitness function, generation gap, population size, and mutation rate. The best results were obtained with tournament selection without replacement and uniform crossover. The authors recommend a population size of 16 or 32 to reduce execution time. Non-overlapping populations gave the highest fault coverage. DIGATE is organized in three phases [14]. The first phase selects as the target fault the one with the maximum activity so far, the second phase aims at activating the target fault, and the third phase looks for a sequence able to make the target fault observable at the circuit's primary outputs. The main innovation in DIGATE is the pre-computed distinguishing sequence, which propagates a fault effect from a single flip-flop to the primary outputs. GATTO is a GA-based test generator for large sequential circuits [15,16]. GATTO targeted a single fault at a time, and the approach was later extended to target 64 faults simultaneously [16]. Its fitness function is defined similarly to CRIS's, but the meanings of the three phases are different; moreover, it optimizes the whole test sequence.
GATTO+ is an enhanced version of GATTO in terms of test length minimization and fault excitation [17]. Parallel genetic algorithms were first applied to sequential circuit test generation in the distributed algorithm GATTO [18]. Distributed GATTO uses the computational power of a workstation network, implementing the distributed genetic algorithm with the PVM library for message passing and process spawning. A master process is in charge of executing the kernel of the overall algorithm, while a slave process can be activated on a remote workstation each time the fault simulation of a sequence is required. Several fault simulation processes thus work in parallel in many phases of the algorithm, while communication and synchronization points are reduced. Scalability is good for a small number of slaves but poor for a large number, since the master becomes a bottleneck and the slaves are often idle. ProperGATEST consists of three parallel genetic algorithms [19] built with the ProperCAD II library [20]. The first algorithm is a parallel version of the sequential algorithm that produces the same results as the sequential algorithm. The second uses a parallel search strategy in which each processor executes the sequential genetic algorithm with a different seed, and migration is used to share information between processors. The third is a subpopulation-based version of the second, in which subpopulations are distributed across processors and information migrates from one processor to another. The first algorithm provided significant speedup without degrading the quality of the results. The second improved the quality of the results and is a highly scalable implementation. The third reduces the workload on the processors by exploiting the benefits of the randomized migration strategy.
2.1 HOPE Modification

HOPE is a fault simulator for synchronous sequential circuits [7]. It was developed in the Bradley Department of Electrical and Computer Engineering, Virginia Polytechnic Institute & State University. It employs the parallel fault simulation technique together with several heuristics that reduce the parallel fault simulation time. HOPE is based on an earlier fault simulator called PROOFS and adds three new techniques that substantially speed up parallel fault simulation: a reduction of the faults simulated in parallel by mapping non-stem faults to stem faults, a new fault injection method, and a combination of static and dynamic fault ordering. We use HOPE to evaluate the fitness of each candidate test. During fault simulation, the good and faulty circuit states are updated after each vector is simulated; we store these states before each test is applied and restore them afterwards. For each candidate test, we record the number of faults it detects and the number of fault effects propagated to flip-flops.

2.2 PGAPack

PGAPack is a public-domain software package developed by researchers in the MCS Division at Argonne National Laboratory [6]. It is a parallel genetic algorithm library intended to provide, in an integrated, seamless, and portable manner, most of the capabilities desired in a genetic algorithm package. It supports parallel and sequential implementations of the single-population global model (GM) based on the MPI message passing protocol. The parallel implementation uses a master/slave algorithm: the master process executes all steps of the genetic algorithm except the function evaluations, which are executed by the slave processes.
The parallel implementation of the GM produces the same results as the sequential implementation, usually faster. If two processes are used, both the master and the slave compute function evaluations. If more than two processes are used, the master is responsible for bookkeeping only, and the slaves execute the function evaluations. The speedup is limited by the number of function evaluations that can be executed in parallel, which depends on the population size and the number of new strings created in each generation.

2.3 PIRUN Beowulf PC-Cluster [21,22]

We implemented our parallel ATPG on PIRUN (Pile of Inexpensive and Redundant Universal Nodes), a Beowulf PC cluster belonging to the SPMD class of parallel computers. Both NFS and message-passing traffic can be configured to pass through its full-duplex routes in a convenient way, depending on the application being run. PIRUN's nodes fall into three main types:

CSN (Computing Service Nodes): nodes that users log on to do their work. The CSN consists of 72 diskless nodes, each a 500 MHz Pentium III with 128 MB of memory.

FSN (File Server Nodes): serve as a central file system for the CSN. There are 3 FSNs, each a 500 MHz Pentium III Xeon with 512 MB of memory and 54 GB of Ultra2 SCSI disk with RAID (6x9 GB), for a total of 162 GB of disk space.

SMN (System Management Node): a 500 MHz Pentium III, the same as a CSN but with a local hard disk, used for management purposes.

PIRUN is interconnected by a full-duplex 100 Mbps Ethernet switch as the message passing network and a 100 Mbps Ethernet hub for NFS. Red Hat Linux 6.1 is used as the operating system, and MPICH 1.1.2 provides parallel programming support.

3. Design and Implementation

Given the functionality of a sequential genetic-algorithm-based ATPG system, we can follow the master/slave programming model to implement the parallel genetic algorithm.
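This master/slave split can be sketched in a few lines, here using Python's multiprocessing in place of MPI. All names are illustrative, and the fitness function is a cheap stand-in for the HOPE fault simulation that a real slave would run.

```python
import random
from multiprocessing import Pool

def fitness(individual):
    # Stand-in for the expensive slave-side evaluation
    # (in HPGAST this would be a HOPE fault simulation run).
    return sum(individual)

def run_master(pop_size=8, chrom_len=16, generations=5, n_slaves=4):
    """Master keeps the single global population and applies the genetic
    operators; slaves only evaluate fitness, in parallel."""
    pop = [[random.randint(0, 1) for _ in range(chrom_len)]
           for _ in range(pop_size)]
    with Pool(n_slaves) as slaves:
        for _ in range(generations):
            scores = slaves.map(fitness, pop)      # parallel evaluation step
            ranked = sorted(zip(scores, pop), reverse=True)
            parents = [ind for _, ind in ranked[:pop_size // 2]]
            # The master alone builds the next generation
            # (mutation only, for brevity).
            pop = [[g ^ (random.random() < 0.05)
                    for g in random.choice(parents)]
                   for _ in range(pop_size)]
    return max(pop, key=fitness)

if __name__ == "__main__":
    print(sum(run_master()))
```

As in PGAPack's global model, only the evaluation step runs on the slaves, so the result is the same as a sequential run with the same random choices.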
We use the PGAPack parallel genetic algorithm library to generate candidate test vectors and HOPE, a synchronous sequential circuit fault simulator, for fault simulation.

3.1 The Genetic Algorithm for ATPG

[Figure 1: Individual test vector generation: the GA loop adds a vector, evaluates its fitness with the HOPE fault simulator, and stops when no more progress is made]

[Figure 2: HPGAST system model: a master process coordinating several HOPE slave processes]
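The flow of Figure 1 can be sketched as an outer loop around the GA. Everything below is illustrative: the toy evolver stands in for the GA-plus-HOPE evaluation, and the fault identifiers are invented for the example.

```python
import random

def generate_test_sequence(evolve_vector, progress_limit=3):
    """Outer HPGAST-style loop (Figure 1): keep appending the best evolved
    vector until `progress_limit` consecutive vectors detect nothing new."""
    test_sequence, detected = [], set()
    stall = 0
    while stall < progress_limit:
        vector, new_faults = evolve_vector(detected)
        test_sequence.append(vector)
        if new_faults - detected:
            detected |= new_faults
            stall = 0                 # progress was made
        else:
            stall += 1                # noncontributing vector
    return test_sequence, detected

# Toy stand-in for the GA + HOPE evaluation: each call returns a random
# vector and pretends it detects one of six hypothetical faults.
def toy_evolver(already_detected):
    vector = [random.randint(0, 1) for _ in range(4)]
    return vector, {random.randrange(6)}

seq, faults = generate_test_sequence(toy_evolver)
print(len(seq), sorted(faults))
```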
HPGAST test generation is illustrated in Figures 1 and 2. Test vectors are generated repeatedly until no more progress is made. Each test vector is generated by HPGAST from a random initial population. The HOPE sequential circuit fault simulator evaluates the fitness of each candidate test vector, and the best vector evolved in any generation is selected. Our work focuses on speeding up execution by spawning a fault simulator on each slave processor. Candidate test vectors are sent to the slave processors for parallel execution; the Beowulf PC cluster provides the computational power by distributing the fault simulation tasks among the available processors. A master process handles the genetic operations and the overall algorithm, while a slave process is activated each time a fault simulation is required, so communication and synchronization time are reduced. The master process performs the following tasks: it handles all file I/O, reading the netlist and storing the generated test sequence; it spawns the slave processes; it initially distributes a copy of the internal format of the netlist and the fault list to each slave process; and it executes the algorithm, sending individuals to the slave processors, waiting for the fitness values, and updating the global data model. The slave processes perform fault simulation. Both the netlist and the fault list are stored in the local memory of each slave processor. The slaves compute fitness values, return the results to the master, and wait for a new job.

3.2 Problem encoding

A binary encoding is used to generate individual test vectors: each character of a chromosome in the population is mapped to a primary input, as shown in Figure 3.

[Figure 3: Problem encoding: each gene of a GA individual maps to one primary input of the sequential circuit]

3.3 Fitness function

The fitness of a candidate test is calculated using the fitness functions from GATEST [12,13], as follows.
Phase 1: fitness = (# flip-flops set) + (fraction of flip-flops changed)    (1)

Phase 2: fitness = (# faults detected) + (# fault effects propagated to flip-flops) / ((# faults) × (# flip-flops))    (2)

Phase 3: fitness = (# faults detected) + (# fault effects propagated to flip-flops) / ((# faults) × (# flip-flops)) + 2 × (# good and faulty circuit events) / ((# faults) × (# circuit nodes))    (3)
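The three phase fitness functions can be transcribed directly. This is a sketch only: the measurement names and the sample numbers below are illustrative, and in HPGAST every count comes from the HOPE fault simulator.

```python
def fitness(phase, m):
    """Candidate-vector fitness for Phases 1-3, following Eqs. (1)-(3).
    `m` holds the measurements returned by the fault simulator."""
    if phase == 1:
        return m["ffs_set"] + m["ffs_changed_fraction"]
    score = m["faults_detected"] + m["faults_to_ffs"] / (m["faults"] * m["ffs"])
    if phase == 3:
        # Good and faulty circuit activity rewards hard-to-detect faults.
        score += 2 * m["circuit_events"] / (m["faults"] * m["nodes"])
    return score

# Illustrative measurements for one candidate vector.
m = {"ffs_set": 14, "ffs_changed_fraction": 0.5,
     "faults_detected": 3, "faults_to_ffs": 20,
     "faults": 308, "ffs": 14, "circuit_events": 40, "nodes": 119}
print(fitness(1, m), fitness(2, m), fitness(3, m))
```

Note how the secondary terms are scaled to be much smaller than one detection, so they only break ties between vectors that detect the same number of faults.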
The objective of Phase 1 is to initialize the flip-flops. The fitness of a candidate vector is therefore a measure of the number of flip-flops set to a known (zero or one) state. We also include the fraction of flip-flops that changed value since the previous time frame, to differentiate test vectors that set the same number of flip-flops. The test generator switches to Phase 2 when all flip-flops are set. In this phase, test vectors are generated to maximize the number of faults detected, so the fitness of a candidate test vector reflects the number of faults it detects. To differentiate vectors that detect the same number of faults, we include the number of fault effects propagated to flip-flops in the fitness function, scaled by the number of faults simulated and the number of flip-flops. When a test vector cannot detect any additional fault, the test generator switches to Phase 3 and begins counting noncontributing test vectors. The objective of this phase is to find hard-to-detect faults. We add the good and faulty circuit activity levels to the two measures used in Phase 2. If a test vector is found that detects any fault before the number of noncontributing vectors reaches the progress limit, the test generator goes back to Phase 2, and the noncontributing vector count is reset to zero.

3.4 GA parameters

Various GA parameters are important in achieving good results. Given a sufficient population size and number of generations, the test vectors can be found, but execution time is directly proportional to both parameters. We generate test vectors with a random seed for the GA, a population size of 32, a maximum of 600 generations, and, as defaults, tournament selection without replacement and uniform crossover. We use a crossover probability of one; i.e., two individuals are always crossed in generating two new individuals. Mutation is used to prevent the loss of key characters at the various string positions.
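Two of the defaults named above, tournament selection without replacement and uniform crossover, can be sketched as follows. This is an illustration of the operators, not the PGAPack implementation.

```python
import random

def tournament_without_replacement(pop, fitness):
    """Shuffle the population into disjoint pairs; the fitter member of each
    pair becomes a parent, so every individual competes exactly once."""
    order = random.sample(range(len(pop)), len(pop))
    return [max(pop[i], pop[j], key=fitness)
            for i, j in zip(order[::2], order[1::2])]

def uniform_crossover(a, b):
    """Each child gene is taken from either parent with equal probability."""
    return [random.choice(pair) for pair in zip(a, b)]

pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(8)]
parents = tournament_without_replacement(pop, fitness=sum)
child = uniform_crossover(parents[0], parents[1])
print(len(parents), child)
```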
We vary the mutation probability from 0.1 to 1.0 to find the best value for each circuit.

4. Experimental Results

HPGAST was implemented around the HOPE sequential circuit fault simulator [7] and the PGAPack parallel genetic algorithm library [6]. Tests were generated for the ISCAS89 [8] sequential benchmark circuits on the PIRUN Beowulf PC cluster [21,22]. For the single-processor experiment, we used HPGAST to generate test patterns on a single processor. We fixed the number of generations at 600 to limit execution time, truncated the test sequence at the last vector that detects a fault, and ran each circuit five times. The results in Table 1 are averages of the five runs; a new random seed for the GA was used in each run. The effect of the mutation probability on fault coverage was also investigated: Table 1 shows results, averaged over five runs, for the various mutation rates used during test generation, with tournament selection without replacement and uniform crossover. The highest fault coverage is obtained at a mutation probability of 0.2 for S386 and S526, and 0.3 for S298 and S641. Table 2 shows the characteristics of the ISCAS89 benchmark circuits and the results of HPGAST. The numbers of PIs and gates shown exclude POs and fan-outs. The numbers of testable faults are taken from [23]. PIs is the number of primary inputs of the circuit, Gate is the number of gates, Seq. Depth is the depth of the circuit reported by the fault simulator, and Faults is the number of collapsed faults. The numbers of faults detected and the vector lengths of GATEST [13] are included for comparison, and the highest fault coverage achieved is highlighted in bold.
[Table 1: Faults detected and vector length for each circuit (S298, S386, S526, S641) at mutation probabilities from 0.1 to 1.0]

[Table 2: ISCAS89 circuit characteristics (PIs, gates, sequential depth, collapsed faults) and HPGAST results (faults detected, vector length) compared with GATEST]

For the parallel experiment, we increased the number of processors through 2, 4, 8, 16, 32, and 64. We fixed the number of generations at 600 to compare execution times under parallel processing. The results in Table 3 show the average execution time in minutes over five runs. The fault coverage and test vector lengths are mostly the same as in the single-processor experiment, because the parallel implementation of the GM produces the same results as the sequential implementation. Table 3 shows the execution times of HPGAST for the ISCAS89 sequential benchmark circuits under parallel processing. As the number of processors increases from 2 through 4, 8, 16, and 32, the execution times decrease; when increasing from 32 to 64 processors, the execution times mostly increase.

[Table 3: HPGAST parallel execution time in minutes versus number of processors for S298, S386, S526, and S641]
The speedup is calculated as:

Speedup = (sequential execution time) / (parallel execution time)    (4)

[Table 4: Speedup of HPGAST versus number of processors for S298, S386, S526, and S641]

Table 4 shows the speedup of HPGAST on the ISCAS89 benchmark circuits for 2, 4, 8, 16, 32, and 64 processors. In these parallel runs, the numbers of faults detected and the vector lengths are mostly unchanged, but the speedup increases. As the number of processors increases from 2 through 32, the speedup grows; when increasing from 32 to 64 processors, it mostly drops. The speedup for the benchmark circuits from 2 to 64 processors is shown in Figure 4; the larger circuits achieve better speedup than the smaller ones.

[Figure 4: Speedup of HPGAST versus number of processors for S298, S386, S526, and S641]

5. Conclusions

The HPGAST test generator was developed for sequential circuit test generation in the PIRUN Beowulf PC cluster environment. In the single-processor experiment, HPGAST achieved high fault coverage at mutation probabilities of 0.2 and 0.3 on the sample of ISCAS89 sequential benchmark circuits. In the parallel experiment, the speedup increases as the number of processors grows from 2 through 32, but when increasing from 32 to 64 processors the
speedup mostly decreases. The speedup results from 2 to 64 processors show that the larger circuits achieve better speedup than the smaller ones. In the future, we will continue working on ATPG performance in terms of fault coverage, test vector length, and execution time, and will report these and other results in more detail.

References

[1] J. H. Holland, Adaptation in Natural and Artificial Systems, MIT Press.
[2] D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley.
[3] E. Cantu-Paz, A Survey of Parallel Genetic Algorithms, IlliGAL Report.
[4] T. Sterling, D. J. Becker, D. Savarese, J. E. Dorband, U. A. Ranawake, and C. E. Paker, Beowulf: A Parallel Workstation for Scientific Computation, Proc. ICPP.
[5] D. Ridge, T. Sterling, D. J. Becker, and P. Merkey, Beowulf: A Parallel Workstation for Scientific Computation, Proc. of IEEE Aerospace 1997.
[6] D. Levine, User Guide to the PGAPack Parallel Genetic Algorithm Library, Argonne National Laboratory, Jan.
[7] H. K. Lee and D. S. Ha, HOPE: An Efficient Parallel Fault Simulator for Synchronous Sequential Circuits, IEEE Trans. Computer-Aided Design of Integrated Circuits and Systems, Sep. 1996.
[8] F. Brglez, D. Bryan, and K. Kozminski, Combinational profiles of sequential benchmark circuits, Proc. Int. Symp. Circuits and Systems, May 1989.
[9] D. G. Saab, Y. G. Saab, and J. A. Abraham, CRIS: A test cultivation program for sequential VLSI circuits, Proc. Int. Conf. Computer-Aided Design, Nov. 1992.
[10] M. Srinivas and L. M. Patnaik, A simulation-based test generation scheme using genetic algorithms, Proc. Int. Conf. VLSI Design, Jan. 1993.
[11] D. G. Saab, Y. G. Saab, and J. A. Abraham, Automatic test vector cultivation for sequential VLSI circuits using genetic algorithms, IEEE Trans. Computer-Aided Design, vol. 15, Oct. 1996.
[12] E. M. Rudnick, J. H. Patel, G. S. Greenstein, and T. M. Niermann, Sequential circuit test generation in a genetic algorithm framework, Proc. ACM/IEEE Design Automation Conf., Jun. 1994.
[13] E. M. Rudnick, J. H. Patel, G. S. Greenstein, and T. M. Niermann, A genetic algorithm framework for test generation, IEEE Trans. Computer-Aided Design of Integrated Circuits and Systems, Sep. 1997.
[14] M. S. Hsiao, E. M. Rudnick, and J. H. Patel, Automatic test generation using genetically-engineered distinguishing sequences, Proc. IEEE VLSI Test Symp., 1996.
[15] P. Prinetto, M. Rebaudengo, and M. Sonza Reorda, An automatic test pattern generator for large sequential circuits based on genetic algorithms, Proc. Int. Test Conf., Oct. 1994.
[16] F. Corno, P. Prinetto, M. Rebaudengo, and M. Sonza Reorda, GATTO: A genetic algorithm for automatic test pattern generation for large synchronous sequential circuits, IEEE Trans. Computer-Aided Design, vol. 15, Aug. 1996.
[17] F. Corno, P. Prinetto, M. Rebaudengo, M. Sonza Reorda, and R. Mosca, Advanced techniques for GA-based sequential ATPGs, IEEE Design & Test Conf., Mar.
[18] P. Prinetto, M. Rebaudengo, M. Sonza Reorda, and E. Veiluva, GATTO: An intelligent tool for automatic test pattern generation for digital circuits, IEEE Int. Conf. on Tools with Artificial Intelligence, Nov.
[19] D. Krishnaswamy, M. S. Hsiao, V. Saxena, E. M. Rudnick, J. H. Patel, and P. Banerjee, Parallel genetic algorithms for simulation-based sequential circuit test generation, IEEE VLSI Design Conf., 1997.
[20] S. Parkes, J. A. Chandy, and P. Banerjee, A library-based approach to portable, parallel, object-oriented programming: Interface, implementation and application, Proc. Supercomputing '94, 1994.
[21] P. Uthayopas, S. Sanguanpong, and Y. Poovarawan, Building a Large Beowulf Cluster System: PIRUN Experience, Proc. of the 4th ANSCSE, Mar.
[22] P. Uthayopas, S. Sanguanpong, and Y. Poovarawan, Building a Large Scale Internet Superserver for Academic Services with Linux Cluster Technology, International Workshop on Asia Pacific Advanced Network and Its Application (IWS-2000), Tsukuba, Japan, Feb. 2000.
[23] J. A. Waicukauski, P. A. Shupe, D. J. Giramma, and A. Matin, ATPG for ultra-large structured designs, Proc. Int. Test Conf., Sep. 1990.
More informationFault Modeling. Why model faults? Some real defects in VLSI and PCB Common fault models Stuck-at faults. Transistor faults Summary
Fault Modeling Why model faults? Some real defects in VLSI and PCB Common fault models Stuck-at faults Single stuck-at faults Fault equivalence Fault dominance and checkpoint theorem Classes of stuck-at
More informationCluster Computing at HRI
Cluster Computing at HRI J.S.Bagla Harish-Chandra Research Institute, Chhatnag Road, Jhunsi, Allahabad 211019. E-mail: jasjeet@mri.ernet.in 1 Introduction and some local history High performance computing
More informationSoftware Distributed Shared Memory Scalability and New Applications
Software Distributed Shared Memory Scalability and New Applications Mats Brorsson Department of Information Technology, Lund University P.O. Box 118, S-221 00 LUND, Sweden email: Mats.Brorsson@it.lth.se
More informationA Comparison on Current Distributed File Systems for Beowulf Clusters
A Comparison on Current Distributed File Systems for Beowulf Clusters Rafael Bohrer Ávila 1 Philippe Olivier Alexandre Navaux 2 Yves Denneulin 3 Abstract This paper presents a comparison on current file
More informationBusiness white paper. HP Process Automation. Version 7.0. Server performance
Business white paper HP Process Automation Version 7.0 Server performance Table of contents 3 Summary of results 4 Benchmark profile 5 Benchmark environmant 6 Performance metrics 6 Process throughput 6
More informationStream Processing on GPUs Using Distributed Multimedia Middleware
Stream Processing on GPUs Using Distributed Multimedia Middleware Michael Repplinger 1,2, and Philipp Slusallek 1,2 1 Computer Graphics Lab, Saarland University, Saarbrücken, Germany 2 German Research
More informationA Parallel Processor for Distributed Genetic Algorithm with Redundant Binary Number
A Parallel Processor for Distributed Genetic Algorithm with Redundant Binary Number 1 Tomohiro KAMIMURA, 2 Akinori KANASUGI 1 Department of Electronics, Tokyo Denki University, 07ee055@ms.dendai.ac.jp
More informationDavid Rioja Redondo Telecommunication Engineer Englobe Technologies and Systems
David Rioja Redondo Telecommunication Engineer Englobe Technologies and Systems About me David Rioja Redondo Telecommunication Engineer - Universidad de Alcalá >2 years building and managing clusters UPM
More informationOnline Remote Data Backup for iscsi-based Storage Systems
Online Remote Data Backup for iscsi-based Storage Systems Dan Zhou, Li Ou, Xubin (Ben) He Department of Electrical and Computer Engineering Tennessee Technological University Cookeville, TN 38505, USA
More informationAn Empirical Study and Analysis of the Dynamic Load Balancing Techniques Used in Parallel Computing Systems
An Empirical Study and Analysis of the Dynamic Load Balancing Techniques Used in Parallel Computing Systems Ardhendu Mandal and Subhas Chandra Pal Department of Computer Science and Application, University
More informationCellular Computing on a Linux Cluster
Cellular Computing on a Linux Cluster Alexei Agueev, Bernd Däne, Wolfgang Fengler TU Ilmenau, Department of Computer Architecture Topics 1. Cellular Computing 2. The Experiment 3. Experimental Results
More informationHigh Performance Cluster Support for NLB on Window
High Performance Cluster Support for NLB on Window [1]Arvind Rathi, [2] Kirti, [3] Neelam [1]M.Tech Student, Department of CSE, GITM, Gurgaon Haryana (India) arvindrathi88@gmail.com [2]Asst. Professor,
More informationEvolutionary SAT Solver (ESS)
Ninth LACCEI Latin American and Caribbean Conference (LACCEI 2011), Engineering for a Smart Planet, Innovation, Information Technology and Computational Tools for Sustainable Development, August 3-5, 2011,
More informationIBM ^ xseries ServeRAID Technology
IBM ^ xseries ServeRAID Technology Reliability through RAID technology Executive Summary: t long ago, business-critical computing on industry-standard platforms was unheard of. Proprietary systems were
More informationFault-Tolerant Framework for Load Balancing System
Fault-Tolerant Framework for Load Balancing System Y. K. LIU, L.M. CHENG, L.L.CHENG Department of Electronic Engineering City University of Hong Kong Tat Chee Avenue, Kowloon, Hong Kong SAR HONG KONG Abstract:
More informationLoad Balancing on a Grid Using Data Characteristics
Load Balancing on a Grid Using Data Characteristics Jonathan White and Dale R. Thompson Computer Science and Computer Engineering Department University of Arkansas Fayetteville, AR 72701, USA {jlw09, drt}@uark.edu
More informationChapter 6. 6.1 Introduction. Storage and Other I/O Topics. p. 570( 頁 585) Fig. 6.1. I/O devices can be characterized by. I/O bus connections
Chapter 6 Storage and Other I/O Topics 6.1 Introduction I/O devices can be characterized by Behavior: input, output, storage Partner: human or machine Data rate: bytes/sec, transfers/sec I/O bus connections
More informationSAN Conceptual and Design Basics
TECHNICAL NOTE VMware Infrastructure 3 SAN Conceptual and Design Basics VMware ESX Server can be used in conjunction with a SAN (storage area network), a specialized high speed network that connects computer
More informationQuantifying the Performance Degradation of IPv6 for TCP in Windows and Linux Networking
Quantifying the Performance Degradation of IPv6 for TCP in Windows and Linux Networking Burjiz Soorty School of Computing and Mathematical Sciences Auckland University of Technology Auckland, New Zealand
More informationSystem Management Framework and Tools for Beowulf Cluster
System Management Framework and Tools for Beowulf Cluster Putchong Uthayopas, Surachai Paisitbenchapol, Thara Angskun, Jullawadee Maneesilp Computer and Network System Research Laboratory, Department of
More informationRecommended hardware system configurations for ANSYS users
Recommended hardware system configurations for ANSYS users The purpose of this document is to recommend system configurations that will deliver high performance for ANSYS users across the entire range
More informationA Comparison of Genotype Representations to Acquire Stock Trading Strategy Using Genetic Algorithms
2009 International Conference on Adaptive and Intelligent Systems A Comparison of Genotype Representations to Acquire Stock Trading Strategy Using Genetic Algorithms Kazuhiro Matsui Dept. of Computer Science
More informationInterconnect Efficiency of Tyan PSC T-630 with Microsoft Compute Cluster Server 2003
Interconnect Efficiency of Tyan PSC T-630 with Microsoft Compute Cluster Server 2003 Josef Pelikán Charles University in Prague, KSVI Department, Josef.Pelikan@mff.cuni.cz Abstract 1 Interconnect quality
More informationEvolutionary Prefetching and Caching in an Independent Storage Units Model
Evolutionary Prefetching and Caching in an Independent Units Model Athena Vakali Department of Informatics Aristotle University of Thessaloniki, Greece E-mail: avakali@csdauthgr Abstract Modern applications
More informationA Robust Method for Solving Transcendental Equations
www.ijcsi.org 413 A Robust Method for Solving Transcendental Equations Md. Golam Moazzam, Amita Chakraborty and Md. Al-Amin Bhuiyan Department of Computer Science and Engineering, Jahangirnagar University,
More informationChapter 1 - Web Server Management and Cluster Topology
Objectives At the end of this chapter, participants will be able to understand: Web server management options provided by Network Deployment Clustered Application Servers Cluster creation and management
More informationIOmark- VDI. Nimbus Data Gemini Test Report: VDI- 130906- a Test Report Date: 6, September 2013. www.iomark.org
IOmark- VDI Nimbus Data Gemini Test Report: VDI- 130906- a Test Copyright 2010-2013 Evaluator Group, Inc. All rights reserved. IOmark- VDI, IOmark- VDI, VDI- IOmark, and IOmark are trademarks of Evaluator
More informationDesign and Implementation of a Storage Repository Using Commonality Factoring. IEEE/NASA MSST2003 April 7-10, 2003 Eric W. Olsen
Design and Implementation of a Storage Repository Using Commonality Factoring IEEE/NASA MSST2003 April 7-10, 2003 Eric W. Olsen Axion Overview Potentially infinite historic versioning for rollback and
More informationUniversidad Simón Bolívar
Cardinale, Yudith Figueira, Carlos Hernández, Emilio Baquero, Eduardo Berbín, Luis Bouza, Roberto Gamess, Eric García, Pedro Universidad Simón Bolívar In 1999, a couple of projects from USB received funding
More informationdlbsim -AParallel Functional Logic Simulator Allowing Dynamic Load Balancing
Published: Proc. of DATE 01, S. 472-478, IEEE Press, 1. dlbsim -AParallel Functional Logic Simulator Allowing Dynamic Load Balancing Klaus Hering Chemnitz University of Technology Department of Computer
More informationAn Active Packet can be classified as
Mobile Agents for Active Network Management By Rumeel Kazi and Patricia Morreale Stevens Institute of Technology Contact: rkazi,pat@ati.stevens-tech.edu Abstract-Traditionally, network management systems
More informationDesign Issues in a Bare PC Web Server
Design Issues in a Bare PC Web Server Long He, Ramesh K. Karne, Alexander L. Wijesinha, Sandeep Girumala, and Gholam H. Khaksari Department of Computer & Information Sciences, Towson University, 78 York
More informationJob Management System Extension To Support SLAAC-1V Reconfigurable Hardware
Job Management System Extension To Support SLAAC-1V Reconfigurable Hardware Mohamed Taher 1, Kris Gaj 2, Tarek El-Ghazawi 1, and Nikitas Alexandridis 1 1 The George Washington University 2 George Mason
More informationEfficient DNS based Load Balancing for Bursty Web Application Traffic
ISSN Volume 1, No.1, September October 2012 International Journal of Science the and Internet. Applied However, Information this trend leads Technology to sudden burst of Available Online at http://warse.org/pdfs/ijmcis01112012.pdf
More informationDistributed communication-aware load balancing with TreeMatch in Charm++
Distributed communication-aware load balancing with TreeMatch in Charm++ The 9th Scheduling for Large Scale Systems Workshop, Lyon, France Emmanuel Jeannot Guillaume Mercier Francois Tessier In collaboration
More informationArchitecture bits. (Chromosome) (Evolved chromosome) Downloading. Downloading PLD. GA operation Architecture bits
A Pattern Recognition System Using Evolvable Hardware Masaya Iwata 1 Isamu Kajitani 2 Hitoshi Yamada 2 Hitoshi Iba 1 Tetsuya Higuchi 1 1 1-1-4,Umezono,Tsukuba,Ibaraki,305,Japan Electrotechnical Laboratory
More informationDistributed File System Performance. Milind Saraph / Rich Sudlow Office of Information Technologies University of Notre Dame
Distributed File System Performance Milind Saraph / Rich Sudlow Office of Information Technologies University of Notre Dame Questions to answer: Why can t you locate an AFS file server in my lab to improve
More informationEFFICIENT SCHEDULING STRATEGY USING COMMUNICATION AWARE SCHEDULING FOR PARALLEL JOBS IN CLUSTERS
EFFICIENT SCHEDULING STRATEGY USING COMMUNICATION AWARE SCHEDULING FOR PARALLEL JOBS IN CLUSTERS A.Neela madheswari 1 and R.S.D.Wahida Banu 2 1 Department of Information Technology, KMEA Engineering College,
More informationVarious Schemes of Load Balancing in Distributed Systems- A Review
741 Various Schemes of Load Balancing in Distributed Systems- A Review Monika Kushwaha Pranveer Singh Institute of Technology Kanpur, U.P. (208020) U.P.T.U., Lucknow Saurabh Gupta Pranveer Singh Institute
More information- Behind The Cloud -
- Behind The Cloud - Infrastructure and Technologies used for Cloud Computing Alexander Huemer, 0025380 Johann Taferl, 0320039 Florian Landolt, 0420673 Seminar aus Informatik, University of Salzburg Overview
More informationParallel Analysis and Visualization on Cray Compute Node Linux
Parallel Analysis and Visualization on Cray Compute Node Linux David Pugmire, Oak Ridge National Laboratory and Hank Childs, Lawrence Livermore National Laboratory and Sean Ahern, Oak Ridge National Laboratory
More informationSimplest Scalable Architecture
Simplest Scalable Architecture NOW Network Of Workstations Many types of Clusters (form HP s Dr. Bruce J. Walker) High Performance Clusters Beowulf; 1000 nodes; parallel programs; MPI Load-leveling Clusters
More informationLOAD BALANCING AS A STRATEGY LEARNING TASK
LOAD BALANCING AS A STRATEGY LEARNING TASK 1 K.KUNGUMARAJ, 2 T.RAVICHANDRAN 1 Research Scholar, Karpagam University, Coimbatore 21. 2 Principal, Hindusthan Institute of Technology, Coimbatore 32. ABSTRACT
More informationA GPU COMPUTING PLATFORM (SAGA) AND A CFD CODE ON GPU FOR AEROSPACE APPLICATIONS
A GPU COMPUTING PLATFORM (SAGA) AND A CFD CODE ON GPU FOR AEROSPACE APPLICATIONS SUDHAKARAN.G APCF, AERO, VSSC, ISRO 914712564742 g_suhakaran@vssc.gov.in THOMAS.C.BABU APCF, AERO, VSSC, ISRO 914712565833
More informationRESEARCH PAPER International Journal of Recent Trends in Engineering, Vol 1, No. 1, May 2009
An Algorithm for Dynamic Load Balancing in Distributed Systems with Multiple Supporting Nodes by Exploiting the Interrupt Service Parveen Jain 1, Daya Gupta 2 1,2 Delhi College of Engineering, New Delhi,
More informationA General Framework for Tracking Objects in a Multi-Camera Environment
A General Framework for Tracking Objects in a Multi-Camera Environment Karlene Nguyen, Gavin Yeung, Soheil Ghiasi, Majid Sarrafzadeh {karlene, gavin, soheil, majid}@cs.ucla.edu Abstract We present a framework
More informationDesign Verification and Test of Digital VLSI Circuits NPTEL Video Course. Module-VII Lecture-I Introduction to Digital VLSI Testing
Design Verification and Test of Digital VLSI Circuits NPTEL Video Course Module-VII Lecture-I Introduction to Digital VLSI Testing VLSI Design, Verification and Test Flow Customer's Requirements Specifications
More informationFigure 1. The cloud scales: Amazon EC2 growth [2].
- Chung-Cheng Li and Kuochen Wang Department of Computer Science National Chiao Tung University Hsinchu, Taiwan 300 shinji10343@hotmail.com, kwang@cs.nctu.edu.tw Abstract One of the most important issues
More informationV:Drive - Costs and Benefits of an Out-of-Band Storage Virtualization System
V:Drive - Costs and Benefits of an Out-of-Band Storage Virtualization System André Brinkmann, Michael Heidebuer, Friedhelm Meyer auf der Heide, Ulrich Rückert, Kay Salzwedel, and Mario Vodisek Paderborn
More informationClient/Server and Distributed Computing
Adapted from:operating Systems: Internals and Design Principles, 6/E William Stallings CS571 Fall 2010 Client/Server and Distributed Computing Dave Bremer Otago Polytechnic, N.Z. 2008, Prentice Hall Traditional
More informationClient/Server Computing Distributed Processing, Client/Server, and Clusters
Client/Server Computing Distributed Processing, Client/Server, and Clusters Chapter 13 Client machines are generally single-user PCs or workstations that provide a highly userfriendly interface to the
More informationMaking A Beowulf Cluster Using Sun computers, Solaris operating system and other commodity components
Making A Beowulf Cluster Using Sun computers, Solaris operating system and other commodity components 1. INTRODUCTION: Peter Wurst and Christophe Dupré Scientific Computation Research Center Rensselaer
More informationMultiobjective Multicast Routing Algorithm
Multiobjective Multicast Routing Algorithm Jorge Crichigno, Benjamín Barán P. O. Box 9 - National University of Asunción Asunción Paraguay. Tel/Fax: (+9-) 89 {jcrichigno, bbaran}@cnc.una.py http://www.una.py
More informationCluster Implementation and Management; Scheduling
Cluster Implementation and Management; Scheduling CPS343 Parallel and High Performance Computing Spring 2013 CPS343 (Parallel and HPC) Cluster Implementation and Management; Scheduling Spring 2013 1 /
More informationBinary search tree with SIMD bandwidth optimization using SSE
Binary search tree with SIMD bandwidth optimization using SSE Bowen Zhang, Xinwei Li 1.ABSTRACT In-memory tree structured index search is a fundamental database operation. Modern processors provide tremendous
More informationOPTIMIZED SENSOR NODES BY FAULT NODE RECOVERY ALGORITHM
OPTIMIZED SENSOR NODES BY FAULT NODE RECOVERY ALGORITHM S. Sofia 1, M.Varghese 2 1 Student, Department of CSE, IJCET 2 Professor, Department of CSE, IJCET Abstract This paper proposes fault node recovery
More informationGrid Scheduling Dictionary of Terms and Keywords
Grid Scheduling Dictionary Working Group M. Roehrig, Sandia National Laboratories W. Ziegler, Fraunhofer-Institute for Algorithms and Scientific Computing Document: Category: Informational June 2002 Status
More information- An Essential Building Block for Stable and Reliable Compute Clusters
Ferdinand Geier ParTec Cluster Competence Center GmbH, V. 1.4, March 2005 Cluster Middleware - An Essential Building Block for Stable and Reliable Compute Clusters Contents: Compute Clusters a Real Alternative
More informationEnabling Technologies for Distributed Computing
Enabling Technologies for Distributed Computing Dr. Sanjay P. Ahuja, Ph.D. Fidelity National Financial Distinguished Professor of CIS School of Computing, UNF Multi-core CPUs and Multithreading Technologies
More informationAgenda. HPC Software Stack. HPC Post-Processing Visualization. Case Study National Scientific Center. European HPC Benchmark Center Montpellier PSSC
HPC Architecture End to End Alexandre Chauvin Agenda HPC Software Stack Visualization National Scientific Center 2 Agenda HPC Software Stack Alexandre Chauvin Typical HPC Software Stack Externes LAN Typical
More informationAn Efficient load balancing using Genetic algorithm in Hierarchical structured distributed system
An Efficient load balancing using Genetic algorithm in Hierarchical structured distributed system Priyanka Gonnade 1, Sonali Bodkhe 2 Mtech Student Dept. of CSE, Priyadarshini Instiute of Engineering and
More informationViolin: A Framework for Extensible Block-level Storage
Violin: A Framework for Extensible Block-level Storage Michail Flouris Dept. of Computer Science, University of Toronto, Canada flouris@cs.toronto.edu Angelos Bilas ICS-FORTH & University of Crete, Greece
More informationA Robust Dynamic Load-balancing Scheme for Data Parallel Application on Message Passing Architecture
A Robust Dynamic Load-balancing Scheme for Data Parallel Application on Message Passing Architecture Yangsuk Kee Department of Computer Engineering Seoul National University Seoul, 151-742, Korea Soonhoi
More informationBENCHMARKING CLOUD DATABASES CASE STUDY on HBASE, HADOOP and CASSANDRA USING YCSB
BENCHMARKING CLOUD DATABASES CASE STUDY on HBASE, HADOOP and CASSANDRA USING YCSB Planet Size Data!? Gartner s 10 key IT trends for 2012 unstructured data will grow some 80% over the course of the next
More informationEnabling Technologies for Distributed and Cloud Computing
Enabling Technologies for Distributed and Cloud Computing Dr. Sanjay P. Ahuja, Ph.D. 2010-14 FIS Distinguished Professor of Computer Science School of Computing, UNF Multi-core CPUs and Multithreading
More informationOperating System for the K computer
Operating System for the K computer Jun Moroo Masahiko Yamada Takeharu Kato For the K computer to achieve the world s highest performance, Fujitsu has worked on the following three performance improvements
More informationUsing an MPI Cluster in the Control of a Mobile Robots System
Using an MPI Cluster in the Control of a Mobile Robots System Mohamed Salim LMIMOUNI, Saïd BENAISSA, Hicham MEDROMI, Adil SAYOUTI Equipe Architectures des Systèmes (EAS), Laboratoire d Informatique, Systèmes
More informationOracle Database Scalability in VMware ESX VMware ESX 3.5
Performance Study Oracle Database Scalability in VMware ESX VMware ESX 3.5 Database applications running on individual physical servers represent a large consolidation opportunity. However enterprises
More informationTesting Low Power Designs with Power-Aware Test Manage Manufacturing Test Power Issues with DFTMAX and TetraMAX
White Paper Testing Low Power Designs with Power-Aware Test Manage Manufacturing Test Power Issues with DFTMAX and TetraMAX April 2010 Cy Hay Product Manager, Synopsys Introduction The most important trend
More informationArchitecture of distributed network processors: specifics of application in information security systems
Architecture of distributed network processors: specifics of application in information security systems V.Zaborovsky, Politechnical University, Sait-Petersburg, Russia vlad@neva.ru 1. Introduction Modern
More informationAgenda. Enterprise Application Performance Factors. Current form of Enterprise Applications. Factors to Application Performance.
Agenda Enterprise Performance Factors Overall Enterprise Performance Factors Best Practice for generic Enterprise Best Practice for 3-tiers Enterprise Hardware Load Balancer Basic Unix Tuning Performance
More information