Performance Evaluation of Multi-core Processors with Varied Interconnect Networks
Ram Prasad Mohanty, Ashok Kumar Turuk, Bibhudatta Sahoo
Department of Computer Science and Engineering, National Institute of Technology Rourkela, India

Abstract — Multi-core and heterogeneous processor systems are widely used nowadays; even mobile devices have two or more cores to improve their performance. While these technologies are widely used, it is not clear which one performs better and which hardware configuration is optimal for a specific target domain. In this paper, we propose an interconnect architecture, the Multi-core Processor with Hybrid Network (MPHN), which substantially reduces the communication delay in multi-core processors. The performance of MPHN is compared with: (i) the multi-core processor with internal network, and (ii) the multi-core processor with ring network.

Index Terms — Multi-core processor; interconnect; performance analysis; impact of cache size.

I. INTRODUCTION

In today's scenario, multithreaded and multi-core processor systems are a common implementation across the computer industry, and even mobile devices increasingly ship with multiple cores. Multithreading and multi-core technology increase performance, but they require more power than single-threaded, single-core computers. Power was not an issue at the beginning of the computer era; however, it has become a critical design issue in computer systems [1]. Multi-threaded and multi-core systems also require more chip area than a single-threaded or single-core system [2]. The two technologies are conceptually different: a thread is the smallest unit of processing that can be scheduled by an operating system, and multiple threads can exist within the same process, sharing its resources while executing independently.
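The resource sharing among threads can be illustrated with a small sketch (not from the paper): several threads inside one process see the same variables, whereas separate cores each have private hardware resources.

```python
import threading

counter = 0
lock = threading.Lock()          # shared resource guarded by a lock

def worker(increments: int) -> None:
    """Each thread updates the same counter in the shared address space."""
    global counter
    for _ in range(increments):
        with lock:               # serialize access to the shared counter
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)                   # prints 4000: all four threads shared one variable
```

Without the lock, the concurrent increments could interleave and lose updates, which is precisely the kind of sharing hazard that per-core private resources avoid.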
However, cores in a multi-core system each have their own hardware resources for processing [3]. In this paper, we evaluate the performance of (i) the multi-core processor with internal network, (ii) the multi-core processor with ring network, and (iii) the multi-core processor with hybrid network, in order to find the better-performing configuration.

II. ARCHITECTURE AND BACKGROUND

Various works in the current literature have explored multi-core architectures using various performance metrics and application domains. Kumar et al. [4] analysed a single-ISA heterogeneous multi-core architecture with the objective of evaluating multithreaded workload performance. This section details the benefits of varying the interconnection network in a multi-core architecture under multithreaded workloads.

A. Exploration of Multi-core Architecture

Various works have analyzed performance in both single-core and multi-core architectures. Bui et al. [5] determined the relationship between performance and the memory system in single-core as well as multi-core architectures, using performance parameters such as cache size and core complexity, and discussed the effect of varying cache size and core complexity across the two kinds of architecture. Multi-core architectures with multiple types of on-chip interconnect network [6] are a recent trend; such architectures combine different interconnect networks with different core complexities and cache configurations. A few architectures of this type are described below:

1) Multi-core Processor with Internal Network: Fig. 1 shows the architecture of a multi-core processor with an internal network. This architecture has a private L1 cache dedicated to each core, unifying the instruction and data requests.
These L1 caches are connected to two L2 caches through a switch [7].

Fig. 1: Multi-core Processor with Internal Network
2) Multi-core Processor with Ring Network: The multi-core processor with ring network is shown in Fig. 2. Here each core has a private L1 cache, while the L2 cache communicates with the memory controller over an interconnect implemented as a bidirectional ring.

III. PROPOSED WORK

In this paper, we propose an interconnect architecture that substantially reduces the communication delay in multi-core processors. The performance of the proposed architecture is compared with the existing multi-core processor architectures detailed in Section II.

A. Multi-core Processor with Hybrid Network (MPHN)

The proposed multi-core processor architecture uses a hybrid network to connect the main memory modules with multiple cores. A typical configuration of this architecture is shown in Fig. 4. It has private L1 caches, and two L2 caches serve the independent higher-level L1 caches. Four cores of the processor share one level-2 cache bank, which communicates with the main memory banks through the proposed hybrid interconnect. The hybrid interconnect is built from four switches connected to each other by bidirectional links.

Fig. 2: Multi-core Processor with Ring Network

3) Switches: The network model shown in the architectures above consists of end nodes and switch nodes, together with the links that connect the input and output buffers of pairs of nodes. An end node can send packets to, and receive packets from, other end nodes; switches are used only to forward packets between switches or end nodes [8]. A link connects an end node to a switch. The internal architecture of a switch is shown in Fig. 3. The switch includes a crossbar that connects each input buffer with every output buffer.
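The motivation for the hybrid interconnect can be sketched with a small hop-count model: a 4-switch bidirectional ring versus four switches fully connected by bidirectional links, as in the proposed design. This is an illustrative model, not the authors' simulation; fewer average hops means lower communication delay, all else being equal.

```python
from collections import deque

def avg_hops(adjacency: dict) -> float:
    """Average shortest-path hop count over all ordered switch pairs."""
    total, pairs = 0, 0
    for src in adjacency:
        dist = {src: 0}
        queue = deque([src])
        while queue:                      # breadth-first search from src
            node = queue.popleft()
            for nbr in adjacency[node]:
                if nbr not in dist:
                    dist[nbr] = dist[node] + 1
                    queue.append(nbr)
        for dst, d in dist.items():
            if dst != src:
                total += d
                pairs += 1
    return total / pairs

# Four switches arranged as a bidirectional ring vs. fully connected.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
full = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}

print(avg_hops(ring))   # about 1.33: opposite switches are two hops apart
print(avg_hops(full))   # 1.0: every switch reaches every other directly
```

The fully connected switch fabric removes the multi-hop paths that the ring requires, which is consistent with the lower communication delay claimed for the hybrid network.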
A packet at any input buffer can be routed to any output buffer at the tail of the switch [9].

Fig. 3: Switch Architecture Model

Fig. 4: Multi-core Processor with Hybrid Network (cores 0-7 with private L1 caches L1-0 to L1-7, two L2 switches, interconnect switches Sw0-Sw3, and main memory modules MM-0 to MM-3)

The paper is organized as follows: Section III describes the proposed work, Section IV provides a detailed description of the simulation results, and Section V gives concluding remarks and the future direction of our work.

B. Parameters used for Performance Analysis

The parameters of interest for the performance analysis are outlined as follows:

Execution Time: the product of the clock cycle time and the sum of the CPU cycles and the memory stall cycles [10][11]:

    CPU_time = clk × t    (1)

where CPU_time is the CPU time, clk is the number of CPU clock cycles for the program, and t is the clock cycle time.

Speedup: Amdahl started with the intuitively clear statement that program speedup is a function of the fraction of a program that is enhanced and of the extent to which that fraction is enhanced. The overall speedup is the ratio of the old and new execution times:

    s = p_0 / p_1 = e_o / e_n    (2)

where s is the speedup, p_0 is the performance for the entire task using the enhancement when possible, and p_1 is the performance for the entire task without using the enhancement.
e_o is the old execution time and e_n is the new execution time.

Speedup (overall): the ratio of the old execution time to the new execution time:

    s_o = 1 / ((1 - f_e) + f_e / s_e)    (3)

where s_o is the overall speedup, f_e is the fraction enhanced, and s_e is the speedup of the enhanced fraction.

Execution Time (new): the new execution time depends on the fraction of the program that is enhanced and on the speedup of that enhanced fraction; it is computed from the old execution time as:

    e_n = e_o × ((1 - f_e) + f_e / s_e)    (4)

where e_n is the new execution time, e_o is the old execution time, f_e is the fraction enhanced, and s_e is the speedup of the enhanced fraction.

IV. SIMULATION RESULTS

The simulation work uses the SPLASH-2 benchmark suite and the Multi2Sim simulator.

A. Construction of Workload

With the advancement of processor architecture over time, the benchmarks once used to measure processor performance are no longer as practical as they were, owing to their inability to stress the new architectures to their utmost capacity in terms of clock cycles, cache, main memory and I/O bandwidth [12]. Hence new and enhanced benchmarks have to be developed and used. Bienia et al. [13] compared SPLASH-2 and PARSEC on chip multiprocessors. SPLASH-2, a descendant of the SPLASH benchmark, has concentrated workloads based on real applications; other such benchmarks include SPEC CPU2006 and Mini Bench. SPLASH-2 contains 11 programs. All experiments were run on systems with a 32-bit Linux operating system and Intel Core Duo processors using the Multi2Sim simulator. We used the fft program of the benchmark suite for the performance evaluation.
B. Approach for Simulation

The simulator used for experimentation is Multi2Sim [9], which integrates significant characteristics of popular simulators, such as separate functional and timing simulation, SMT and multiprocessor support, and cache coherence. Multi2Sim is an application-only tool intended to simulate x86 binary executables. The simulations are run on the multi-core architectures detailed above; in each group, the simulation case is determined by varying the configuration of caches, cores, or network connections [4].

C. Simulation Results

Using the Multi2Sim simulator, the multi-core architecture with internal network shown in Fig. 1 was modelled. The number of cores was kept constant at 8 while the L1 and L2 cache sizes were varied: first the L2 cache size was varied with the L1 cache size held constant, and then the L1 cache size was varied over the same set of L2 cache sizes, and the execution time was analysed. The results thus obtained are displayed in Table I.

TABLE I: Execution time for MPIN (L1 cache sizes 32KB-512KB against L2 cache sizes in MB; the numeric entries are not legible in the source)

The multi-core architecture with ring network, shown in Fig. 2, was modelled with 8 cores using the Multi2Sim simulator. The number of cores was kept constant while the L2 cache size was varied for every L1 cache size in the set, and the execution time was analysed; the details are provided in Table II. The multi-core architecture with hybrid network, shown in Fig. 4, was likewise modelled with 8 cores, varying the L2 cache size for every L1 cache size in the set (Table III), and the execution time was analysed. The analysis of these results is depicted in the following figures for each architecture separately.
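The sweep just described can be sketched as a configuration grid. The L1 sizes are taken from the figure legends; the L2 sizes here are an assumption for illustration, and the call into the simulator is left abstract rather than guessing Multi2Sim's actual command-line interface.

```python
from itertools import product

# Core count is fixed at 8 for every architecture; each L1 size is paired
# with every L2 size. L2_SIZES_MB is an assumed, illustrative set.
L1_SIZES_KB = [32, 64, 128, 256, 512]
L2_SIZES_MB = [1, 2, 4]

def run_simulation(arch: str, l1_kb: int, l2_mb: int) -> None:
    # Placeholder for a Multi2Sim run of the fft benchmark on `arch`.
    print(f"{arch}: cores=8 L1={l1_kb}KB L2={l2_mb}MB")

for arch in ("MPIN", "MPRN", "MPHN"):
    for l1_kb, l2_mb in product(L1_SIZES_KB, L2_SIZES_MB):
        run_simulation(arch, l1_kb, l2_mb)
```

Each printed line corresponds to one row of Tables I-III, with the measured execution time filled in by the simulator.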
Fig. 5 shows the graph of CPU execution time for the multi-core processor with internal network, Fig. 6 shows the bar graph of CPU execution time for the multi-core processor with ring network, and Fig. 7 shows the graph of CPU execution time for the multi-core processor with hybrid network.

TABLE II: Execution time for MPRN (L1 cache sizes 32KB-512KB against L2 cache sizes in MB; the numeric entries are not legible in the source)

TABLE III: Execution time for MPHN (L1 cache sizes 32KB-512KB against L2 cache sizes in MB; the numeric entries are not legible in the source)

Fig. 5: Execution Time for MPIN (curves for L1 = 32KB, 64KB, 128KB, 256KB and 512KB)

Fig. 6: Execution Time for MPRN (bars for L1 = 32KB, 64KB, 128KB, 256KB and 512KB)

Fig. 7: Execution Time for MPHN (curves for L1 = 32KB, 64KB, 128KB, 256KB and 512KB)

Cache size has a great impact on the performance of the multi-core processor architecture, as can be deduced from the results obtained. However, as the cache size increases, the growth in performance diminishes beyond a certain cache size. We can also see that the multi-core processor with hybrid network provides the lowest execution time of the three architectures. The speedup has been computed using expression (2), where the old execution time is the execution time obtained by executing the same benchmark program on a single-core processor. The speedup obtained is depicted in the following figures: Fig. 8 for the multi-core architecture with internal network, Fig. 9 for the multi-core architecture with ring network, and Fig. 10 for the multi-core architecture with hybrid network.

Fig. 8: Speedup for MPIN (curves for L1 = 32KB, 64KB, 128KB, 256KB and 512KB)

Fig. 9: Speedup for MPRN (curves for L1 = 32KB, 64KB, 128KB, 256KB and 512KB)
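Expressions (2)-(4) and the single-core baseline can be illustrated with a small worked sketch (made-up numbers, not the paper's measurements):

```python
def new_execution_time(e_old: float, f_e: float, s_e: float) -> float:
    """Expression (4): new time from old time, enhanced fraction, and its speedup."""
    return e_old * ((1 - f_e) + f_e / s_e)

def overall_speedup(f_e: float, s_e: float) -> float:
    """Expression (3): Amdahl's overall speedup."""
    return 1.0 / ((1 - f_e) + f_e / s_e)

e_old = 100.0            # single-core execution time (baseline)
f_e, s_e = 0.8, 4.0      # 80% of the program runs 4x faster

e_new = new_execution_time(e_old, f_e, s_e)
print(e_new)                       # 40.0
print(e_old / e_new)               # 2.5, the speedup of expression (2)
print(overall_speedup(f_e, s_e))   # 2.5, matching expression (3)
```

Note how the serial 20% of the program caps the overall speedup well below the factor of 4 achieved on the enhanced fraction.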
Fig. 10: Speedup for MPHN (curves for L1 = 32KB, 64KB, 128KB, 256KB and 512KB)

V. CONCLUSIONS & FUTURE WORK

Communication delay is one of the big issues to address in developing future processor architectures. In this paper, we discussed the impact of cache size and interconnect on multi-core processor performance. Our conclusions apply to multi-core architectures composed of processors with a shared L2 cache running parallel applications. We conducted an experimental evaluation of our proposed interconnect architecture and two other existing architectures, considering the number of cores and the cache size, and showed that the multi-core processor with hybrid interconnect achieves better performance than the two existing architectures. Some factors not directly addressed in this paper, such as power, area, and reliability, may also affect design choices; evaluating these alternatives is left for future research.

REFERENCES

[1] D. Pham, S. Asano, M. Bolliger, M. Day, H. Hofstee, C. Johns, J. Kahle, A. Kameyama, J. Keaty, Y. Masubuchi et al., "The design and implementation of a first-generation Cell processor," in Solid-State Circuits Conference, 2005. Digest of Technical Papers. ISSCC 2005 IEEE International. IEEE, 2005.
[2] B. B. Brey, The Intel Microprocessors. Prentice Hall Press, 2008.
[3] D. Geer, "Chip makers turn to multicore processors," Computer, vol. 38, no. 5, 2005.
[4] R. Kumar, D. Tullsen, P. Ranganathan, N. Jouppi, and K. Farkas, "Single-ISA heterogeneous multi-core architectures for multithreaded workload performance," in ACM SIGARCH Computer Architecture News, vol. 32, no. 2. IEEE Computer Society, 2004, p. 64.
[5] J. Bui, C. Xu, and S. Gurumurthi, "Understanding performance issues on both single core and multi-core architecture," Technical report, University of Virginia, Department of Computer Science, Charlottesville, 2007.
[6] R. Ubal, J. Sahuquillo, S. Petit, P. López, Z.
Chen, and D. Kaeli, "The Multi2Sim simulation framework."
[7] R. Ubal, B. Jang, P. Mistry, D. Schaa, and D. Kaeli, "Multi2Sim: a simulation framework for CPU-GPU computing," in Proceedings of the 21st International Conference on Parallel Architectures and Compilation Techniques. ACM, 2012.
[8] K. Hwang, Advanced Computer Architecture. Tata McGraw-Hill Education, 2003.
[9] R. Ubal, J. Sahuquillo, S. Petit, and P. López, "Multi2Sim: A simulation framework to evaluate multicore-multithread processors," in IEEE 19th International Symposium on Computer Architecture and High Performance Computing, 2007.
[10] C. Hughes and T. Hughes, Professional Multicore Programming: Design and Implementation for C++ Developers. Wrox, 2011.
[11] J. L. Hennessy and D. A. Patterson, Computer Architecture: A Quantitative Approach. Elsevier, 2012.
[12] S. Akhter and J. Roberts, Multi-core Programming. Intel Press, 2006, vol. 33.
[13] C. Bienia, S. Kumar, and K. Li, "PARSEC vs. SPLASH-2: A quantitative comparison of two multithreaded benchmark suites on chip-multiprocessors," in Workload Characterization, 2008. IISWC 2008. IEEE International Symposium on. IEEE, 2008.
[14] M. Monchiero, R. Canal, and A. González, "Design space exploration for multicore architectures: a power/performance/thermal view," in Proceedings of the 20th Annual International Conference on Supercomputing. ACM, 2006.
[15] D. Gove, Multicore Application Programming: For Windows, Linux, and Oracle Solaris. Addison-Wesley Professional, 2010.
[16] J. Fruehe, "Multicore processor technology," 2005.
[17] P. Gorder, "Multicore processors for science and engineering," Computing in Science & Engineering, vol. 9, no. 2, 2007.
[18] L. Peng, J. Peir, T. Prakash, Y. Chen, and D. Koppelman, "Memory performance and scalability of Intel's and AMD's dual-core processors: a case study," in Performance, Computing, and Communications Conference, 2007. IPCCC 2007. IEEE International. IEEE, 2007.
[19] B.
Jun-Feng, "Application development methods based on multi-core systems," American Journal of Engineering and Technology Research, vol. 11, no. 9, 2011.
[20] D. Hackenberg, D. Molka, and W. E. Nagel, "Comparing cache architectures and coherency protocols on x86-64 multicore SMP systems," in Proceedings of the 42nd Annual IEEE/ACM International Symposium on Microarchitecture. ACM, 2009.
[21] B. Schauer, "Multicore processors a necessity," ProQuest Discovery Guides, 2008.
[22] R. Merritt, "CPU designers debate multi-core future," EETimes Online, February 2008.
[23] S. Balakrishnan, R. Rajwar, M. Upton, and K. Lai, "The impact of performance asymmetry in emerging multicore architectures," in ACM SIGARCH Computer Architecture News, vol. 33, no. 2. IEEE Computer Society, 2005.
[24] W. Knight, "Two heads are better than one [dual-core processors]," IEE Review, vol. 51, no. 9, pp. 32-35, 2005.
[25] G. Tan, N. Sun, and G. R. Gao, "Improving performance of dynamic programming via parallelism and locality on multicore architectures," IEEE Transactions on Parallel and Distributed Systems, vol. 20, no. 2, 2009.
[26] S. C. Woo, M. Ohara, E. Torrie, J. P. Singh, and A. Gupta, "The SPLASH-2 programs: Characterization and methodological considerations," in ACM SIGARCH Computer Architecture News, vol. 23, no. 2. ACM, 1995.
[27] R. P. Mohanty, A. K. Turuk, and B. Sahoo, "Analysing the performance of multi-core architecture," in 1st International Conference on Computing, Communication and Sensor Networks (CCSN), vol. 6, PIET, Rourkela, Odisha, 2012.
[28] D. Stasiak, R. Chaudhry, D. Cox, S. Posluszny, J. Warnock, S. Weitzel, D. Wendel, and M. Wang, "Cell processor low-power design methodology," IEEE Micro, vol. 25, no. 6, 2005.
[29] J. A. Brown, R. Kumar, and D. Tullsen, "Proximity-aware directory-based coherence for multi-core processor architectures," in Proceedings of the Nineteenth Annual ACM Symposium on Parallel Algorithms and Architectures. ACM, 2007.
[30] D.
Olson, "Intel announces plan for up to 8-core processor," Slippery Brick, March 2008.
[31] X. Wang, G. Gan, J. Manzano, D. Fan, and S. Guo, "A quantitative study of the on-chip network and memory hierarchy design for many-core processor," in Parallel and Distributed Systems, 2008. ICPADS 2008. IEEE International Conference on. IEEE, 2008.
[32] L. Cheng, J. B. Carter, and D. Dai, "An adaptive cache coherence protocol optimized for producer-consumer sharing," in High Performance Computer Architecture, 2007. HPCA 2007. IEEE 13th International Symposium on. IEEE, 2007.
[33] R. Kumar, V. Zyuban, and D. M. Tullsen, "Interconnections in multi-core architectures: Understanding mechanisms, overheads and scaling," in Computer Architecture, 2005. ISCA '05. Proceedings. 32nd International Symposium on. IEEE, 2005.
How To Balance A Web Server With Remaining Capacity
Remaining Capacity Based Load Balancing Architecture for Heterogeneous Web Server System Tsang-Long Pao Dept. Computer Science and Engineering Tatung University Taipei, ROC Jian-Bo Chen Dept. Computer
Operating System Multilevel Load Balancing
Operating System Multilevel Load Balancing M. Corrêa, A. Zorzo Faculty of Informatics - PUCRS Porto Alegre, Brazil {mcorrea, zorzo}@inf.pucrs.br R. Scheer HP Brazil R&D Porto Alegre, Brazil [email protected]
Multi-core and Linux* Kernel
Multi-core and Linux* Kernel Suresh Siddha Intel Open Source Technology Center Abstract Semiconductor technological advances in the recent years have led to the inclusion of multiple CPU execution cores
Oracle Database Scalability in VMware ESX VMware ESX 3.5
Performance Study Oracle Database Scalability in VMware ESX VMware ESX 3.5 Database applications running on individual physical servers represent a large consolidation opportunity. However enterprises
Energy Constrained Resource Scheduling for Cloud Environment
Energy Constrained Resource Scheduling for Cloud Environment 1 R.Selvi, 2 S.Russia, 3 V.K.Anitha 1 2 nd Year M.E.(Software Engineering), 2 Assistant Professor Department of IT KSR Institute for Engineering
Lecture 2 Parallel Programming Platforms
Lecture 2 Parallel Programming Platforms Flynn s Taxonomy In 1966, Michael Flynn classified systems according to numbers of instruction streams and the number of data stream. Data stream Single Multiple
Dynamic resource management for energy saving in the cloud computing environment
Dynamic resource management for energy saving in the cloud computing environment Liang-Teh Lee, Kang-Yuan Liu, and Hui-Yang Huang Department of Computer Science and Engineering, Tatung University, Taiwan
Parallel Computing 37 (2011) 26 41. Contents lists available at ScienceDirect. Parallel Computing. journal homepage: www.elsevier.
Parallel Computing 37 (2011) 26 41 Contents lists available at ScienceDirect Parallel Computing journal homepage: www.elsevier.com/locate/parco Architectural support for thread communications in multi-core
Naveen Muralimanohar Rajeev Balasubramonian Norman P Jouppi
Optimizing NUCA Organizations and Wiring Alternatives for Large Caches with CACTI 6.0 Naveen Muralimanohar Rajeev Balasubramonian Norman P Jouppi University of Utah & HP Labs 1 Large Caches Cache hierarchies
Enabling Technologies for Distributed and Cloud Computing
Enabling Technologies for Distributed and Cloud Computing Dr. Sanjay P. Ahuja, Ph.D. 2010-14 FIS Distinguished Professor of Computer Science School of Computing, UNF Multi-core CPUs and Multithreading
Cluster Computing at HRI
Cluster Computing at HRI J.S.Bagla Harish-Chandra Research Institute, Chhatnag Road, Jhunsi, Allahabad 211019. E-mail: [email protected] 1 Introduction and some local history High performance computing
Performance of Software Switching
Performance of Software Switching Based on papers in IEEE HPSR 2011 and IFIP/ACM Performance 2011 Nuutti Varis, Jukka Manner Department of Communications and Networking (COMNET) Agenda Motivation Performance
HP ProLiant Gen8 vs Gen9 Server Blades on Data Warehouse Workloads
HP ProLiant Gen8 vs Gen9 Server Blades on Data Warehouse Workloads Gen9 Servers give more performance per dollar for your investment. Executive Summary Information Technology (IT) organizations face increasing
Hardware Implementation of Improved Adaptive NoC Router with Flit Flow History based Load Balancing Selection Strategy
Hardware Implementation of Improved Adaptive NoC Rer with Flit Flow History based Load Balancing Selection Strategy Parag Parandkar 1, Sumant Katiyal 2, Geetesh Kwatra 3 1,3 Research Scholar, School of
Experimental Evaluation of Horizontal and Vertical Scalability of Cluster-Based Application Servers for Transactional Workloads
8th WSEAS International Conference on APPLIED INFORMATICS AND MUNICATIONS (AIC 8) Rhodes, Greece, August 2-22, 28 Experimental Evaluation of Horizontal and Vertical Scalability of Cluster-Based Application
A Performance Study of Load Balancing Strategies for Approximate String Matching on an MPI Heterogeneous System Environment
A Performance Study of Load Balancing Strategies for Approximate String Matching on an MPI Heterogeneous System Environment Panagiotis D. Michailidis and Konstantinos G. Margaritis Parallel and Distributed
Effective Utilization of Multicore Processor for Unified Threat Management Functions
Journal of Computer Science 8 (1): 68-75, 2012 ISSN 1549-3636 2012 Science Publications Effective Utilization of Multicore Processor for Unified Threat Management Functions Sudhakar Gummadi and Radhakrishnan
Accelerate Cloud Computing with the Xilinx Zynq SoC
X C E L L E N C E I N N E W A P P L I C AT I O N S Accelerate Cloud Computing with the Xilinx Zynq SoC A novel reconfigurable hardware accelerator speeds the processing of applications based on the MapReduce
Enabling Technologies for Distributed Computing
Enabling Technologies for Distributed Computing Dr. Sanjay P. Ahuja, Ph.D. Fidelity National Financial Distinguished Professor of CIS School of Computing, UNF Multi-core CPUs and Multithreading Technologies
An Experimental Study of Load Balancing of OpenNebula Open-Source Cloud Computing Platform
An Experimental Study of Load Balancing of OpenNebula Open-Source Cloud Computing Platform A B M Moniruzzaman 1, Kawser Wazed Nafi 2, Prof. Syed Akhter Hossain 1 and Prof. M. M. A. Hashem 1 Department
A STUDY OF TASK SCHEDULING IN MULTIPROCESSOR ENVIROMENT Ranjit Rajak 1, C.P.Katti 2, Nidhi Rajak 3
A STUDY OF TASK SCHEDULING IN MULTIPROCESSOR ENVIROMENT Ranjit Rajak 1, C.P.Katti, Nidhi Rajak 1 Department of Computer Science & Applications, Dr.H.S.Gour Central University, Sagar, India, [email protected]
OpenPOWER Outlook AXEL KOEHLER SR. SOLUTION ARCHITECT HPC
OpenPOWER Outlook AXEL KOEHLER SR. SOLUTION ARCHITECT HPC Driving industry innovation The goal of the OpenPOWER Foundation is to create an open ecosystem, using the POWER Architecture to share expertise,
Comparative Analysis of Congestion Control Algorithms Using ns-2
www.ijcsi.org 89 Comparative Analysis of Congestion Control Algorithms Using ns-2 Sanjeev Patel 1, P. K. Gupta 2, Arjun Garg 3, Prateek Mehrotra 4 and Manish Chhabra 5 1 Deptt. of Computer Sc. & Engg,
Load Balancing on a Grid Using Data Characteristics
Load Balancing on a Grid Using Data Characteristics Jonathan White and Dale R. Thompson Computer Science and Computer Engineering Department University of Arkansas Fayetteville, AR 72701, USA {jlw09, drt}@uark.edu
TPCalc : a throughput calculator for computer architecture studies
TPCalc : a throughput calculator for computer architecture studies Pierre Michaud Stijn Eyerman Wouter Rogiest IRISA/INRIA Ghent University Ghent University [email protected] [email protected]
ROUTE MECHANISMS FOR WIRELESS ADHOC NETWORKS: -CLASSIFICATIONS AND COMPARISON ANALYSIS
International Journal of Science, Environment and Technology, Vol. 1, No 2, 2012, 72-79 ROUTE MECHANISMS FOR WIRELESS ADHOC NETWORKS: -CLASSIFICATIONS AND COMPARISON ANALYSIS Ramesh Kait 1, R. K. Chauhan
Improved LS-DYNA Performance on Sun Servers
8 th International LS-DYNA Users Conference Computing / Code Tech (2) Improved LS-DYNA Performance on Sun Servers Youn-Seo Roh, Ph.D. And Henry H. Fong Sun Microsystems, Inc. Abstract Current Sun platforms
Thread level parallelism
Thread level parallelism ILP is used in straight line code or loops Cache miss (off-chip cache and main memory) is unlikely to be hidden using ILP. Thread level parallelism is used instead. Thread: process
A Novel Way of Deduplication Approach for Cloud Backup Services Using Block Index Caching Technique
A Novel Way of Deduplication Approach for Cloud Backup Services Using Block Index Caching Technique Jyoti Malhotra 1,Priya Ghyare 2 Associate Professor, Dept. of Information Technology, MIT College of
Performance Monitoring of Parallel Scientific Applications
Performance Monitoring of Parallel Scientific Applications Abstract. David Skinner National Energy Research Scientific Computing Center Lawrence Berkeley National Laboratory This paper introduces an infrastructure
How To Build A Cloud Computer
Introducing the Singlechip Cloud Computer Exploring the Future of Many-core Processors White Paper Intel Labs Jim Held Intel Fellow, Intel Labs Director, Tera-scale Computing Research Sean Koehl Technology
PERFORMANCE ANALYSIS OF KERNEL-BASED VIRTUAL MACHINE
PERFORMANCE ANALYSIS OF KERNEL-BASED VIRTUAL MACHINE Sudha M 1, Harish G M 2, Nandan A 3, Usha J 4 1 Department of MCA, R V College of Engineering, Bangalore : 560059, India [email protected] 2 Department
