On-Chip Memory Architecture Exploration of Embedded System on Chip




On-Chip Memory Architecture Exploration of Embedded System on Chip

A Thesis Submitted for the Degree of Doctor of Philosophy in the Faculty of Engineering

by
T.S. Rajesh Kumar

Supercomputer Education and Research Centre
Indian Institute of Science
Bangalore 560 012

September 2008

To my Family, Sree, Amma, Advika and Adarsh

Abstract

Today's feature-rich multimedia products require embedded system solutions with complex Systems-on-Chip (SoCs) to meet market expectations of high performance at low cost and lower energy consumption. SoCs are complex designs with multiple embedded processors, memory subsystems, and application-specific peripherals. The memory architecture of embedded SoCs strongly influences the area, power and performance of the entire system. Further, the memory subsystem constitutes a major part (typically up to 70%) of the silicon area of a current-day SoC. The on-chip memory organization of embedded processors varies widely from one SoC to another, depending on the application and market segment for which the SoC is deployed. There is a wide variety of choices available to embedded designers, starting from simple on-chip SPRAM-based architectures to more complex cache-SPRAM based hybrid architectures. The performance of a memory architecture also depends on how the data variables of the application are placed in the memory. There are multiple data layouts for each memory architecture that are efficient from a power and performance viewpoint. Further, the designer would be interested in multiple optimal design points to address various market segments. Hence a memory architecture exploration for an embedded system involves evaluating a large design space, on the order of 100,000 design points, with each design point having several tens of thousands of data layouts. Due to its large impact on system performance parameters, the memory architecture is often hand-crafted by experienced designers exploring a very small subset of this design space. The vast memory design space prohibits any possibility of manual analysis. In this work, we propose an automated framework for on-chip memory architecture

exploration. Our proposed framework integrates memory architecture exploration and data layout to search the design space efficiently. While the memory exploration selects specific memory architectures, the data layout efficiently maps the given application onto the memory architecture under consideration and thus helps in evaluating the memory architecture. The proposed memory exploration framework works at both the logical and physical memory architecture levels. Our work addresses on-chip memory architectures for DSP processors that are organized as multiple memory banks, where each bank can be single- or dual-ported and the bank sizes can be non-uniform. Further, our work also addresses memory architecture exploration for on-chip memory architectures that combine SPRAM and cache. Our proposed method is based on a multi-objective Genetic Algorithm and outputs several hundred Pareto-optimal design solutions that are interesting from area, power and performance viewpoints within a few hours of running on a standard desktop configuration.

Acknowledgments

There are many people I would like to thank who have helped me in various ways. First and foremost, I would like to thank my supervisors, Prof. R. Govindarajan and Dr. C.P. Ravikumar, who have guided and supported me in various aspects through the entire journey of completing my thesis work. I profusely thank them for the encouragement they provided and their perseverance in keeping me focused on the Ph.D. work. I would like to express my gratitude to Texas Instruments for giving me the time and opportunity to pursue my studies. I would like to thank my colleagues at Texas Instruments for their support and reviews, in particular my manager, Balaji Holur. I would also like to thank my previous managers, Pamela Kumar and Manohar Sambandam. Last but not least, I would like to thank my dearest family members for the encouragement they provided and the sacrifices they made to help me achieve my goals.


Contents

Abstract
Acknowledgments
List of Publications from this Thesis

1 Introduction
  1.1 Application Specific Systems
  1.2 Memory Subsystem
    1.2.1 On-chip Memory Organization
    1.2.2 Cache-based Memory Organization
    1.2.3 Scratch Pad Memory-based Organization
  1.3 Data Layout
  1.4 Memory Architecture Exploration
  1.5 Embedded System Design Flow
  1.6 Contributions
  1.7 Thesis Overview
2 Background
  2.1 On-chip Memory Architecture of Embedded Processors
    2.1.1 DSP On-chip SPRAM Architecture
    2.1.2 Microcontroller Memory Architecture
  2.2 Software Optimizations
    2.2.1 DSP Software Optimizations
    2.2.2 MCU Software Optimizations
  2.3 Cache Based Embedded SoC
    2.3.1 Cache-SPRAM Based Hybrid On-chip Memory Architecture
  2.4 Genetic Algorithms - An Overview
  2.5 Multi-objective Multiple Design Points
3 Data Layout for Embedded Applications
  3.1 Introduction
  3.2 Method Overview and Problem Statement
    3.2.1 Method Overview
    3.2.2 Problem Statement
  3.3 ILP Formulation
    3.3.1 Basic Formulation
    3.3.2 Handling Multiple Memory Banks
    3.3.3 Handling SARAM and DARAM
    3.3.4 Overlay of Data Sections
    3.3.5 Swapping of Data
  3.4 Genetic Algorithm Formulation
  3.5 Heuristic Algorithm
    3.5.1 Data Partitioning into Internal and External Memory
    3.5.2 DARAM and SARAM Placements
  3.6 Experimental Methodology and Results
    3.6.1 Experimental Methodology
    3.6.2 Integer Linear Programming - Results
    3.6.3 Heuristic and GA Results
    3.6.4 Comparison of Heuristic Data Layout with GA
    3.6.5 Comparison of Different Approaches
  3.7 Related Work
  3.8 Conclusions
4 Logical Memory Exploration
  4.1 Introduction
  4.2 Method Overview
    4.2.1 Memory Architecture Parameters
    4.2.2 Memory Architecture Exploration Objectives
    4.2.3 Memory Architecture Exploration and Data Layout
  4.3 Genetic Algorithm Formulation
    4.3.1 GA Formulation for Memory Architecture Exploration
    4.3.2 Pareto Optimality and Non-Dominated Sorting
  4.4 Simulated Annealing Formulation
    4.4.1 Memory Subsystem Optimization
  4.5 Experimental Results
    4.5.1 Experimental Methodology
    4.5.2 Experimental Results
  4.6 Related Work
  4.7 Conclusions
5 Data Layout Exploration
  5.1 Introduction
  5.2 Problem Definition
  5.3 MODLEX: Multi Objective Data Layout EXploration
    5.3.1 Method Overview
    5.3.2 Mapping Logical Memory to Physical Memory
    5.3.3 Genetic Algorithm Formulation
  5.4 Experimental Results
    5.4.1 Experimental Methodology
    5.4.2 Experimental Results
    5.4.3 Comparison of MODLEX and Stand-alone Optimizations
  5.5 Related Work
  5.6 Conclusions
6 Physical Memory Exploration
  6.1 Introduction
  6.2 Logical Memory Exploration to Physical Memory Exploration (LME2PME)
    6.2.1 Method Overview
    6.2.2 Physical Memory Exploration
    6.2.3 Genetic Algorithm Formulation
  6.3 Direct Physical Memory Exploration (DirPME) Framework
    6.3.1 Method Overview
    6.3.2 Genetic Algorithm Formulation
  6.4 Experimental Methodology and Results
    6.4.1 Experimental Methodology
    6.4.2 Experimental Results from LME2PME
    6.4.3 Experimental Results from DirPME
    6.4.4 Comparison of LME2PME and DirPME
  6.5 Related Work
  6.6 Conclusions
7 Cache Based Architectures
  7.1 Introduction
  7.2 Solution Overview
  7.3 Data Partitioning Heuristic
  7.4 Cache Conscious Data Layout
    7.4.1 Overview
    7.4.2 Graph Partitioning Formulation
    7.4.3 Cache Offset Computation
  7.5 Experimental Methodology and Results
    7.5.1 Experimental Methodology
    7.5.2 Cache-Conscious Data Layout
    7.5.3 Cache-SPRAM Data Partitioning
    7.5.4 Memory Architecture Exploration
  7.6 Related Work
    7.6.1 Cache Conscious Data Layout
    7.6.2 SPRAM-Cache Data Partitioning
    7.6.3 Memory Architecture Exploration
  7.7 Conclusions
8 Conclusions
  8.1 Thesis Summary
  8.2 Future Work
    8.2.1 Standardization of Input and Output Parameters
    8.2.2 Impact of Platform Change on System Performance
    8.2.3 Impact of Application IP Library Rework on System Performance
    8.2.4 Impact of Semiconductor Library Rework on the System Performance
    8.2.5 Multiprocessor Architectures
Bibliography

List of Tables

1.1 Explanation of X-chart Steps
3.1 List of Symbols Used
3.2 Memory Architecture for the Experiments
3.3 Experimental Results
3.4 Results from Heuristic Placement (HP) and Genetic Placement (GP) on 4 Embedded Applications (VE = Voice Encoder, JP = JPEG Decoder, LLP = Levinson's Linear Predictor, 2D = 2D Wavelet Transform)
3.5 Comparative Ranking of Algorithms
4.1 Memory Architecture Parameters
4.2 Evaluation of Multi-Objective Cost Function
4.3 Memory Architecture Exploration
4.4 Non-dominant Points Comparison GA-SA
5.1 Memory Architectures Used for Data Layout
6.1 Memory Architectures Explored - Using DirPME Approach
6.2 Non-dominant Points Comparison LME2PME-DirPME
7.1 Input Parameters for Data Partitioning Algorithm
7.2 Data Layout Comparison
7.3 Data Layout for Different Cache Configurations

List of Figures

1.1 Architecture of an Embedded SoC
1.2 Embedded Application Development Flow
1.3 Memory Trends in SoC
1.4 Application Specific SoC Design Flow Illustration with X-chart
1.5 Mapping Chapters to X-chart Steps
2.1 Example DSP Memory Map
2.2 Cache-SPRAM Based On-Chip Memory Architecture
2.3 Genetic Algorithm Flow
3.1 Overview of Data Layout
3.2 Illustration of Parallel and Self Conflicts
3.3 Heuristic Algorithm for Data Layout
3.4 Relative Performance of the Genetic Algorithm w.r.t. Heuristic, for Varying Number of Generations
3.5 Comparison of Heuristic Data Layout Performance with GA Data Layout
4.1 DSP Processor Memory Architecture
4.2 Two-stage Approach to Memory Subsystem Optimization
4.3 Comparison of GA and SA Approaches for Memory Exploration
4.4 Vocoder: Non-dominated Points Comparison Between GA and SA
4.5 Vocoder: Memory Exploration (All Design Points Explored and Non-dominated Points)
4.6 MPEG: Memory Exploration (All Design Points Explored and Non-dominated Points)
4.7 JPEG: Memory Exploration (All Design Points Explored and Non-dominated Points)
4.8 DSL: Memory Exploration (All Design Points Explored and Non-dominated Points)
5.1 MODLEX: Multi Objective Data Layout EXploration Framework
5.2 Data Layout Exploration: MPEG Encoder
5.3 Data Layout Exploration: Voice Encoder
5.4 Data Layout Exploration: Multi-Channel DSL
5.5 Individual Optimizations vs Integrated
6.1 Memory Architecture Exploration
6.2 Memory Architecture Exploration - Integrated Approach
6.3 Logical to Physical Memory Exploration - Overview
6.4 Logical to Physical Memory Exploration - Method
6.5 GA Formulation of LME2PME
6.6 MAX: Memory Architecture Exploration Framework
6.7 GA Formulation of Physical Memory Exploration
6.8 Voice Encoder: Memory Architecture Exploration - Using LME2PME Approach
6.9 MPEG: Memory Architecture Exploration - Using LME2PME Approach
6.10 DSL: Memory Architecture Exploration - Using LME2PME Approach
6.11 Voice Encoder (3D view): Memory Architecture Exploration - Using DirPME Approach
6.12 Voice Encoder: Memory Architecture Exploration - Using DirPME Approach
6.13 MPEG Encoder: Memory Architecture Exploration - Using DirPME Approach
6.14 DSL: Memory Architecture Exploration - Using DirPME Approach
7.1 Target Memory Architecture
7.2 Memory Exploration Framework
7.3 Example: Temporal Relationship Graph
7.4 Heuristic Algorithm for Data Partitioning
7.5 Cache Conscious Data Layout
7.6 Heuristic Algorithm for Offset Computation
7.7 AAC: Performance for Different Hybrid Memory Architectures
7.8 MPEG: Performance for Different Hybrid Memory Architectures
7.9 JPEG: Performance for Different Hybrid Memory Architectures
7.10 AAC: Power Consumed for Different Hybrid Memory Architectures
7.11 MPEG: Power Consumed for Different Hybrid Memory Architectures
7.12 JPEG: Power Consumed for Different Hybrid Memory Architectures
7.13 AAC: Non-dominated Solutions
7.14 MPEG: Non-dominated Solutions
7.15 JPEG: Non-dominated Solutions

List of Publications from this Thesis

1. T.S. Rajesh Kumar, R. Govindarajan, and C.P. Ravikumar. On-chip Memory Architecture Exploration Framework for DSP Processor Based Embedded SoC. Submitted to the ACM Transactions on Embedded Computing Systems, May 2008.

2. T.S. Rajesh Kumar, R. Govindarajan, and C.P. Ravikumar. Memory Architecture Exploration Framework for Cache-based Embedded SoC. In Proceedings of the International Conference on VLSI Design, Jan 2008.

3. T.S. Rajesh Kumar, R. Govindarajan, and C.P. Ravikumar. MODLEX: A Multi-Objective Data Layout EXploration Framework for Embedded SoC. In Proceedings of the 12th Asia and South Pacific Design Automation Conference (ASP-DAC), Jan 2007.

4. T.S. Rajesh Kumar, R. Govindarajan, and C.P. Ravikumar. MAX: A Multi-Objective Memory Architecture Exploration Framework for Embedded SoC. In Proceedings of the International Conference on VLSI Design, Jan 2007.

5. T.S. Rajesh Kumar, R. Govindarajan, and C.P. Ravikumar. Embedded Tutorial on Multi-Processor Architectures for Embedded SoC. In Proceedings of the VLSI Design and Test, Aug 2003.

6. T.S. Rajesh Kumar, R. Govindarajan, and C.P. Ravikumar. Optimal Code and Data Layout for Embedded Systems. In Proceedings of the International Conference on VLSI Design, Jan 2003.

7. T.S. Rajesh Kumar, R. Govindarajan, and C.P. Ravikumar. Memory Exploration for Embedded Systems. In Proceedings of the VLSI Design and Test, Aug 2002.


Chapter 1
Introduction

1.1 Application Specific Systems

Today's VLSI technology allows us to integrate tens of processor cores on the same chip along with embedded memories, application-specific circuits, and interconnect infrastructure. As a result, it is possible to integrate an entire system onto a single chip. The single-chip phone, which has been introduced by several semiconductor vendors, is an example of such a system-on-chip; it includes the modem, radio transceiver, power management functionality, a multimedia engine and security features, all on the same chip. An embedded system is an application-specific system which is optimized to perform a single function or a small set of functions [70]. We distinguish this from a general-purpose system, which is software-programmable to perform multiple functions. A personal computer is an example of a general-purpose system; depending on the software we run on the computer, it can be useful for playing games, word processing, database operations, scientific computation, etc. On the other hand, a digital camera is an example of an embedded system, which can perform a limited set of functions such as taking pictures, organizing them, or transferring them to another device through a suitable I/O interface. Other examples of embedded systems include mobile phones, audio/video players, video-game consoles, set-top boxes, car infotainment systems, personal digital assistants, telephone central-office switches, and dedicated network routers and bridges.

Note that a large number of embedded systems are built for the consumer market. As a result, in order to be competitive, the cost of an embedded system cannot be very high. Yet consumers demand higher performance and more features from embedded system products. It is easy to appreciate this point if we compare the performance and feature set offered by mobile phones that cost Rs 5000 (or $100) today and those which cost the same a few years ago. We also see that a large number of embedded systems are being built for the mobile market. This trend is not surprising - the number of mobile phone subscribers increased from 500 million in the year 2000 to 2.6 billion in 2007 [7]. Because of such high volumes, embedded systems are extremely cost sensitive and their design demands careful silicon-area optimization. Since mobile devices use batteries as the main source of power, embedded systems must also be optimized for energy dissipation. Power, which represents the rate at which energy is consumed, must also be kept low to avoid heating and to improve reliability. In summary, the designer of an embedded system must simultaneously consider and optimize price, performance, energy, and power dissipation. Application-specific embedded systems designed today demand innovative methods to optimize these system cost functions [11, 19]. Many of today's embedded systems are based on system-on-chip platforms [16], which, in turn, consist of one or more embedded microcontrollers, digital signal processors (DSPs), application-specific circuits and read-only memory, all integrated into a single package. These blocks are available from vendors of intellectual property (IP) as hard cores or soft cores [42, 28]. A hard core, or hard IP block, is one where the circuit is available at a lower level of abstraction, such as the layout level [42, 28]; it is impossible to customize a hard IP to suit the requirements of the embedded system. As a result, there are limited opportunities for optimizing the cost functions by modifying the hard IP. For example, if some functionality included in the IP is not required in the present application, we cannot remove the function to save area. Soft IP refers to circuits which are available at a higher level of abstraction, such as the register-transfer level [28, 42]. It is possible to customize a soft IP for the specific application. The designer of an embedded SoC integrates the IP cores for processors, memories, and application-specific hardware to create the SoC. Figure 1.1 illustrates the architecture of an embedded system-on-chip (SoC).

As can be seen in the figure, there are four principal components in such an SoC.

1. An Analog Front End, which includes the analog/digital and digital/analog converters.

2. Programmable Components, which include microprocessors, microcontrollers, and DSPs. The number of embedded processors is increasing every year. An interesting statistic shows that of the nine billion processors manufactured in 2005, less than 2% were used for general-purpose computers; the other 8.8 billion went into embedded systems [13]. The microcontroller/microprocessor is useful for handling interrupts, house-keeping and performing timing-related functions. The DSP is useful for processing audio and video information, e.g., compression and decompression of audio and video streams. The application software is normally preloaded in memory and is not user-programmable, unlike in general-purpose processor-based systems.

3. Application-specific components, which include hardware accelerators for compute-intensive functions. Examples of hardware accelerators include digital image processors, which are useful in cameras.

4. The memory subsystem, which is described in the next section.

Figure 1.1: Architecture of an Embedded SoC

1.2 Memory Subsystem

1.2.1 On-chip Memory Organization

The memory architecture of an embedded processor core is complex and is custom designed to improve run-time performance and power consumption. In this section we describe only the memory architecture of the DSP processor, as this is the focus of the thesis. This is because the memory architecture of a DSP is more complex than that of a microcontroller (MCU), for the following reasons: (a) DSP applications are more data-dominated than the control-dominated software executed on an MCU. Memory bandwidth requirements for DSP applications range from 2 to 3 memory accesses per processor clock cycle. For an MCU, this figure is, at best, one memory access per cycle.

(b) It is critical in DSP applications to extract maximum performance from the memory subsystem in order to meet the real-time constraints of the embedded application. As a consequence, the DSP software for critical kernels is developed mostly as hand-optimized assembly code. In contrast, the software for an MCU is typically developed in high-level languages. The memory architecture of a DSP is unique since the DSP has multiple on-chip buses and multiple address generation units to service higher bandwidth needs. The on-chip memory of embedded processors can include (a) only a Level-1 cache (L1 cache) (e.g., [1]), (b) only scratch-pad RAM (SPRAM) (e.g., [75, 76]), or (c) a combination of L1 cache and SPRAM (e.g., [2, 77]).

1.2.2 Cache-based Memory Organization

Purely cache-based on-chip memory organization is generally not preferred by embedded system designers, as this organization cannot guarantee worst-case execution time constraints. This is because the access time in a cache-based system can vary depending on whether the access results in a cache miss or a hit [33].

As a consequence, the run-time performance of cache-based memory subsystems varies based on the execution path of the application and is data dependent. However, a cache architecture is advantageous in the sense that it reduces the programmer's responsibility in terms of data placement to achieve better memory access times. Further, the movement of data from off-chip memory to the cache is transparent. In [12], the authors present a comparison study of SPRAM and cache for embedded applications and conclude that SPRAM has 34% smaller area and 40% lower power consumption than a cache of the same capacity. There is published literature on estimating the worst-case execution time [81] and finding an upper bound on run-time [78] for cache-based embedded systems. Hence it was argued that for real-time embedded systems which require stringent worst-case performance guarantees, a purely cache-based on-chip organization is not suitable.

1.2.3 Scratch Pad Memory-based Organization

On-chip memory organization based only on scratch-pad memory ensures single-cycle access times and worst-case execution guarantees for data that resides in Scratch-Pad RAM (SPRAM). However, it is the responsibility of the programmer to identify the data sections that should be placed in SPRAM, or to place code in the program to appropriately move data from off-chip memory to SPRAM. A DSP core can include the following types of memories: static RAM (SRAM), ROM, and/or dynamic RAM (DRAM). The scratch-pad memory in the DSP core is organized into multiple memory banks to facilitate multiple simultaneous data accesses. A memory bank can be organized as a single-access RAM (SARAM) or a dual-access RAM (DARAM) to provide single or dual access to the memory bank in a single cycle. Also, the on-chip memory banks can be of different sizes. Smaller memory banks consume less power per access than larger memories. The embedded system may also be interfaced to off-chip memory, which can include SRAM and DRAM. A purely SPRAM-based on-chip organization is suitable only for low- to medium-complexity embedded applications.

SPRAM-based systems do not use the on-chip RAM efficiently, as they require the entire set of data sections that are currently accessed to be placed exclusively in the SPRAM. It is possible to accommodate different data sections in SPRAM at different points in execution time by moving data dynamically between off-chip memory and SPRAM, but this results in a certain run-time overhead and an increase in code size. For medium to large applications, which have a large number of critical data variables, a large amount of on-chip RAM becomes necessary to meet the real-time performance constraints. Hence, for such applications, pure SPRAM architectures are not preferred.

1.3 Data Layout

To efficiently use the on-chip memory, critical data variables of the application need to be identified and mapped to the on-chip RAM. The memory architecture may contain both on-chip cache and SPRAM. In such a case, it is important to partition the data sections and assign them appropriately to on-chip cache and SPRAM such that the memory performance of the application is optimized. Further, among the data sections assigned to on-chip cache and SPRAM, a proper placement of the data sections in the cache and SPRAM is required to ensure that cache misses are reduced and the multiple memory banks of the SPRAM and the dual-ported SPRAMs are efficiently utilized. Identifying such a placement of data sections, referred to as the data layout problem, is a complex and critical step [10, 53]. This task is typically performed manually, as the compiler cannot assume that the code under compilation represents the entire system [10]. The application program in a modern embedded system is complex since it must support a variety of device interfaces such as networking interfaces, credit card readers, USB interfaces, parallel ports, and so on. The application also has many multimedia components like MP3, AAC and MIDI [8]. This necessitates an IP reuse methodology [74], where software modules developed and optimized independently by different vendors are integrated. Figure 1.2 explains the typical flow in embedded application development. This integration is a very challenging job with multiple objectives: (a) it has to be done under tight time-to-market constraints, (b) it has to be repeated for different variants of SoCs with different custom memory architectures, and (c) it has to be performed in such a way that the embedded application is optimized for performance, power consumption and cost.

Figure 1.2: Embedded Application Development Flow

Since the IPs/modules are independently optimized, the integrator is under pressure to deliver the complete product with the expectation that each component performs at the same level as it did in isolation. This is a major challenge. When a module is optimized independently, the developer has all the resources of the SoC (MIPS and memory) to optimize the module. When these modules are integrated at the system level, the system resources are shared among the modules. So the application integrator needs to know the MIPS and memory requirements of the modules unambiguously to be able to allocate the shared resources to critical needs [74]. Usually, the modules' memory requirements are given only at a high level. To be able to optimize the whole application/system, the integrator needs detailed memory analysis at the module level, e.g., which data buffers need to be placed in dual-ported memories and which data buffers should not be placed in the same memory bank; this information is usually not available. Moreover, the critical code is usually written in low-level assembly language to meet real-time constraints and/or due to legacy reasons.

Because of the above-mentioned reasons, application integration and optimization, i.e., analyzing the application and mapping software modules in order to obtain optimal cost and performance, takes a significant amount of time (approximately 1-2 man-months). Currently, in most SoC designs, data layout is also performed manually, and this has two major problems: (1) the development time is significant and not acceptable for current-day time-to-market requirements, and (2) the quality of the solution varies with the designer's expertise.

1.4 Memory Architecture Exploration

In modern embedded systems, the area and power consumed by the memory subsystem is up to 10 times that of the data path, making memory a critical component of the design [11]. Further, the memory subsystem constitutes a large part (typically up to 70%) of the silicon area of a current-day SoC, and it is expected to go up to 94% in 2014, as shown in Figure 1.3 [6]. The main reason for this is that embedded memory has a relatively small per-area design cost in terms of man-power, time-to-market and power consumption [60]. Hence memory plays an important role in the design of embedded SoCs. Further, the memory architecture strongly influences the cost, performance and power dissipation of an embedded SoC.

Figure 1.3: Memory Trends in SoC

As discussed earlier, the on-chip memory organization of embedded processors varies widely from one SoC to another, depending on the application and market segment for which the SoC is deployed. There is a wide variety of choices available to embedded designers, starting from simple on-chip SPRAM-based architectures to more complex cache-SPRAM based hybrid architectures. To begin with, the system designer needs to decide whether the SoC requires a cache and what the right size of on-chip RAM is. Once the high-level memory organization is decided, the finer parameters need to be defined to complete the memory architecture definition. For an on-chip SPRAM-based architecture, the parameters, namely, size, latency, number of memory banks, number of read/write ports per memory bank and connectivity, collectively define the memory organization and strongly influence the performance, cost, and power consumption.

For a cache-based on-chip RAM, the finer parameters are the cache size, associativity, line size, miss latency and write policy. Due to its large impact on system performance parameters, the memory architecture is often hand-crafted by the designer based on the targeted applications. However, with the combination of on-chip SPRAM and cache, the memory design space is too large for manual analysis [31]. Also, with the projected growth in the complexity of embedded systems and the vast design space of memory architectures, hand optimization of the memory architecture will soon become impossible. This warrants an automated framework which can explore the memory architecture design space and identify interesting design points that are optimal from a performance, power consumption and VLSI area (and hence cost) perspective. As the memory architecture design space itself is vast, a brute-force design space exploration tool may take a large amount of computation time and hence is unlikely to be useful in meeting tight time-to-market constraints. Further, for each given memory architecture, there are several possible data section layouts which are optimal in terms of performance and power. This further compounds the memory architecture exploration problem.
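To make the scale of this space concrete, the small Python sketch below enumerates a purely hypothetical grid of logical memory architecture parameters; the specific values are illustrative only and are not the parameter ranges used in this thesis. Even this toy grid yields thousands of candidate architectures, each of which in turn admits many data layouts.

from itertools import product

# Hypothetical parameter choices for a logical memory architecture.
# The actual ranges explored in this thesis are defined in Chapter 4.
num_banks     = [1, 2, 4, 8, 16]       # number of on-chip SPRAM banks
bank_size_kb  = [2, 4, 8, 16, 32]      # size of each bank (KB)
bank_ports    = [1, 2]                 # 1 = SARAM, 2 = DARAM
cache_size_kb = [0, 4, 8, 16, 32]      # 0 means no on-chip cache
associativity = [1, 2, 4]
line_size     = [16, 32, 64]           # cache line size (bytes)

architectures = list(product(num_banks, bank_size_kb, bank_ports,
                             cache_size_kb, associativity, line_size))
print(len(architectures), "logical memory architectures in this toy grid")
# Each architecture further admits tens of thousands of data layouts,
# so the joint design space is far too large for manual analysis.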

1.5 Embedded System Design Flow

In this section, we present our view of the embedded system design flow to set the context for our work. For this purpose, we introduce the notion of the X-chart, which is inspired by the well-known Y-chart introduced by Gajski to capture the process of VLSI system design [29]. In a Y-chart, the three levels of design abstraction form the three dimensions of the figure Y; these are (a) design behavior, (b) design structure and (c) physical aspects of the design. A design flow starts from a behavioral specification, which is then mapped to a structure, which in turn is mapped to a physical realization. We can view the process of transforming a behavior into a physical realization as a successive refinement process. Optimization of design metrics such as area, performance, and power is the goal of each of these refinement steps. The design process may spiral from the behavioral axis to the structural axis to the physical design axis in multiple stepwise refinement steps.

We introduce the notion of the X-chart, which is illustrated in Figure 1.4. The X-chart representation has four axes: (a) Behavior, (b) Logical Architecture, (c) Physical Architecture and (d) Software Data Layout. The logical memory architecture (LMA) defines the embedded cache size, cache associativity, cache block size, size of the scratch-pad memory, number of memory banks, and the number of ports. The physical memory architecture (PMA) is an actual realization of an LMA using the memory library components provided by the semiconductor vendor. The fourth dimension, namely Software Data Layout, is necessary for capturing the process of embedded system design. We have identified several steps in the embedded system design flow and marked them with circled numbers. Table 1.1 explains the individual steps in the X-chart representation. The design of an embedded system begins with a behavioral description (point (1) in Figure 1.4, shown on the behavioral axis). Today, there are many languages available to capture the system behavior, e.g., SystemVerilog [5], SystemC [4], and so on. Hardware-software partitioning is performed to identify which functionalities of the description are best performed in hardware and which are best implemented in software. Hardware implementation is cost-intensive, but improves the performance.

We show point (2) on the LMA axis, since hardware-software partitioning adds a considerable amount of detail to decide the LMA parameters. The next step is to select hardware and software IP blocks. Depending on the time schedule (for designing the embedded system) and the cost constraint, the designer may wish to use readily available IP blocks from a vendor or implement a custom version of the IP. The target platform is then defined to implement the embedded system. As mentioned earlier, a platform includes one or more processors, memory, and hardware accelerators for specific functions. Platforms also come with software tools such as compilers and simulators, so that the development cycle can be accelerated. In other words, one does not need to wait for the hardware implementation to complete before trying out the software. We show point (4) on the software data layout axis, since the selection of a platform defines many aspects of the software implementation. Software partitioning is now performed to decide which software IP blocks are executed on which processor. This completes one spiral cycle in the design life cycle of the embedded system. To recapitulate, the following components are defined at the end of the first cycle: (a) the platform on which the embedded system will be built, (b) the hardware and software IP blocks that are selected for the target application, and (c) the assignment of software IP blocks to the target processors where the software will be executed. We show point (5) on the behavioral axis, since the next spiral cycle will begin from here. The next step is to define the logical memory architecture for the memory subsystem. Guided by considerations such as cost, performance, and power, the designer must decide basic architectural parameters of the memory sub-system, such as whether or not to provide cache memory, how many memory banks are provided, whether or not dual-ported memories are necessary for guaranteeing performance, etc. The next step is to perform design space exploration in the logical space. Each logical memory architecture is also characterized by the selection of values for parameters such as cache size, cache associativity, cache block size, etc. There is often a cost/performance tradeoff between two solutions in the architectural space. Hence the designer must consider different Pareto-optimal solutions that exhibit a cost/performance tradeoff. This results in point (6) in Figure 1.4.

Figure 1.4: Application Specific SoC Design Flow Illustration with X-chart

A logical memory architecture must be translated into a physical implementation by selecting components from the semiconductor vendor's memory library. There are multiple realizations, i.e., physical memory architectures (PMAs), for the same LMA. This involves choosing the appropriate modules based on the process technology selected in step (7) and the corresponding semiconductor vendor memory library. These represent tradeoffs in terms of power consumed and VLSI area. This leads to point (7) in Figure 1.4. The mapping of an LMA to a PMA is similar to the technology mapping step in logic synthesis [53]. Data Layout (DL) is the subsequent step in the design life cycle. During this step, the placement of data variables is determined, considering every possible implementation of the physical memory architecture.

Table 1.1: Explanation of X-chart Steps

Once again, there are multiple solutions for data layout for a given PMA. These solutions may exhibit tradeoffs in power, performance, and area. In this thesis, we use the phrase Physical Memory Architecture Exploration (PMAE) to refer to the search for Pareto-optimal LMA/PMA/DL solutions. We capture this in the form of the equation that follows.

PMAE = Logical Memory Architecture Exploration + Memory Allocation Exploration + Data Layout Exploration    (1.1)

In this thesis, the focus is on memory sub-system optimization, constituted by steps (5) to (9) in Figure 1.4. The size of the solution space increases manifold during each step of the memory exploration. If N1 optimal solutions (logical memory architectures) are identified during memory sub-system definition, memory allocation must be explored for each one of them, which can potentially result in N1 x N2 solutions during memory allocation exploration. Similarly, data layout must be performed for each of the N1 x N2 solutions from the memory allocation exploration step, and we may in general obtain N1 x N2 x N3 Pareto-optimal points in the PMAE solution space. As mentioned earlier, this problem can result in exploring a combinatorially exploding design space.

1.6 Contributions

First, we propose methods for data layout optimization, assuming a fixed memory architecture for a DSP-based embedded system. Data layout is a critical component in the embedded design cycle and decides the final configuration of the embedded system. Data layout happens at the final stage in the life cycle of an embedded system, as illustrated in the X-chart of Figure 1.4, and it forms the foundation for memory subsystem optimization. Hence, we first formulate data section layout as an Integer Linear Programming (ILP) problem. The proposed ILP formulation can handle: (i) partitioning of data between on-chip and off-chip memory, (ii) handling simultaneously accessed data variables (parallel conflicts) in different on-chip memory banks, (iii) placing data variables that are accessed concurrently (self conflicts) in dual-access RAMs, (iv) overlay of data sections with non-overlapping lifetimes, and (v) swapping of data sections from/to off-chip memory. An important contribution of this work is the development of a simple unified ILP formulation to handle all the above-mentioned optimizations. The ILP-based approach is very effective for many moderately complex applications and delivers optimal results. However, as the application complexity increases, the execution time of the ILP method increases drastically, making it unsuitable for large applications and for situations (such as memory architecture exploration) where the data layout needs to be solved repeatedly.
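As a hedged illustration of what such a formulation looks like, the sketch below models only optimization (i), the on-chip/off-chip partitioning, as a simple 0/1 ILP using the open-source PuLP package; the section names, sizes, access counts and stall penalty are invented for the example, and the full formulation in Chapter 3 additionally models memory banks, parallel and self conflicts, overlays and swapping.

# Sketch only: captures optimization (i), on-chip vs. off-chip partitioning.
# Requires the PuLP package (pip install pulp).
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum

# Hypothetical data sections: (size in bytes, access count, per-access
# stall penalty when the section is placed off-chip).
sections = {"twiddle": (2048, 50000, 6), "state": (4096, 20000, 6),
            "scratch": (8192, 5000, 6), "tables": (16384, 800, 6)}
onchip_capacity = 8192  # bytes of SPRAM

prob = LpProblem("data_partitioning", LpMinimize)
x = {s: LpVariable(f"onchip_{s}", cat=LpBinary) for s in sections}

# Objective: stalls incurred by sections left in off-chip memory.
prob += lpSum(acc * pen * (1 - x[s]) for s, (sz, acc, pen) in sections.items())
# Capacity constraint on the on-chip SPRAM.
prob += lpSum(sz * x[s] for s, (sz, acc, pen) in sections.items()) <= onchip_capacity

prob.solve()
for s in sections:
    print(s, "-> on-chip" if x[s].value() == 1 else "-> off-chip")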

Hence we looked at developing faster methods to solve this problem. We propose a heuristic algorithm that maps the data sections to the given memory architecture and reduces the number of memory access conflicts resulting from both self conflicts and parallel conflicts. Finally, we also formulate the same problem as a Genetic Algorithm (GA) and compare the results of the heuristic with the GA. We find that the heuristic algorithm performs within 5% of the GA's results, with the GA performing better. However, the heuristic algorithm's run-time is an order of magnitude faster than the GA's run-time, making it suitable for use in memory architecture exploration.

Next, we address logical memory architecture exploration for DSP-based embedded systems (steps (5) to (7) in the X-chart of Figure 1.4). The input is a set of high-level memory parameters, such as the number of memory banks, the size of each memory bank, the number of ports, etc., that define the memory sub-system. The goal of the exploration is to find an optimal on-chip memory organization that can run the given applications with a minimum number of memory stalls. When an LMA is generated, it must be evaluated for cost (in terms of VLSI area) and performance, but these depend on the data layout. Hence, to evaluate a memory architecture properly, we must first generate an efficient data layout; for this we use our fast heuristic method. We have implemented the memory architecture exploration problem as a two-level hierarchical search, with architectural exploration at the outer level and data-layout exploration at the inner level. A multi-objective GA and a Simulated Annealing (SA) algorithm are used as alternative search mechanisms for the architectural exploration problem. As the memory architecture exploration framework considers both performance and cost (VLSI area) objectives, we use the Pareto-optimality constraint proposed in [25] to identify design points that are interesting from one or the other objective. The proposed memory exploration framework is fully automatic and flexible. The framework is also scalable, and additional objectives like power consumption can be added easily. We have used four different applications from the multimedia and communication domains for our experiments and found 100-200 Pareto-optimal design choices (memory architectures) for each of the applications.
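The shape of this two-level search can be illustrated with the following toy sketch; the architecture encoding, the stand-in layout_cost and area_cost models, and all numbers are hypothetical placeholders for the real data layout heuristic and area models used in the framework, and the selection here is a plain elitist loop rather than the multi-objective GA/SA actually employed.

import random

random.seed(0)

def random_arch():
    # Randomly pick a toy logical memory architecture.
    return {"banks": random.choice([1, 2, 4, 8]),
            "bank_kb": random.choice([2, 4, 8, 16, 32]),
            "daram": random.random() < 0.5}

def layout_cost(arch):
    # Stand-in for the inner data layout step: more banks and dual-ported
    # RAM resolve more (fictitious) access conflicts, while insufficient
    # total capacity forces costly off-chip placements.
    conflicts = 1000.0 / (arch["banks"] * (2 if arch["daram"] else 1))
    spill = max(0, 64 - arch["banks"] * arch["bank_kb"]) * 10
    return conflicts + spill          # proxy for memory stall cycles

def area_cost(arch):
    per_bank = arch["bank_kb"] * (1.6 if arch["daram"] else 1.0)
    return arch["banks"] * per_bank   # proxy for VLSI area

# Outer architectural search: keep the better half, refill with new samples.
population = [random_arch() for _ in range(20)]
for _ in range(50):
    population.sort(key=lambda a: layout_cost(a) + area_cost(a))
    population = population[:10] + [random_arch() for _ in range(10)]

best = min(population, key=lambda a: layout_cost(a) + area_cost(a))
print("best toy architecture:", best)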

Next, we explore the data layout design space for a given physical memory architecture in order to optimize the performance and power consumption of the memory subsystem. Note that data layout exploration forms steps (8) to (9) in the X-chart representation. We propose MODLEX, a Multi Objective Data Layout EXploration framework based on a Genetic Algorithm, that explores the data layout design space for a given logical and physical memory architecture and obtains a list of Pareto-optimal data layout solutions from the performance and power perspectives. Most of the existing work in the literature assumes that performance and power are non-conflicting objectives with respect to data layout. However, we show that a significant trade-off (up to 70%) is possible between power and performance.

Our next step is physical memory architecture exploration (steps (5) to (8) in Figure 1.4). We propose two different methods for physical memory exploration. The first approach is an extension of the Logical Memory Architecture Exploration (LMAE) method described in Chapter 4 and represented in the X-chart by steps (5) to (6). Physical memory exploration is performed by taking the output of LMAE and, for each of the Pareto-optimal logical memory architectures, performing a memory allocation exploration (steps (6) to (7)) with the objective of optimizing power and area in the physical memory space. Note that the data layout is fixed at the logical memory exploration stage itself, and hence the performance does not change at this step. The memory allocation exploration is formulated as a multi-objective Genetic search to explore the design space with power and area as objectives. We refer to this approach as LME2PME. The second approach is a direct and integrated approach for Physical Memory Exploration, which we refer to as DirPME. This approach corresponds to a direct move from point (5) to point (8) in Figure 1.4. In this approach, we integrate three critical components: (i) logical memory architecture exploration, (ii) memory allocation exploration, and (iii) data layout exploration. The core engine of the memory architecture exploration framework is formulated as a multi-objective Non-Dominated Sorting Genetic Algorithm (NSGA) [25]. For the data layout problem, which needs to be solved for thousands of memory architectures, we use our fast and efficient heuristic data layout method.
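Since each of these frameworks reports sets of non-dominated solutions, a generic sketch of Pareto filtering is shown below; the naive O(n^2) filter and the made-up (area, power, memory-stall) tuples are only illustrative and are not the NSGA-based ranking used in the framework.

def dominates(a, b):
    # True if a is no worse than b in every objective and strictly better
    # in at least one (all objectives are to be minimized).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    # Naive O(n^2) non-dominated filter.
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Hypothetical (area, power, memory-stall) tuples for candidate designs.
candidates = [(10.0, 5.0, 900), (12.0, 4.0, 700), (9.0, 6.5, 1200),
              (11.0, 4.5, 700), (13.0, 6.0, 1500)]
print(pareto_front(candidates))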

Our integrated memory architecture exploration framework searches the design space by exploring thousands of memory architectures and lists 200-300 Pareto-optimal design solutions that are interesting from an area, power, and performance viewpoint.

Next, we address the memory architecture exploration problem for hybrid memory architectures that have a combination of SPRAM and cache. For such a hybrid architecture, a critical step is to partition the data between on-chip SPRAM and external RAM. Data partitioning aims at improving the overall memory sub-system performance by placing in SPRAM the data that have the following characteristics: (a) high access frequency, (b) a lifetime that overlaps with those of many other data, and (c) poor spatial access characteristics. Placing all data that exhibit the above characteristics in SPRAM reduces the number of potentially conflicting data in the cache, reducing cache misses and leading to an overall memory sub-system performance improvement. But typically the SPRAM size is small, and it is not possible to accommodate all the data identified for SPRAM placement. Hence, even after data partitioning, there will be a significant number of potentially conflicting data sections that need to be placed in external RAM. These data need to be placed such that the conflict misses they cause among themselves are reduced. Cache-conscious data layout addresses this problem and aims at placing data in external RAM (off-chip RAM) with the objective of reducing cache misses. This is achieved by an efficient data layout heuristic that is independent of instruction caches, optimizes run-time and keeps the off-chip memory address space usage in check. We extend the above approach and perform hybrid memory architecture exploration with the objective of optimizing run-time performance, power consumption and area. The salient features of our work are as follows.

- We provide a unified framework for logical memory exploration, memory allocation exploration, and data layout.
- Our work addresses power, performance, and area optimization in an integrated framework.

- Our work addresses memory architecture exploration for a hybrid memory architecture involving on-chip SPRAM and cache.
- Our work does not rely on source-code optimization for power and performance optimization. Hence it is suitable for platform-based/IP-based system design.

1.7 Thesis Overview

The rest of the thesis is organized as follows. In the following chapter, we provide the background material for the thesis. We begin by explaining the memory architecture of a DSP and an MCU. We summarize the software optimizations used in the literature to improve memory access efficiency. We explain cache-based embedded SoCs and their challenges with respect to predictability. Finally, we introduce the concepts of a Genetic Algorithm (GA) for optimization, since GA is used in our optimization framework in later chapters. In Chapter 3, we propose different methods to address the data layout problem for on-chip SPRAM-based memory architectures. First, we propose an Integer Linear Programming (ILP) based approach. Further, we also propose a fast and efficient heuristic for the data layout problem. Finally, we formulate the data layout problem as a Genetic Algorithm (GA). In Chapter 4, we present a multi-objective memory architecture exploration framework to search the memory design space for the on-chip memory architecture with performance and memory cost as the two objectives. We address the memory architecture exploration problem at the logical level. The multi-objective Data Layout Exploration problem is addressed in Chapter 5. Here, the data layout design space is explored for a given logical memory architecture and application with respect to performance and power. In Chapter 6, we address the memory architecture exploration problem at the physical memory level. In this chapter, we propose two different approaches for addressing the physical memory architecture exploration.