Focused Random Validation with BURST

Kent Dickey, President, ProValid, LLC
January 2005
This white paper describes ProValid's leading-edge chip validation tool, BURST. BURST performs automated validation of FPGA- and ASIC-based systems. It can be used for pre-silicon or post-silicon testing to increase overall system quality. The first section summarizes current verification technology and lists some shortcomings of today's popular verification approaches. The second section describes BURST and how it addresses these issues.

1. Current Industry Verification Technology

Computer chip designs strive for a defect-free product. Defects in a shipping product can be extremely costly due to product returns, liability, and lost customers. For most complex designs, more resources are spent ensuring the design is correct than are spent creating the design. Testing for system correctness is critical for product success.

The standard functional checks performed are called verification. Verification checks that the chip operates according to its specification. ProValid defines a stronger standard for correctness called validation. Validation ensures that the desired chip or system works as intended. For example, the chip specification may be incorrect relative to a larger system specification, or the chip specification might be incomplete and omit behavior necessary to avoid deadlocks or to order transactions correctly. Validation is a superset of verification. A major issue is that many companies perform only verification tasks, which reduces coverage and increases the chances of missing critical bugs.

Pre-silicon testing attempts to discover most chip bugs using simulators before hardware is created. Using RTL simulators is an important step for ASIC and FPGA designs, and debugging simulations is much easier since most internal chip signals can be viewed. However, chip complexity creates some bugs too complex to find in the limited simulation time available. Experience shows that some defects, such as timing races, hold-time problems, or a large class of chip and board manufacturing issues, are not easy to find using common simulators. Post-silicon testing then attempts to catch these remaining bugs by running tests on the completed system at operational speed. Once this testing completes, the product is shipped.

Standard Verification Techniques

A widely used approach to verification is to run hand-written directed tests. These tests are usually described by designers in a large test list and then crafted by a verification group to ensure the listed functionality works as specified. Writing these tests is tedious and slow and requires a large team to finish them on schedule. This approach has historically provided the lowest coverage and allows many complex bugs to slip through testing.

As an improvement, random test tools are now commonly used for chip verification. A random test tool uses random numbers to guide test-case generation, allowing good coverage of targeted functionality. (These tools generally use pseudo-random numbers generated by a formula, but the term random is commonly used to mean pseudo-random.) Existing random test tools help find many chip bugs and are a standard part of most test plans. However, many of them can still miss serious and critical defects due to tool limitations. Many bugs are missed because of complex interactions among components that are not well exercised by simple random tests. The random tool may also be too unfocused to be an efficient exerciser of the chip, wasting time on uninteresting variations of what has already been tested.

An example of a simplistic random generator is to test a CPU by generating random 32-bit values, placing them in memory, and then executing them. Chances are that almost all tests will end after a few instructions with an illegal instruction or a branch into a bad memory address. This is an inefficient testing strategy, since billions of tests would need to be run to have a reasonable chance of exercising even basic functionality. Although no actual tool would be so simplistic, many tools fall into the same problem: coverage hits a limit after a relatively small number of tests, and practical time limits prevent these tools from finding further bugs.

A related issue with random test generation is a simple rule of probability: if a particular CPU instruction has a 10% chance of being generated, the chance of generating nine in a row is one in a billion. This makes simplistic random generators unlikely to fill chip queues.
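As a rough illustration of these two points, the small C program below works through the arithmetic. The 10% per-instruction probability comes from the example above; the 5% fraction of random 32-bit words that decode to a survivable instruction is an assumed, purely illustrative number.

```c
/* Illustrative arithmetic only; not part of BURST. Shows why a purely
 * uniform generator rarely produces the long, repetitive sequences
 * needed to fill chip queues, and why raw random 32-bit "instructions"
 * die almost immediately. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* If one instruction type is picked with probability 0.10, the
     * chance of drawing it nine times in a row is 0.10^9. */
    double p_same = 0.10;
    printf("P(nine identical picks in a row) = %.1e\n", pow(p_same, 9));

    /* Assume, purely for illustration, that only 5% of random 32-bit
     * words decode to a legal, non-trapping, non-branching instruction.
     * The expected number of instructions executed before the first bad
     * one is then 1 / (1 - p_ok), i.e. barely more than one. */
    double p_ok = 0.05;
    printf("Expected instructions before the test dies = %.2f\n",
           1.0 / (1.0 - p_ok));
    return 0;
}
```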
Another common approach is template-based random testing. When testing a CPU, the tool uses short sequences of two to eight instructions predetermined in a table, and the random generator simply splices multiple templates together. Coverage is now limited by the templates: if a bug requires an instruction sequence not in a template, it is impossible to hit. These generators can easily produce nine of the same instruction in a row, which addresses some of the simplistic-generator issues, but the template limitations create new coverage holes. These generators tend to reach their maximum coverage fairly quickly and then cannot find further bugs.
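A minimal sketch of the splicing idea is shown below, using a made-up three-entry template table; the instruction mnemonics are placeholders. Every generated test is a concatenation of these fragments, so any sequence not built from them is unreachable.

```c
/* A sketch of a template-based generator using a made-up template table;
 * the mnemonics are placeholders. Every generated test is a splice of
 * these fragments, so any sequence not built from them is unreachable. */
#include <stdio.h>
#include <stdlib.h>

static const char *templates[][3] = {
    { "load  r1, [r2]",   "add r3, r1, r1", NULL },
    { "store r4, [r5]",   "store r4, [r5]", NULL },
    { "mul   r6, r7, r8", "sub r6, r6, r1", NULL },
};

int main(void)
{
    int ntemplates = sizeof templates / sizeof templates[0];
    srand(2005);

    /* Build one test by splicing four randomly chosen templates. */
    for (int i = 0; i < 4; i++) {
        int t = rand() % ntemplates;
        for (int j = 0; templates[t][j] != NULL; j++)
            printf("%s\n", templates[t][j]);
    }
    return 0;
}
```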
Another drawback of some random test tools is managing knob files, where each knob controls the probability of some aspect of the test. Knobs are a way to help guide the random testing tool: to test a particular area, changing the knobs can help create more targeted tests. Unfortunately, knob interactions can be complex and poorly documented in the tool, and changing the knobs without this understanding may create holes in the test coverage rather than test new areas. Engineering resources are required to craft the knob files, generate the tests, and manage the knob files for regression testing. It would be better for the user if they did not have to fiddle with the knobs at all.
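The sketch below shows the basic mechanism of a knob file: each knob is a weight on an instruction class (the knob names and weights here are hypothetical). Raising one weight targets that area, but because the weights interact, a careless change can starve other classes and open coverage holes.

```c
/* A sketch of knob-driven generation with hypothetical knob names.
 * Each knob is a weight; raising one biases the generator toward that
 * instruction class, but getting the interactions between weights wrong
 * is how coverage holes appear. */
#include <stdio.h>
#include <stdlib.h>

struct knob { const char *name; int weight; };

static struct knob knobs[] = {
    { "alu_ops",  50 },
    { "loads",    25 },
    { "stores",   20 },
    { "branches",  5 },
};

static const char *pick_class(void)
{
    int nknobs = sizeof knobs / sizeof knobs[0];
    int total = 0;
    for (int i = 0; i < nknobs; i++)
        total += knobs[i].weight;

    /* Weighted random selection over the knob table. */
    int r = rand() % total;
    for (int i = 0; i < nknobs; i++) {
        if (r < knobs[i].weight)
            return knobs[i].name;
        r -= knobs[i].weight;
    }
    return knobs[0].name;   /* not reached when weights sum to total */
}

int main(void)
{
    srand(2005);
    for (int i = 0; i < 10; i++)
        printf("instruction class: %s\n", pick_class());
    return 0;
}
```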
Pre-Silicon and Post-Silicon Validation

The earlier a bug is found, the easier and faster it is to fix. Finding a design flaw soon after the RTL (logic source code) is written leads to extremely fast fixes at very low cost. Pre-silicon simulation therefore focuses on giving quick feedback on simple test cases. Random test generators are often used, but since debug time is so critical, they usually generate fairly simple tests so that failures can be debugged quickly.

Once chips and boards are manufactured, post-silicon testing begins. Many sources of bugs other than logic flaws are possible, so stressful testing of the final system configuration is critical to ensure product success. Post-silicon testing ensures the system meets reliability and manufacturing quality goals and is a necessary final check on all pre-silicon system assumptions.

At a minimum, post-silicon testing runs applications in various system configurations and checks that the system operates correctly. For computers or embedded systems, this often means booting an OS and running the expected applications with peripherals attached. This testing will often find bugs, but the overall system coverage is limited. Some hardware failures can be masked by the layers of software, making bug detection very difficult. And since the application code is relatively static, once basic testing is complete, simply running longer tests is less likely to find more bugs. This approach also leaves the product vulnerable to chip bugs exposed by future application updates. A post-silicon test plan should therefore include testing beyond running the expected applications, so that hardware defects are discovered before a future application update exposes them. Debugging complex applications is a difficult and time-consuming process; it is much more efficient to do chip validation with a dedicated testing tool.

System-Level Testing

Many verification tools address just one area at a time: there are floating-point tests, network packet tests, CPU integer tests, memory pattern tests, and so on. Although each test can be effective at finding some bugs, there are bugs at the intersection of these areas. ProValid has discovered many bugs where a chip flaw is revealed only when multiple chips in the system are fully exercised simultaneously.

To find this class of bug, a random test tool must get all components in the system doing work simultaneously. This means the CPU, memory, peripherals, and all buses need random but coordinated traffic at the same time. Just running a memory test while running a network packet test is not sufficient. For example, a failure may only occur if the CPU and network device access the same memory locations, which will not happen when running independent tests. The tests need to interact to find the subtle ordering bugs, deadlocks, or queue-full flaws.

To find failures caused by manufacturing process variations, temperature changes, or excessive noise on buses or power supplies, it is important that testing exercise all of the system. To generate maximum noise on the power supplies and grounds, all chips need to be busy. Data patterns also need to be carefully selected to increase the noise, since again simple random values are not the worst-case patterns for generating noise.

2. Validating Systems Using BURST

ProValid has developed an industry-leading chip validation tool named BURST. It addresses the limitations of current verification practices and provides greater overall system coverage. BURST (Bug-finding Using Random Stress Tests) applies advanced validation principles to generating effective random pre-silicon and post-silicon tests. BURST excels at system validation. Teams implementing a verification test plan often focus at too low a level by closely following a chip specification when creating tests. Validation has a broader focus and creates tests based on how the system should behave.

2.1. BURST General Operation

BURST generates random test sequences which stress the whole system. For post-silicon testing, BURST runs completely on the system to be tested and is booted like an OS. For pre-silicon testing, BURST runs on Linux (or any Unix-like system) as a normal application: it prepares simulation stimulus files to pre-load system state and then starts the simulator. When the simulator ends, the output files are read back in and BURST checks the results. BURST is a self-checking tool which verifies its own correctness by calculating the expected final system state. The same code for test generation and checking is used for both pre-silicon and post-silicon testing.
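The pre-silicon loop can be pictured with the sketch below. The file names, formats, and the ./run_rtl_sim command are hypothetical stand-ins, and the prediction here is deliberately trivial (pre-loaded locations read back unchanged); a real self-checking tool such as BURST computes the expected result of every generated transaction.

```c
/* A sketch of the pre-silicon flow; the file names, formats, and the
 * ./run_rtl_sim command are hypothetical stand-ins. The prediction here
 * is deliberately trivial (pre-loaded locations read back unchanged);
 * a real self-checking tool computes the expected result of every
 * generated transaction. */
#include <stdio.h>
#include <stdlib.h>

/* Write random memory state for the simulator to pre-load, and record
 * the values this sketch expects to read back afterwards. */
static void generate_stimulus(const char *path, unsigned expected[], int n)
{
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); exit(1); }
    for (int i = 0; i < n; i++) {
        expected[i] = (unsigned)rand();
        fprintf(f, "mem[%d] = 0x%08x\n", i, expected[i]);
    }
    fclose(f);
}

/* Read the simulator's dumped state and compare it to the prediction. */
static int check_results(const char *path, const unsigned expected[], int n)
{
    FILE *f = fopen(path, "r");
    if (!f) { perror(path); return 0; }
    int index, pass = 1;
    unsigned value;
    while (fscanf(f, "mem[%d] = 0x%x\n", &index, &value) == 2) {
        if (index >= 0 && index < n && value != expected[index]) {
            printf("MISMATCH at %d: got 0x%08x, expected 0x%08x\n",
                   index, value, expected[index]);
            pass = 0;
        }
    }
    fclose(f);
    return pass;
}

int main(void)
{
    enum { N = 64 };
    unsigned expected[N];

    srand(2005);
    generate_stimulus("stimulus.txt", expected, N);

    /* Hypothetical simulator invocation. */
    if (system("./run_rtl_sim stimulus.txt result.txt") != 0)
        printf("warning: simulator did not exit cleanly\n");

    return check_results("result.txt", expected, N) ? 0 : 1;
}
```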
BURST consists of driver modules, each of which creates the test for one component. One module generates CPU instructions, and each supported peripheral device has its own driver module. Modules can interact and use a shared memory space, allowing them to create contention with each other. In general, most aspects of the test sequence are randomized, including the addresses used, the amount of memory used, the number of instructions or transactions, and so on. After each test is run, the final system state is checked against BURST's predicted values. If anything mismatches, BURST provides copious debug information to aid debugging. If the test passes, BURST begins generating the next test automatically. By being autonomous, BURST executes quickly, running thousands of tests per second in post-silicon testing.
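A driver-module structure along these lines might look like the sketch below; the struct layout, module names, and callbacks are hypothetical, not BURST's actual interface. Each module contributes its piece of the test and checks its piece of the result, and the outer loop keeps generating tests until a mismatch is found.

```c
/* A sketch of a driver-module interface; the struct layout, module names,
 * and callbacks are hypothetical, not BURST's actual API. Each module
 * generates its part of the test and checks its part of the result. */
#include <stdio.h>

struct module {
    const char *name;
    void (*generate)(unsigned seed);   /* create random traffic for one component */
    int  (*check)(void);               /* compare final state to the prediction   */
};

/* Stub CPU and network modules. A real module would generate traffic into
 * a shared memory region (so transactions can collide) and record the
 * results it predicts for that traffic. */
static void cpu_generate(unsigned seed) { printf("cpu: generating, seed %u\n", seed); }
static int  cpu_check(void)             { return 1; }
static void nic_generate(unsigned seed) { printf("nic: generating, seed %u\n", seed); }
static int  nic_check(void)             { return 1; }

static struct module modules[] = {
    { "cpu", cpu_generate, cpu_check },
    { "nic", nic_generate, nic_check },
};

int main(void)
{
    int nmodules = sizeof modules / sizeof modules[0];

    /* Autonomous loop: generate, run, check, repeat; stop on a mismatch. */
    for (unsigned seed = 1; seed <= 3; seed++) {
        for (int i = 0; i < nmodules; i++)
            modules[i].generate(seed);
        /* ... run the generated test on the hardware or simulator here ... */
        for (int i = 0; i < nmodules; i++) {
            if (!modules[i].check()) {
                printf("FAIL in %s module, seed %u\n", modules[i].name, seed);
                return 1;
            }
        }
    }
    return 0;
}
```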
Like many other random test tools, BURST has knobs to control how features are tested. However, BURST's knobs are intended for special circumstances and are not changed for normal testing. By removing the need to adjust the knobs, testing uses hardware resources around the clock and engineering resources can be spent on debugging or on developing tests for new features. Automation also enables a small team of less experienced engineers to efficiently utilize a large quantity of hardware. This accelerates the schedule by allowing testing to conclude sooner.

Focused Random Validation

Many random test generators have common coverage holes. The two main causes are being too random, and therefore slow to exercise complex cases, or using templates, which restrict the potential coverage. A better approach is called focused random testing: special random algorithms ensure that the most interesting functions are tested the most. When applied to testing a CPU, a focused random tool such as BURST generates instructions one at a time without using a template. Instructions are chosen with register values selected to avoid execution exceptions, and care is taken so that there are few if any restrictions on the possible instruction sequences. The focused randomness will create a series of the same instruction much more often than uniform randomness would, which provides faster coverage of complex cases.

BURST also uses multi-level randomness to create more useful tests. Each test randomly determines certain modes once, and then crafts the rest of the test within those modes. This is effectively the same as having the tool randomly turn the knobs, which eliminates knob fiddling. All of the BURST knobs are pre-set to reasonable values, so everything has a chance of occurring. The multi-level knobs ensure that the interesting cases get exercised much faster than ordinary randomness would allow.
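The sketch below illustrates the two-level idea in miniature; the class names, the repeat bias, and the percentages are invented for illustration and are not BURST's algorithm. One level of random decisions fixes the modes for the whole test, and a second level generates the instruction stream within them, producing long runs and focused tests without any hand-set knobs.

```c
/* A sketch of two-level ("focused") randomness, for illustration only;
 * the class names, repeat bias, and percentages are invented and are not
 * BURST's algorithm. Level 1 picks per-test modes once; level 2 generates
 * the instruction stream within those modes, so long runs of one
 * instruction appear far more often than uniform choice would give. */
#include <stdio.h>
#include <stdlib.h>

static const char *classes[] = { "alu", "load", "store", "branch" };

int main(void)
{
    srand(2005);

    /* Level 1: choose the modes for this whole test. */
    int favored    = rand() % 4;        /* instruction class to emphasize       */
    int repeat_pct = 20 + rand() % 70;  /* 20..89% chance to repeat the last op */

    /* Level 2: generate instructions one at a time within those modes. */
    int last = favored;
    for (int i = 0; i < 16; i++) {
        int cls;
        if (rand() % 100 < repeat_pct)
            cls = last;                 /* repeat: builds queue-filling runs */
        else if (rand() % 2)
            cls = favored;              /* lean toward this test's focus     */
        else
            cls = rand() % 4;           /* still allow anything to occur     */
        printf("%s\n", classes[cls]);
        last = cls;
    }
    return 0;
}
```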
2.3. BURST Portability and Reusability

A sophisticated automated validation tool requires significant resources to develop, often requiring expertise not available in most organizations. When designed well, the tool consists of modules that can be added or removed, allowing it to be ported to new architectures and configurations. This carries most of the previous work forward to the next project.

BURST is modularly designed so that components can be enabled or disabled to fit the system being tested. The CPU instruction generator is separate from the peripheral card drivers and the rest of the BURST infrastructure code. This allows BURST to be ported to new architectures with a minimum of work while reusing most of BURST's existing functionality.

Configuration testing is also a key factor in stressful system tests. Systems need to be tested with various peripherals present in order to ensure high coverage. BURST automatically detects the peripherals present, allowing quick hardware changes without continually recompiling the tool.

An automated random tool also solves the reusability problem inherent in directed tests. Hand-written tests tend to rely on specific cycle timings to hit the desired test cases. Although the tests may run on the next-generation platform, they no longer increase coverage or hit cases of interest. With an automated random tool, coverage on new systems stays high, even if the system has undergone significant changes, and work can be focused on supporting new features to increase coverage. This high leverage reduces the validation effort across generations of products.

By focusing on validation, BURST is less closely tied to chip implementation details. BURST achieves high coverage by exercising how the system is intended to operate, rather than the details of how the implementation works. This abstraction enables most BURST code to be reused when validating future, similar systems.

BURST Debug Features

BURST is designed to find design flaws and expects failures to occur. Unlike applications, which assume the hardware is working correctly, BURST defensively and continuously checks for correct operation. BURST therefore detects failures soon after they occur and provides information about the system state at the time of a failure.

Since BURST's purpose is to detect failures, it provides extensive debug information after a failure is detected. Every module in BURST keeps a detailed record of what the test was expecting to accomplish, what the wrong answer was, and many other test details. Many bugs can be traced to the failing hardware component just by analyzing BURST's debug information, without needing a logic trace.

BURST makes each test repeatable and independent. If a given test fails, it takes only a few seconds to restart BURST and run just that test again, even if the failure occurred after the machine had been running for weeks. The test behaves the same way when re-run, so the failure can be reproduced to help capture a trace or to try potential fixes.

BURST also has features to aid in capturing traces during post-silicon testing. It creates logic analyzer triggers to allow easy capture of a failure, and it creates tests whose traces are designed to fit within the storage capacity of the analyzer.

2.5. Automation Reduces Resources

Test automation maximizes two critical resources during validation: engineering hours and hardware utilization. With an automated validation tool, engineering time is spent on productive work such as developing code to stress new product features or debugging failures, not on tediously preparing tests on machines. Automated validation allows testing around the clock without requiring engineers to work multiple shifts, which increases system utilization.

Successes Using BURST

ProValid has shown that BURST is highly successful at finding bugs. BURST has found bugs in systems which had already been released into production but which had not run BURST before. Companies not using automated random validation techniques for their complex designs are potentially releasing products with bugs that could have been discovered with better validation.

BURST has been the only tool to find some bugs in CPUs, peripherals, and other IP after full testing had already been completed. For example, BURST generated the worst-case pattern test for a marginal DDR SDRAM memory problem and found a hold-time bug in a DDR controller. In the first case, months of Linux testing with applications had not discovered the problem.

Failures found by BURST can also be debugged faster than failures found by running applications. An embedded computer running Linux had disk failures that had been actively debugged for weeks with little progress. BURST was applied to the problem and, in less than a day, PCI traces showed a defect in the PCI host bridge.

3. Conclusions

Complex chip designs need the best validation tools available to find the tough bugs. Not only does BURST find bugs that other tools miss, it provides investment protection by migrating forward to new systems and by using your testing resources more efficiently.
