Texas Instruments Reduces Test Time using SwifTest-TTO Adaptive Test

Business Situation

Texas Instruments was looking for new ways to reduce test time on high-volume, mixed-signal devices. They were already deploying parallel testing, but wanted additional savings to meet their product margin goals.

Solution

A technical paper published by Scott Benner of Qualcomm at ITC 2001 reported that preliminary studies indicated that statistical sampling could be used to reduce test time on devices with high parametric test times. Pintail Technologies created SwifTest as the first commercial solution to deliver this capability. TI was an early adopter and worked with Pintail to make SwifTest production-worthy for their high-quality test floors.

Benefits

Results demonstrated that test time could be reduced by up to 20% with no impact on device yield or quality. (In later years, savings of up to 45% were achieved.) Complete audit trails are created. Furthermore, the software can be released to production so that it runs transparently on the testers, without interfering with normal test operations.

Editor's Note: This paper was first presented in May 2004 by Texas Instruments to the Mixed Signal Program of the Teradyne Users Group Meeting. TI was instrumental in qualifying Pintail's test time reduction solution. Every recommendation at the end of this report was eventually added to the product. TI went on to release the solution on additional tester platforms, including FLEX, J750, and their proprietary VLCT. Salland Engineering acquired Pintail in 2011.

Overview

Test cost reduction remains an important goal for most semiconductor companies, especially for mixed-signal IC devices. There are many well-known techniques to reduce test time, such as multi-site parallel testing, but a relatively new idea is to apply statistical process control (SPC) based sampling to device testing. This paper describes the successful use of such an approach, as well as the challenges encountered when implementing SPC sampling in high-volume production.

Texas Instruments, Inc. has been working with Salland Engineering to develop and qualify a suite of robust software tools called TestVision and SwifTest. The goal is to refine and improve the test process on Teradyne image-based tester platforms such as the Catalyst and A5 families. The method starts with analyzing historical test data to identify potential test sampling candidates. The final list of sampled test parameters is optimized by off-tester simulation and on-tester program verification. Once the program is implemented on the tester, the most demanding requirement is the capability to monitor all the test data in real time and apply statistical techniques to turn test sampling on or off for key parameters, depending on the performance of the silicon material.

Because process capability indices are used to determine whether sampling should continue, sampling is most effective on devices with high parametric content. This makes mixed-signal devices the most suitable choice for this sampling technique, since they typically have many parametric test parameters. A few device programs from different application areas, such as broadband and wireless, were selected for the sampling implementation at Texas Instruments. The average test time reduction with sampling has been observed to range from 5% to 20%. In the end, we attained the same quality level with SPC sampling while bringing more product to market faster and at lower cost. TI plans to extend the methodology to more devices and to other tester platforms. This paper presents the details of the sampling methodology and the lessons learned through the development and deployment phases.

Copyright 2012 Salland Engineering, BV.
Good Devices for Sampling

To maximize the potential for test time reduction, high-volume, mixed-signal devices are good choices for sampling. Since parametric tests often take longer to perform than their functional counterparts, mixed-signal devices offer the opportunity for robust improvement. Characteristics of good candidates are:

- Consumption of a large number of test cells
- Mature process technology in silicon manufacturing
- Stable and robust test program
- A substantial list of parametric tests
- Existing test yield above 80% at probe and/or package test.

A device may not be a good candidate for sampling if the above conditions are not met. After the analysis phase, if the potential test time reduction is found to be relatively low or does not meet the sampling program's goals, the device is deemed not a good candidate. Likewise, if a test program is going through revisions for yield improvement, it is better to wait until the program is finalized before preparing it for sampling. After a device is selected, the team should define an achievable set of objectives before preparing the program for sampling. The objectives should identify a set of acceptance criteria for determining whether a sampling program should be released for production use.

Objectives

The main objective is to use sampling to improve device test with imperceptible changes to device quality and with minimal effort on the part of the engineer. In addition, Texas Instruments was also interested in measuring the ROI provided by Salland's product. If a software product that costs a fraction of a new piece of equipment could produce the claimed results, the ROI on the product is not only high, but it also improves the ROI on existing capital. Likewise, ensuring the production viability of the technology in terms of durability, quality, and load testing is an important goal. Goals set for early experiments were:

- Device test time reduction > 15%
- Average throughput increase > 10%
- Test yield difference < 1%
- Bin fallout difference < 0.5%
- Wafer map (for probe programs) reasonably matched > 90%
- No quality or binning change issues
- No significant parametric data shift
- No system overhead or test floor integration issues.

We paid special attention to any quality issues due to sampling. A complete correlation of the test results is required to ensure all criteria are met. If one or more of the criteria are not met, the team has to examine the root cause and decide whether any potential risk to test quality or yield outweighs the benefit of test time reduction by sampling.

Sampling Methodology

Implementation involves five steps, illustrated in Figure 1:

1. Analysis
2. Program preparation
3. Test run
4. Correlation
5. Release to production

Step 1: Analysis

The analysis phase consists of using Salland's TestVision to analyze historical test data for the selected device. Data representing at least 10,000 units across the full range of testers and process variations should be used. The user determines which parametric tests are in statistical control (ISC) and simulates a sampling flow against the data. The sampling simulation compares the process capability indices Cp and Cpk of 100 percent testing versus a sampled flow. The statistics for each scenario are then presented for evaluation, including the process capability indices, outliers, normality, deviation percentages, and test escape results.
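The off-tester simulation described in Step 1 can be sketched in a few lines. This is a minimal illustration, not the TestVision implementation: it assumes plain float measurements with known spec limits (LSL/USL), and the function names (cp_cpk, simulate_sampling) are illustrative.

```python
import statistics

def cp_cpk(values, lsl, usl):
    """Process capability indices for one parametric test."""
    mu = statistics.mean(values)
    sigma = statistics.stdev(values)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

def simulate_sampling(values, lsl, usl, interval=10):
    """Compare capability of 100% testing vs. testing every Nth device,
    and count out-of-spec devices that a sampled flow would have skipped."""
    full = cp_cpk(values, lsl, usl)
    sampled = cp_cpk(values[::interval], lsl, usl)
    escapes = sum(1 for i, v in enumerate(values)
                  if i % interval and not (lsl <= v <= usl))
    return full, sampled, escapes
```

In this sketch, a test is a plausible sampling candidate when the sampled-flow Cpk stays close to the 100%-flow Cpk and the simulated escape count is zero.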
Discussions between the team members and device experts may further narrow the list, as certain parametric tests may be deemed too critical for sampling regardless of their high stability. Sampling candidates should have high Cpk values, low yield loss, almost no history of failure, and little impact on test yield. Tests that should not be designated as candidates for sampling include continuity, leakage, fuse trimming, register programming, and functional patterns. After several iterations of analysis, a final candidate list of sampling tests is selected. When this list is combined with test time information from the test profile, the potential test time savings from sampling can be estimated. While the focus is on tests that can be sampled, tests that are out of statistical control (OSC) may be analyzed for future improvement potential in both yield and throughput.

Step 2: Program Preparation

Preparing the test program for sampling is straightforward. By examining the test sequence and the structure of the test program, test groups are chosen within the same sequencer test routine to minimize the amount of effort and reduce test program revision. The goal is to disturb the test flow as little as possible, making correlation easier while maximizing the test time reduction.
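The savings estimate mentioned at the end of Step 1 is simple arithmetic: a candidate sampled one-in-N keeps only 1/N of its test time. A hedged sketch (the test names and times below are made up for illustration, not TI data):

```python
def estimated_savings(profile, candidates, interval=10):
    """Fraction of total test time removed when candidate tests run 1-in-N.

    profile: list of (test name, test time) pairs from the test profile.
    candidates: set of test names selected for sampling.
    """
    total = sum(t for _, t in profile)
    saved = sum(t for name, t in profile if name in candidates) * (1 - 1 / interval)
    return saved / total

# Illustrative profile (times in ms): parametric tests dominate savings.
profile = [("continuity", 5.0), ("gain_a", 10.0), ("thd", 12.0), ("func_pat", 73.0)]
print(round(estimated_savings(profile, {"gain_a", "thd"}, interval=10), 3))  # → 0.198
```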

Figure 1: Step-by-step quality-focused methodology

Sampled tests are not physically removed; rather, they are skipped over using a simple IF-THEN statement that calls Salland's thin Sampling Client. The Sampling Client determines whether the test should be skipped or run based on results from the real-time monitoring engine. For sampling to start, the user also needs to create a configurable device profile to store the baseline threshold and the sampling interval for each test candidate. The initial phase involves skipping a single test group from the test flow to determine whether it has any negative downstream effect on subsequent tests in the flow during lab testing. This is important to check for test dependencies and to prove that a group of tests can be turned on and off without negative side effects. Following successful correlation in the lab, the sampling-enabled program can be released for full correlation.

Step 3: Test Run

When a lot starts on a tester enabled with SwifTest, 50 to 100 devices are 100% tested to establish a baseline. The baseline is used to determine whether sampling candidates are in statistical control for the current lot. Any nonperforming test will be 100% tested to avoid any quality issues. For example, if the Cp or Cpk value for a selected test is below the preset threshold after the baseline, that test will not be sampled. After the baseline, Salland's SwifTest begins sampling selected test candidates at the specified interval, while continually monitoring incoming test data in real time to decide whether the statistics are still within acceptable thresholds. In the case of an exception violation, SwifTest disables sampling for that particular candidate while allowing sampling to continue on all other selected tests. An alert can be sent to a responsible engineer if configured. At lot end, the whole sampling process is completed with a comprehensive lot-end and traceability report for review.

Salland's SwifTest provides numerous safeguards to prevent any quality or yield issues due to sampling during lot testing. These are conservative measures to guarantee test quality, but they may be refined later to streamline the sampling process. Sampling safeguards:

- All tests are performed 100% until the baseline has been reached
- Sampling candidates will be 100% tested if the baseline data does not meet thresholds
- Sampling on each test candidate will revert back to 100% testing at any time if:
  - Test data flags an OSC condition (above/below threshold)
  - The monitoring engine is unable to track the data stream in real time for any reason (e.g., network interruption)
  - Tester inactivity causes a time-out condition.

Step 4: Correlation Exercise

A complete correlation exercise consists of evaluating all the results of sampling for test time, test yield and binning, and parametric data. Wafer map analysis is useful if sampling was performed at wafer probe. It is recommended to start the correlation exercise in a lab using several packaged parts, followed by one or two small lots or correlation wafers (for probe programs) on one tester, and then complete the study with a few lots on multiple testers using both the sampling and standard non-sampling programs. Using silicon material from different fab source lots is beneficial. Our recommended sequence for running the correlation exercise:

1. Initial correlation in a lab on a few packaged parts
2. Second correlation on a production tester using one or two small lots or wafers
3. Final split-lot evaluation on multiple testers using a few production lots

At each stage of the correlation, it is important to compare the results against the pre-defined acceptance criteria.
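The per-test decision logic from Step 3 — 100% test until the baseline, sample one-in-N while in statistical control, revert to 100% on an out-of-control condition — can be sketched as follows. This is a simplified illustration, not the actual Sampling Client; the class name, thresholds, and re-enable behavior are assumptions.

```python
import statistics

class SamplingState:
    """Tracks one sampling candidate test across a lot (illustrative sketch)."""

    def __init__(self, baseline=100, interval=10, cpk_threshold=1.33,
                 lsl=0.0, usl=1.0):
        self.baseline = baseline          # devices to 100% test first
        self.interval = interval          # then test 1-in-N
        self.cpk_threshold = cpk_threshold
        self.lsl, self.usl = lsl, usl
        self.results = []                 # measured values only
        self.seen = 0                     # devices seen (tested or skipped)
        self.sampling_enabled = False

    def _cpk(self):
        mu = statistics.mean(self.results)
        sigma = statistics.stdev(self.results)
        return min(self.usl - mu, mu - self.lsl) / (3 * sigma) if sigma else float("inf")

    def should_test(self):
        """Called once per device: True = run the test, False = skip it."""
        self.seen += 1
        if len(self.results) < self.baseline:
            return True                   # still building the baseline
        if not self.sampling_enabled:
            # enable sampling only if the data so far is in statistical control
            self.sampling_enabled = self._cpk() >= self.cpk_threshold
            if not self.sampling_enabled:
                return True               # below threshold: keep 100% testing
        return self.seen % self.interval == 0

    def record(self, value):
        """Feed a measured result back; revert to 100% on an OSC condition."""
        self.results.append(value)
        if self.sampling_enabled and self._cpk() < self.cpk_threshold:
            self.sampling_enabled = False
```

In the real tool this decision runs in the monitoring engine per candidate, with additional safeguards (network loss, tester time-out) that this sketch omits.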
Depending on the complexity of the device program, several iterations of data correlation, program debug, sampling preparation, and re-correlation may be needed until all the key objectives are satisfied. The main focus of the correlation exercise is to validate the functionality, test time reduction, and production worthiness of the sampling program in a manufacturing environment. Table 1 shows the goals and results for a broadband product sampled at final test.

Criteria                                    Goal      Results
Device test time reduction                  >15%      19.6%
Throughput increase                         >10%      14.5%
Yield difference                            <1%       0.11%
Binning difference                          <0.5%     0.0023%
No quality or bin-changing issues           None      None
No parametric shifts                        None      None
No system overhead or integration issues    None      None

Table 1: Correlation Results vs. Goals

Step 4a: Parametric Data Analysis

A simple data analysis technique for a sampling candidate test is to use the parametric trend graph to review the baseline data, the sampling rate, any significant shift in the mean and range, and whether sampling is turned off during device testing. Figure 2 shows two such parametric trend graphs: (1) with 10% sampling for the entire test, and (2) with sampling disabled in the middle when the test becomes out of control.

Figure 2: Parametric trends for standard vs. sampled test results

The correlation graph can be combined with the lot-end statistical summary and traceability report to determine whether sampling is behaving properly. For non-sampled tests, it is also critical to examine the parametric data to ensure there has been no impact from sampling other tests. If a test has no correlation issues in its parametric data, it is unlikely that there will be any bin or yield issues with that test.

Step 4b: Bin and Yield Comparison

A summary comparison of the lot and bin yield between the standard and sampling modes is necessary to assess any significant differences in test yields. If die ID information or die location from a wafer is available, a direct bin-to-bin comparison on a die-per-die basis may be preferred for evaluating the yield impact. Special attention must be paid to those die that pass under sampling mode but fail under standard mode. If skipping a test causes a few bad die to become test escapes under sampling, it may be better to exclude those tests from the list of sampling candidates. The acceptable amount of yield and binning change should be pre-defined in the objectives.

Step 5: Production Testing

After a sampling program is released to production, it is important to verify its reproducibility on multiple testers over an extended period of time. System downtime or production issues observed on a tester after sampling implementation reveal any potential weaknesses in the tool or perhaps in the test program itself. Table 2 shows the test throughput improvement from the final-test sampling program measured across five Teradyne A5xx production testers. The calculation of the test throughput includes the lot or wafer load time, device test time, and handler/prober index time, which provides a better measure of actual test cost reduction.
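The die-per-die bin comparison from Step 4b can be sketched as follows. This is an illustrative helper, not part of the Salland toolset: it assumes die are keyed by (x, y) wafer coordinates and that bin 1 is the passing bin.

```python
def find_test_escapes(standard_bins, sampled_bins, pass_bin=1):
    """Flag die that pass under the sampling flow but fail under the
    standard 100% flow (potential test escapes caused by sampling).

    standard_bins / sampled_bins: {(x, y): bin_number} for one split lot.
    Returns a list of ((x, y), standard_bin) pairs.
    """
    escapes = []
    for die, std_bin in standard_bins.items():
        smp_bin = sampled_bins.get(die)
        if smp_bin == pass_bin and std_bin != pass_bin:
            escapes.append((die, std_bin))
    return escapes

# Tiny illustrative wafer: die (0, 1) fails bin 3 under standard testing
# but passes under sampling, so it would be flagged as an escape.
standard = {(0, 0): 1, (0, 1): 3, (1, 0): 1, (1, 1): 1}
sampled  = {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 2}
print(find_test_escapes(standard, sampled))  # → [((0, 1), 3)]
```

Any flagged die points at a test that should be reviewed and possibly removed from the sampling candidate list, per the guidance above.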
The best test time savings were observed on testers with two heads running the same sampling final-test program. Lower test time savings came from testers running sampling programs at probe with lower test yields.

Criteria                                Goal        Results
Device test time reduction              >15%        18.7%
Throughput increase                     >10%        12.1%
Continuous, uninterrupted operation     >2 weeks    >8 weeks

Table 2: Final test results over five testers

What We Have Learned

Special manufacturing software installed on a tester, such as a laser trimmer controller, creates additional complications that Salland had to overcome. Any integration issues within the existing test infrastructure must be addressed first. Another production issue involved ensuring the tool does not time out during long idle periods caused by potential handler jams, misalignments, auto-calibration, or setup issues.

Initial training and coaching are important to familiarize test engineers with the tools, prepare sampling programs, run the correlation exercises, and release the sampling programs to production. A team approach is encouraged until test engineers become proficient in working through the sampling process. The software was designed to run transparently on the tester with no special operator intervention. We learned that operators eventually needed to be informed of its presence, as they were in disbelief that the program would complete normal jobs 20% faster!

Support is important. Salland continues to provide excellent and timely technical support to resolve these types of issues at Texas Instruments. Each of TI's manufacturing facilities may have site-specific requirements and differences, and the tool must be flexible and extensible enough to support them. When sampling does not start or becomes disabled, it is important to identify the root cause, e.g., a license issue or program problem, and resolve it.

Future Improvements

While the current Salland tools are showing good results, some improvements are recommended to streamline the implementation process and maximize the test time savings.
Suggestions from the users at Texas Instruments include the following items:

- Tighter integration within the existing TI manufacturing infrastructure
- Automatic sampling rate adjustment according to silicon material quality
- A flexible scheme to deal with new program releases
- A dynamic sampling list of test candidates based on current and historical data
- An integrated outlier strategy for parametric data
- Time-based SPC rules, such as the Western Electric (WECO) rules, in the statistical engine
- An alert notification system for whenever sampling does not start or becomes disabled
- A test floor monitor for sampling activities
- A graphical report summarizing the status of all sampling activities on the production test floor for management.

Conclusions

To reduce test cost within Texas Instruments, Inc., Salland's Test Improvement Suite has been used to enable sampling on device testing. This toolset provides the capability to monitor and evaluate the performance of parametric data in real time to determine whether a test remains in statistical control. The toolset allows a sampling candidate, while in control, to be sample tested at a user-selected interval, saving test time and improving throughput. The correlation results from several test programs show that the methodology and Salland's TestVision and SwifTest tools are viable within a manufacturing environment. Gradually, more and more test programs are being converted at Texas Instruments to take advantage of the test cost reduction provided. An average saving of 5% to 20% has been observed in production by employing the test sampling technique. Lessons learned in the development and implementation phases will lead to further improvements of the tool. Inevitably, we will face new challenges as we adapt it to other types of device programs and tester platforms.

www.salland.com