Technical Note: Memory Management in NAND Flash Arrays




TN-29-28: Memory Management in NAND Flash Arrays

Overview

NAND Flash devices have established a strong foothold in solid-state mass storage, both as removable storage media and as embedded storage. As NAND Flash adoption has increased, so has the variety of configurations in which NAND Flash devices are deployed. This proliferation is due in part to the many ways NAND Flash memory arrays can be combined. How the arrays are combined can have a major impact on application performance; it also influences the resources required to organize and maintain the memory array.

This technical note describes common NAND Flash memory-management methods, the advantages and considerations associated with each, and suggestions for the most effective use of a NAND Flash memory array. A solid understanding of NAND Flash devices is assumed for designers incorporating the concepts discussed here. The examples are based on the Micron NAND Flash device; for detailed information, refer to the full data sheet at www.micron.com/products/nand.

tn2928_mem_mgmt_nand_array.fm - Rev. A 7/07 EN ©2005 Micron Technology, Inc. All rights reserved. Products and specifications discussed herein are for evaluation and reference purposes only and are subject to change by Micron without notice. Products are only warranted by Micron to meet Micron's production data sheet specifications. All information discussed herein is provided on an "as is" basis, without warranties of any kind.

Combining NAND Flash memory arrays can improve performance by increasing the apparent size of the NAND Flash data register and the apparent size of individual blocks. Parallel configurations allow a higher volume of bytes to be read, programmed, or erased simultaneously.

Benefits of Combining NAND Blocks across Chip Enables (CE#s)

The asynchronous nature and block-based memory structure of NAND Flash devices make combining devices across chip enables (CE#s) a relatively straightforward process (see Figure 1). Shorting CE#s together is a common practice when combining NAND Flash blocks; by keeping the CE#s separate instead, as shown in Figure 1, a design retains the flexibility to communicate with only one NAND device at a time.

Figure 1: NAND Flash Blocks Combined Across CE#s
[Figure: a NAND controller drives shared CLE, ALE, RE#, WE#, and WP# signals to two NAND devices; each device has its own CE# (CE#, CE2#), I/O bus (I/O[7:0], I/O[15:8]), and R/B# signal.]

Creating Superblocks

A CE# superblock uses shared command signals (CLE, ALE, WE#, RE#, WP#, R/B#), with input/output signals (I/O[7:0]) and CE# signals distributed among two or more NAND Flash memory arrays on two or more CE#s. This configuration makes it possible to combine two or more NAND Flash blocks across CE#s to form the CE# superblock. Figure 1 depicts an implementation using two NAND devices with blocks combined across CE#s; Figure 2 shows the creation of a CE# superblock from two NAND Flash blocks.

Figure 2: NAND Flash Blocks Combined to Create a CE# Superblock
[Figure: a CE# superblock with 4,224 bytes per page, created by combining one block from device #1 (2,112 bytes per page) with one block from device #2 (2,112 bytes per page) across the NAND Flash CE#s.]

Shared-Signal Benefits

Each NAND Flash memory array sharing the command signals receives the same command instructions whenever its own CE# is active (LOW). For example, when a PROGRAM PAGE (80h-10h) command is sent on the shared command signals, each NAND Flash memory array responds and executes a PROGRAM PAGE (80h-10h) operation. If the I/O[7:0] and CE# signals are not shared among multiple NAND Flash memory arrays, each array can be given different addresses and different data when executing operations. This enables selection of a single NAND Flash memory array for specific operations when it is advantageous to do so.

Superblock Configuration

A CE# superblock generally comprises a shared CE# and the same physical block from each NAND Flash memory used. Different physical blocks in the NAND Flash memory arrays can be used; however, using the same physical block in each NAND Flash device simplifies block management. The system design designates which physical blocks in each array are combined as a CE# superblock. When the superblock is accessed, the combined blocks on each CE# are accessed at the same time, providing parallel access to the number of bytes in a page (2,112 bytes) multiplied by the number of combined arrays sharing the CE#.
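The page-size arithmetic above can be sketched in a few lines of Python. This is an illustrative model, not Micron code; the concatenation-style column mapping is an assumption, since the controller is free to stripe data across devices however the system design chooses:

```python
PAGE_BYTES = 2112  # bytes per page of a single NAND array, per the text

def superblock_page_bytes(num_ces: int) -> int:
    """Apparent page size of a CE# superblock: one page per combined array."""
    return num_ces * PAGE_BYTES

def split_column(superblock_col: int, num_ces: int):
    """Map a byte offset within the superblock page to (CE# index, column
    within that device's page), assuming the controller simply
    concatenates the per-device pages."""
    assert 0 <= superblock_col < superblock_page_bytes(num_ces)
    return divmod(superblock_col, PAGE_BYTES)

print(superblock_page_bytes(2))  # 4224, matching Figure 2
print(split_column(3000, 2))     # (1, 888): byte 3000 lives on the second CE#
```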

Combining across the CE#s makes it possible to execute the same command on all of the combined arrays in parallel, and to input data, read data, or erase data on all of the combined arrays in parallel. The superblock enables a significantly higher volume of data movement across the combined arrays, with X times the number of bytes programmed or read for each tWC or tRC cycle issued by the NAND controller (where X is the number of combined arrays), while incurring only the tPROG or tR array access time of a single program or read operation. The superblock also improves erase performance: two blocks, one on each CE#, are erased within the tBERS time of a single BLOCK ERASE operation.

Benefits of Combining NAND Blocks across NAND Planes

Similar to combining NAND blocks across CE#s, combining NAND blocks across NAND planes creates a plane superblock made up of two or more NAND Flash blocks. The difference is that the combined blocks of a plane superblock exist on the same NAND Flash die, and therefore on the same CE#. The plane superblock has a page size of X bytes, where X is the number of NAND planes combined multiplied by the number of bytes per page. Figure 3 depicts creation of a plane superblock using two NAND Flash blocks that reside on different planes of the same CE#.

Figure 3: NAND Flash Blocks Combined to Create a Plane Superblock
[Figure: a plane superblock with 4,224 bytes per page, created by combining two 2,112-byte-per-page NAND Flash blocks from separate planes of a single device: block 0 of plane 0 (even block) with block 1 of plane 1 (odd block).]

A plane superblock is generally made up of adjacent physical blocks, one from each plane (e.g., block 0 with block 1, or block 2 with block 3). Micron NAND Flash devices can use either adjacent or nonadjacent physical blocks on different planes to create plane superblocks. The system design determines which physical blocks in each NAND Flash plane are combined as a plane superblock.

The performance improvement from plane superblocks is not as significant as that realized with CE# superblocks. The independent data signals going to the NAND Flash memory arrays combined across CE#s deliver parallel command, address, and data transmissions. In contrast, commands, addresses, and data are transmitted serially to each NAND Flash plane in a plane superblock, requiring more time to complete all of the transactions; the second-plane data input also requires additional I/O time.

Achieving Optimal Performance

To obtain the best performance in a linked NAND Flash configuration, use Micron two-plane commands. These commands are described in detail in the data sheet at www.micron.com/products/nand. Supplementary information regarding two-plane commands is available in Technical Note TN-29-25, "Improved Performance Using Two-Plane Commands," at www.micron.com/products/nand/technotes.
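The timing difference between the two superblock styles can be made concrete with a rough model. The tWC and tPROG values below are assumed example figures, not data sheet numbers, and the model deliberately ignores command/address cycles and the two-plane dummy-busy time:

```python
PAGE_BYTES = 2112
T_WC_NS = 25         # assumed write-cycle time per byte (example value)
T_PROG_NS = 300_000  # assumed array program time (example value)

def ce_superblock_program_ns(num_ces: int) -> int:
    """CE# superblock: independent I/O buses load every device's page in
    parallel, then all arrays program concurrently, so data-input time
    does not grow with the number of CE#s."""
    return PAGE_BYTES * T_WC_NS + T_PROG_NS

def plane_superblock_program_ns(num_planes: int) -> int:
    """Plane superblock: a single I/O bus, so each plane's page is clocked
    in serially before the planes program concurrently."""
    return num_planes * PAGE_BYTES * T_WC_NS + T_PROG_NS

print(ce_superblock_program_ns(2))     # 352800 ns for 4,224 bytes
print(plane_superblock_program_ns(2))  # 405600 ns for the same 4,224 bytes
```

Under these assumptions the plane superblock pays an extra page-load of I/O time, which is why the text rates its improvement below that of the CE# superblock.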

Benefits of Combining NAND Blocks across CE#s and NAND Planes

Combining NAND Flash blocks across both CE#s and NAND planes creates a megablock made up of four or more NAND Flash blocks. Two sets of two-plane commands can be issued simultaneously to a megablock using two independent data channels. Creating a megablock delivers the greatest possible throughput in Micron NAND Flash devices. Figure 4 shows the creation of a megablock using four NAND Flash blocks from two different NAND Flash devices.

Figure 4: NAND Flash Blocks Linked to Create a Megablock
[Figure: a megablock with 8,448 bytes per page, created by combining four 2,112-byte-per-page NAND Flash blocks between two devices: block 0 of plane 0 (even) and block 1 of plane 1 (odd) of device 1, combined across planes, then linked across CE#s with the corresponding block pair of device 2.]
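As a sketch of how a controller might address the 8,448-byte megablock page of Figure 4, the following assumes a simple CE#-major, plane-minor concatenation; the actual mapping is a system-design choice, not something the device dictates:

```python
PAGE_BYTES = 2112  # bytes per page, per plane

def megablock_locate(col: int, num_ces: int = 2, num_planes: int = 2):
    """Map a byte offset within the megablock page to
    (CE# index, plane index, column within that plane's page)."""
    assert 0 <= col < num_ces * num_planes * PAGE_BYTES
    chunk, offset = divmod(col, PAGE_BYTES)  # which 2,112-byte slice
    ce, plane = divmod(chunk, num_planes)    # CE#-major, plane-minor order
    return ce, plane, offset

print(megablock_locate(0))     # (0, 0, 0): start of the page
print(megablock_locate(6500))  # (1, 1, 164): falls in the fourth slice
```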

Linking Trade-offs

As with most memory-architecture options, there are trade-offs when organizing NAND Flash memory arrays by linking NAND blocks across CE#s and/or NAND planes. With the most common approach (linking the same physical blocks together across CE#s, and linking physically adjacent NAND blocks together across NAND planes), the potential trade-off is reduced NAND Flash storage. This common approach ensures that the same cluster of blocks in each linked arrangement is always linked together, and it facilitates easy file management and memory maintenance. If any of the individual blocks is bad, for simplicity the file-management software will assume that every block combined with that bad block is unusable.

These linking arrangements have drawbacks. Linking NAND blocks together without consideration for bad blocks can reduce available storage capacity: if just one of the combined blocks is bad, all of the blocks combined with it are also considered bad. In an effort to realize the benefits of linking noted previously, good blocks can end up designated as bad, reducing available storage. Each designer must weigh the benefits of linking against the potential for reduced storage.

By combining NAND Flash blocks blindly across CE#s and/or NAND planes, without first considering any bad blocks that may be present, the total number of bad blocks in a system increases artificially; the good blocks combined with bad blocks can double the total bad-block count. As a result, a NAND Flash device that meets its data sheet specification for the number of bad blocks may still fail to provide the minimum number of valid blocks required by the system. This artificial memory loss grows as more blocks are combined, particularly if future applications use greater numbers of NAND Flash devices or devices with more than two planes.

Effective Linking Methods

Designers can minimize the number of good blocks artificially made bad by being linked to bad blocks. The linking scenarios that follow demonstrate the differences between linking methods and the best approach for conserving the greatest number of good blocks.

Block Preservation Linking Across CE#s

There is little chance that a bad physical block on one device's CE# will also be bad on the CE# of another linked device. This being the case, linking CE#s without checking the viability of the blocks being linked can cause bad-block doubling when a bad block is unknowingly linked with a good block. Bad-block doubling automatically reduces the number of valid blocks and, thus, the available density. Designers can use linking to their advantage by first checking for physical bad blocks in a multiple-NAND-CE# environment, then linking the identified bad blocks from each device together across the CE#s. This method improves throughput and preserves available density without artificially increasing bad-block overhead.

Same-Block Linking

For example, consider two NAND Flash devices with the same physical blocks in each device linked together across CE#s (see Figure 5). Each device has only three bad blocks, and the bad blocks in one device are at different physical block locations than the bad blocks in the other. When the same physical blocks are linked across the CE#s of the two devices, and each bad block is at a unique physical block location, the total number of bad blocks between the two devices doubles from six to twelve.

Figure 5: Linking Using the Same Physical Blocks
[Figure: devices #1 and #2, each showing blocks 0 through 4095, with the same physical NAND Flash blocks combined across CE#s. Legend: initial bad blocks in the NAND Flash, and initially good blocks that become bad due to being combined with a bad block.]
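The six-to-twelve doubling of Figure 5 can be checked with a small model. The block numbers below are hypothetical; the count holds whenever the bad-block locations in the two devices do not overlap:

```python
def unusable_after_same_index_linking(bad1: set, bad2: set) -> int:
    """Physical blocks lost when block i of device #1 is always linked to
    block i of device #2: a superblock is unusable if either half is bad,
    and its good half is wasted along with it."""
    return 2 * len(bad1 | bad2)

# Three bad blocks per device, all at unique locations, as in Figure 5:
print(unusable_after_same_index_linking({2, 9, 4090}, {0, 7, 4093}))  # 12
```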

Different-Block Linking

Now consider the same two NAND Flash devices, with three bad blocks in each device at different locations (see Figure 6). By combining the bad blocks together across the CE#s, rather than combining the same physical block location on each device, only bad blocks are combined with bad blocks; no good blocks are wasted, and the total bad-block count between the two NAND Flash devices remains at six.

Figure 6: Linking Using Different Physical Blocks
[Figure: devices #1 and #2, each showing blocks 0 through 4095, with different physical NAND Flash blocks combined across CE#s so that the initial bad blocks in each device are linked to each other.]

Block Preservation Linking Across Planes

The concept presented for combining across CE#s also applies when linking NAND Flash blocks across planes within a die. When adjacent physical blocks are combined arbitrarily across the planes of a die, the potential for linking good blocks to bad increases. By verifying block validity and linking bad blocks to bad blocks across planes, rather than automatically linking adjacent blocks, the total bad-block count is unchanged, no good blocks are wasted, and available density is preserved.

Optimized Linking

Managing block linking makes it possible to preserve available device density by preventing artificial bad-block multiplication. The process is somewhat more complex than simply combining the same blocks in each device via CE#, or adjacent blocks across planes: it requires dynamically tracking blocks between CE#s and/or planes over the lifetime of the NAND Flash devices. The payoff for managing block linking up front includes reduced bad-block overhead, maximum available density, and less chance that the application will fall below minimum density requirements over the operational life of the device.
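A minimal sketch of the bad-aware pairing described above (illustrative only: it assumes the two devices report equal bad-block counts, and real firmware would also re-pair blocks that go bad in service):

```python
def link_blocks(bad1: set, bad2: set, num_blocks: int):
    """Pair blocks across two CE#s so bad blocks are linked to bad blocks,
    preserving good-block density. Returns (device-1 block, device-2 block)
    pairs: good blocks pair with good blocks, bad with bad."""
    good1 = [b for b in range(num_blocks) if b not in bad1]
    good2 = [b for b in range(num_blocks) if b not in bad2]
    pairs = list(zip(good1, good2))           # good-with-good superblocks
    pairs += zip(sorted(bad1), sorted(bad2))  # bad-with-bad (sacrificial)
    return pairs

bad1, bad2 = {1, 5}, {3, 7}
pairs = link_blocks(bad1, bad2, num_blocks=8)
usable = sum(1 for a, b in pairs if a not in bad1 and b not in bad2)
print(usable)  # 6 usable superblocks; naive same-index linking would leave only 4
```

Same-index linking of these two hypothetical devices would waste four good blocks; matching bad to bad wastes none, which is the density argument the section makes.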

Summary

Performance in NAND Flash solid-state storage can be improved by using an architectural and system-level approach to create various types of linked superblocks. It is important to consider the side effects that linking NAND Flash blocks together may impose on a system; the actual effects depend on how block linking is managed. Each of the two possible linking methods (linking based on physical block location, or dynamic linking) has inherent advantages and disadvantages.

The easiest approach to implementing and maintaining a linked system is to link NAND Flash blocks together based strictly on physical location in the NAND device, with no regard for any bad blocks that may already exist in the system. This approach carries a high potential for reduced available NAND Flash density over the operational life of the system, as blocks go bad over time.

A more efficient approach is dynamic linking: initially pairing blocks that are already bad, and later pairing any that go bad over time. This approach is more difficult to implement and maintain, but it retains more valid-block density and reduces the risk of falling below minimum density requirements as blocks go bad over time.

The latest information on Micron NAND Flash devices is available at www.micron.com/products/nand/.

8000 S. Federal Way, P.O. Box 6, Boise, ID 83707-0006, Tel: 208-368-3900, prodmktg@micron.com, www.micron.com, Customer Comment Line: 800-932-4992. Micron, the M logo, and the Micron logo are trademarks of Micron Technology, Inc. All other trademarks are the property of their respective owners.