VIDEO PROCESSING FOR DLP™ DISPLAY SYSTEMS




Vishal Markandey, Todd Clatanoff, Greg Pettitt
Digital Video Products, Texas Instruments, Inc.
P.O. Box 65502, MS 4, Dallas, TX 75265

ABSTRACT

Texas Instruments Digital Light Processing™ (DLP™) technology provides all-digital projection displays that offer superior picture quality in terms of resolution, brightness, contrast, and color fidelity. This paper provides an overview of the digital video processing solutions developed by Texas Instruments for the all-digital display: progressive-scan conversion, digital video resampling, picture enhancements, color processing, and gamma processing. The real-time implementation of the digital video processing is also discussed, highlighting the use of the Scanline Video Processor (SVP) and the development of custom ASIC solutions.

INTRODUCTION

The Digital Micromirror Device™ (DMD™) is a binary spatial light modulator developed at Texas Instruments. The DMD consists of an array of movable micromirrors functionally mounted over a CMOS SRAM. Each mirror is independently controllable and is used to modulate reflected light, mapping a pixel of video data to a pixel on the display. A DMD mirror is controlled by loading data into the memory cell located below the mirror. The data electrostatically controls the mirror's tilt angle in a binary fashion, the mirror states being either +10 degrees (ON) or -10 degrees (OFF). Light reflected by the ON mirrors is passed through a projection lens and onto a screen. The DMD forms the heart of Digital Light Processing (DLP) all-digital display systems. DLP systems are being developed in various forms, suitable for applications such as conference room projectors, institutional projectors, home theater, standard television, high definition displays, and motion pictures.
These systems are characterized by:

All-digital display: DLP display systems are completely digital in that, except for the A/D conversion at the front end, all data processing and display are digital.

Progressive display: DMD displays are inherently progressive, displaying complete frames of video. For interlaced inputs, such as most current video inputs, this necessitates an interlace-to-progressive scan conversion. Progressive-scan display also improves display quality by removing interlace artifacts such as flicker.

Square pixels, fixed display resolution: DMD display resolution is fixed by the number of mirrors on the DMD. This, combined with the 1:1 aspect ratio of the pixels, requires resampling of the various input video formats to fit onto the DMD array.

Digital color creation: The spectral characteristics of the color filters and the lamp are coupled to digital color processing in the system.

Digital display transfer characteristic: DMD displays exhibit a linear relationship between the gray scale value used to modulate the micromirrors and the corresponding light intensity. A degamma process is performed as part of the video processing, prior to DMD display, to compensate for video signal gamma and prepare the data for DMD display.

Figure 1: Video Processing For DLP Display Systems (block diagram: front-end processing — A/D conversion, Y/C separation, chroma demodulation, ACC, ACK; interpolative processing — progressive-scan conversion, resampling, picture enhancements; back-end processing — color space conversion, degamma, error diffusion; DMD display. Sources: NTSC, PAL, PALplus; analog/digital; component/composite.)

The requirements described above are addressed in the video processing part of DLP display systems. Figure 1 shows the video processing functional block diagram of a typical DLP display system. The first block, labeled front-end processing, performs signal decoding operations. The input signal, if composite, is subjected to Y/C separation and chroma demodulation. Additional operations include Automatic Chroma Control (ACC) and Automatic Color Killer (ACK). The signal is digitized; depending on the front end, the A/D conversion may occur either before or after Y/C separation. Digital Y/C data output from this block is subjected to interpolative processing, where the data undergoes interlace-to-progressive scan conversion, resampling, and picture enhancements. Picture enhancements include luminance and chrominance sharpening, and noise reduction. The enhanced, progressively scanned Y/C data is then passed through a color space conversion to obtain RGB data. This data is subjected to a degamma operation to remove the gamma imposed on the signal at transmission. During the degamma operation, error diffusion can be used as a means of subjectively improving the non-linear digital remapping process. The linearized, progressive RGB data is then reformatted into bit-plane data which is used to drive the DMD display using a Pulse Width Modulation (PWM) technique []. The following sections describe interlace-to-progressive scan conversion, resampling, picture enhancements, and degamma/error diffusion.
PROGRESSIVE-SCAN CONVERSION

The progressive nature of the DMD display requires interlaced data (such as NTSC and PAL video) to be converted to progressive form before it can be displayed on the DMD. Progressive-scan conversion is the process of creating new scan lines between the existing lines, providing double the vertical sampling rate of interlaced scanning. Progressive-scan conversion has the additional advantage of reducing interlace scanning artifacts such as interline flicker, raster line visibility, and field flicker. These artifacts are particularly objectionable for the larger display systems where projection displays find their main application. Progressive-scan conversion is a well-studied field [1,2,3,4,5], with algorithms ranging from simple line repeat to complex variations of motion adaptive interpolation.

Figure 2: Progressive-Scan Conversion. (a) Lines B1-B4 of the current field, with co-sited samples A, C, D from neighboring fields; X is the sample to be created. (b) Candidate techniques:
Intra-field filters: X = f(B1, B2, B3, B4, ...)
Line repeat: X = B2 or B3
Line average: X = (B2 + B3)/2
Inter-field filters: X = f(A, B, C, D, ...)
Motion-adaptive: X = f(A, B, C, D, ..., m), where m is the motion signal at X

Figure 3: Motion Adaptive Progressive-Scan Conversion. Base motion signal estimation is followed by temporal filtering (m = MAX(m1, m2, m3, m4, m5)), spatial (vertical and horizontal) low-pass filtering, and motion adaptive interpolation X = m(B2+B3)/2 + (1-m)C.

The basic problem addressed in progressive-scan conversion is shown in Figure 2a. The solid lines show the input interlaced signal and the dashed line shows the spatial location where data is to be created by interlace-to-progressive scan conversion. The data at X can be created by the various proscan techniques listed in Figure 2b. Intra-field filters use data from the same field; the filters can range from multi-tap FIR filters to simple line average or line repeat. The latter two are indicated separately in Figure 2b because of their common use. The major advantage of intra-field filters is that, unlike other progressive-scan techniques, they do not use multiple fields of data and hence do not require field data storage, translating into lower system implementation cost. The penalty is paid in performance, intra-field filters being characterized by horizontal line flicker, still-area flicker, and low resolution. In view of such performance issues, this class of progressive-scan conversion is primarily useful in low-end applications. Various filters in this class were designed and tested for DLP display; it was concluded that, considering the limited performance of the class as a whole and its low-cost target applications, multi-tap filters offer little advantage over the cheaper line average/line repeat techniques.
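As a concrete illustration, the two cheapest intra-field techniques of Figure 2b can be sketched as follows. This is a minimal NumPy sketch, not TI's implementation; the field is assumed to be a 2-D array of field lines, and the edge handling is a simplifying assumption.

```python
import numpy as np

def proscan_intra_field(field, mode="average"):
    """Interlace-to-progressive conversion using intra-field data only.

    field: 2-D array of H field lines; returns 2*H progressive lines.
    "repeat" duplicates the line above (line repeat: X = B2);
    "average" creates X = (B2 + B3) / 2 between neighboring lines.
    """
    h, w = field.shape
    frame = np.empty((2 * h, w), dtype=field.dtype)
    frame[0::2] = field                      # existing field lines
    if mode == "repeat":
        frame[1::2] = field                  # X = B2
    else:
        below = np.roll(field, -1, axis=0)   # B3 (line below)
        below[-1] = field[-1]                # bottom edge: repeat last line
        frame[1::2] = ((field.astype(np.int32) + below) // 2).astype(field.dtype)
    return frame
```

Both variants need no field store, matching the cost argument above; the created lines depend only on the current field.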
The combination of data from multiple fields offers the potential of improving the performance of progressive-scan conversion by removing the artifacts noted above. A major concern in the design of such techniques is their sensitivity to the presence of motion or changes in the picture. The use of inter-field data in moving parts of the video can cause the appearance of tearing artifacts in the picture. On the other hand, the use of intra-field data in still parts of the video can cause artifacts such as loss of resolution and flicker. Inter-field filters that combine data from multiple fields in a pre-determined fashion are offered by some vendors. The combination of data from multiple fields can be realized in either a linear or non-linear fashion. Linear combinations are inherently not adaptive to motion and hence must be biased in favor of intra-field filtering to avoid tearing artifacts; this, however, enhances other artifacts such as loss of resolution and flicker. Non-linear combinations, such as median filters, offer the potential of data adaptation and are an area of active research [4,5]. Various filters in this class were designed and tested for DMD display; it was concluded that since the filters are data adaptive rather than motion adaptive, they can display the artifacts mentioned above depending on data values. Motion adaptive techniques that compute a motion-weighted combination of inter- and intra-field data offer the potential of overcoming the artifacts associated with the other techniques noted above. Motion detection is a critical step in these techniques, because failure to detect motion can result in the use of inter-field data in moving parts of the video, where it causes tearing artifacts. On the other hand, oversensitive motion detection can be triggered by noise, resulting in intra-field data in still parts of the picture and a noticeable loss of resolution. Thus, there is a need to balance the algorithm's motion sensitivity with its ability to provide good resolution.
We have developed motion detection techniques that explicitly address this issue and provide good picture quality in both moving and still picture areas. Figure 3 shows a functional block diagram of motion adaptive progressive-scan conversion. A base motion signal is estimated for location X of Figure 2a by performing temporal data differencing. This signal is subjected to a non-linear clipping function, and temporal and spatial filtering are applied to shape the motion signal. Temporal filtering produces the maximum motion value from among values in the current and past fields. Following the temporal filter are spatial (vertical and horizontal) low-pass filters. These filters smooth and spread the motion signal, such that the filtered signal is adequately expanded around moving edge boundaries; insufficient spreading would result in noticeable tearing artifacts caused by combining inter-field data in moving picture areas. After completion of all filtering and non-linear operations, the motion signal is normalized to values ranging between 0 and 1. The normalized motion signal, m, is then used as a weighting factor between the inter- and intra-field components. For large motion values, interpolation is biased towards the intra-field components, as required by the variation between successive video fields; use of inter-field data in the presence of motion would result in visible motion artifacts due to the lack of correlation between the current and previous fields. For small motion values, a high degree of temporal correlation exists between adjacent video fields, which calls for heavily weighting the inter-field component. The general motion adaptive progressive-scan conversion technique described above forms the basis for a family of algorithms that we have developed. These algorithms cover a wide range of performance and associated cost.
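The per-pixel interpolation of Figure 3 can be sketched as a small function. This is a hypothetical scalar version: B2 and B3 are the intra-field neighbor lines, C is the co-sited sample from the previous field, and m is the normalized motion signal after all filtering.

```python
def motion_adaptive_pixel(b2, b3, c, m):
    """Motion-adaptive proscan output: X = m*(B2+B3)/2 + (1-m)*C.

    m = 0 (still area): use the previous-field sample C directly.
    m = 1 (full motion): fall back to the intra-field line average.
    """
    assert 0.0 <= m <= 1.0
    return m * (b2 + b3) / 2.0 + (1.0 - m) * c
```

Intermediate m values blend the two estimates, which is what makes the balance between tearing (m too small in motion) and resolution loss (m too large in still areas) visible in this one line.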
Each algorithm was developed for a specific base motion estimation technique. With reference to Figure 2a, motion estimation at X can be performed by computing a difference between fields B and C (field differencing), B and D (frame differencing), or a combination of A and C with B and D (multiple field differencing). For each such base motion estimation, the temporal/spatial filters, as well as the motion adaptive interpolation, were optimized. Table 1 lists the algorithms with estimated ASIC gate counts that benchmark their complexity, their external field memory requirements, and comments on performance. Algorithms from this family are used in various DMD display applications representing various performance/cost trade-offs.

Algorithm          ASIC Size (gates)   Field Memories   Performance
Line Repeat        0                   0                Poor: flicker, jaggies
Line Average       6k                  0                Poor: blurriness, flicker
Intra-field FIR    46k                 0                Poor-Fair: blurriness, flicker
M.A. Field Diff.   47.7k               1                Fair: high-resolution loss
M.A. Frame Diff.   86.9k               2                Good
M.A. Temporal      90.9k               3                Best

Table 1: Progressive-Scan Algorithms

DIGITAL RESAMPLING

Digital resampling (or scaling) is required for resizing video data to fit the DMD's pixel array, expanding letterbox video sources (various display feature modes), and maintaining correct aspect ratio for the square-pixel DMD display. Digital resampling can be used to maximize the active pixel area in a fixed-resolution optical configuration by appropriately resizing the video data to fit within the active optical window. Various display modes, such as cinema, panorama, etc., can be offered where the user selects a picture resizing option for letterbox source material. Also, digital input data may be available in non-square-pixel form (e.g. CCIR 601 component video with 720 active pixels per line). This data is digitally resampled to square-pixel form for display on the square-pixel DMD array. A typical DLP display system uses digital resampling in both horizontal and vertical directions. The resampling filters are optimized for the high spatial frequency display characteristics of the DMD and for the degree of computational complexity allowed in real-time applications. Table 2 shows an example case, for a DLP projection system based on a DMD resolution of 800x600 pixels. Scaling requirements are shown for various input sources including NTSC, PAL, PALplus, and computer graphics.
In this table, an M:L resampling filter implies converting M input samples into L output samples.

Source (Fs = 14.75 MHz)   Horz. Resample   Vert. Resample
NTSC                      -                5:6
NTSC Letterbox            9:10             3:4
PAL                       -                -
PAL 16:9                  6:7              8:7
PALplus                   6:7              6:7
VGA                       4:5              4:5
SVGA                      -                -

Table 2: Resampling Factors For DLP Projection Display

In developing resampling filters for DLP display systems, a broad range of filter design algorithms was examined, including nearest neighbor, linear interpolation, cubic splines, and custom-designed FIR filters [6, 7]. Algorithms were evaluated based on performance and implementation complexity. Performance evaluation included characterization of resampling artifacts such as aliasing and blurriness on the DLP display. The result is a filter design methodology producing filters optimized for DLP display that can be implemented within the constraints of DLP system architectures (the Video Processing Systems section describes the system architecture implementation).

Vertical resampling filters are realized by convolving the input video lines with a discrete filter kernel. Filter kernel coefficients are optimized for performance and implementation cost, as noted above. Figure 4 shows the discrete convolution process used in 3:4 vertical resampling: output lines o0-o3 are created by convolving the input lines i0-i6 with a multi-tap filter kernel, so that each output line is a weighted sum of input lines, the weights corresponding to samples of the filter kernel. A similar approach is used for horizontal resampling.

Figure 4: Discrete Convolution Process Used For 3:4 Vertical Resampling

Frequency responses for example cases (3:4 vertical and 9:10 horizontal resampling filters) are shown in Figure 5, which compares the optimized filter frequency response to the responses of both the ideal and bi-linear filters. It can be seen that the optimized filter results in less aliasing and blurriness in the scaled image than the bi-linear filter.

Figure 5: Frequency Response of Resampling Filters
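An M:L vertical resample of the kind shown in Figure 4 can be sketched as follows. Note the kernel here is a simple bi-linear one, used as a stand-in for the paper's custom multi-tap FIR kernels; only the M:L sample-rate structure is taken from the text.

```python
import numpy as np

def resample_lines(lines, m, l):
    """M:L vertical resampling: every M input lines become L output lines.

    Each output line is a weighted sum of input lines.  Here the weights
    come from a bi-linear kernel; the paper instead uses multi-tap FIR
    kernels optimized for the DMD display.
    """
    lines = np.asarray(lines, dtype=np.float64)
    n_out = len(lines) * l // m
    out = np.empty((n_out,) + lines.shape[1:])
    for o in range(n_out):
        pos = o * m / l                       # output line position in input space
        i0 = min(int(pos), len(lines) - 1)    # line above
        i1 = min(i0 + 1, len(lines) - 1)      # line below (clamped at edge)
        frac = pos - i0
        out[o] = (1.0 - frac) * lines[i0] + frac * lines[i1]
    return out
```

For 3:4 vertical resampling (m=3, l=4) the weight pattern repeats every four output lines, which is exactly the polyphase structure a hardware kernel table would store.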

VIDEO ENHANCEMENT

Enhancement algorithms are used to increase the subjective display quality. These algorithms include luminance and chrominance sharpening, noise reduction, dynamic range expansion, and color correction, as well as the more traditional picture controls such as brightness, contrast, color saturation, hue, and color temperature selection. Luminance and chrominance sharpening algorithms are used to enhance mid-to-high image frequencies. Figure 6 shows an implementation of luminance sharpening. The sharpness filter is realized using one- or two-dimensional filtering, the filters being bandpass filters, highpass filters, or combinations thereof. Coring is applied to the filtered sharpness component, Y_SF, in order to reject signal variations attributed to noise; the cored component is then scaled by a gain and limited before being added to the luminance signal.

Figure 6: Luminance Enhancement Implementation

Figures 7 and 8 show the effect of luminance and chrominance sharpening for a 1-D linear edge transition. In Figure 7, the 1-D luminance signal is enhanced by peaking the signal at the edge transition points. Peaking the signal increases the apparent contrast of the edge, thereby creating the effect of a higher-frequency transition. Luminance sharpening can also be implemented using 2-D filters, where enhancement takes place both horizontally and vertically. Figure 8 shows an example of chrominance sharpening enhancement.

Figure 7: Luminance Enhancement
Figure 8: Chrominance Enhancement
In this case, chrominance data, which has much lower bandwidth than luminance, is enhanced by increasing the slope of the edge transition without introducing the ringing effects used in luminance sharpening. Use of ringing effects to enhance the chrominance signal would lead to undesired color aberrations around edges. The use of luminance and chrominance sharpening, as well as other enhancement/picture control algorithms, greatly increases the subjective quality of the processed video, leading to more visually pleasing video imagery.
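A 1-D version of the Figure 6 luminance path can be sketched as follows. The highpass taps, coring threshold, gain, and limit values here are illustrative assumptions, not the production parameters.

```python
import numpy as np

def sharpen_luma(y, gain=0.5, core_th=2, limit=32):
    """Luminance sharpening with coring, after Figure 6.

    A highpass sharpness component Y_SF is cored (small values, assumed
    to be noise, are rejected), scaled by a gain, clipped to a limit,
    and added back to the input luminance.
    """
    y = np.asarray(y, dtype=np.float64)
    # simple 1-D highpass [-1, 2, -1] as the sharpness filter
    y_sf = 2 * y - np.roll(y, 1) - np.roll(y, -1)
    y_sf[0] = y_sf[-1] = 0.0                              # ignore wrap-around ends
    y_core = np.where(np.abs(y_sf) <= core_th, 0.0, y_sf)  # coring
    peak = np.clip(gain * y_core, -limit, limit)            # gain + limit
    return np.clip(y + peak, 0, 255)
```

On a flat-to-flat edge the output undershoots before and overshoots after the transition, which is the peaking effect seen in Figure 7.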

COLOR SPACE CONVERSION

Color conversion provides the ability to convert video data from luminance and color difference encoding to an RGB data format. The color conversion process uses a general 3x3 matrix multiplied by a 3x1 input vector, producing the 3x1 RGB resultant vector. The general nature of the 3x3 matrix multiply used for color conversion allows a wide range of video level mappings from the input luminance and color difference format. Through adjustment of the coefficient values, the input video levels can be mapped into the appropriate range of RGB values. Additionally, the 3x3 color conversion matrix can be merged with other matrix operators, providing implementations of the contrast, hue, and color saturation picture controls. The expanded color conversion matrix has the following form:

[R]   [CC1 CC2 CC3] [CV1 CV2 CV3] [CT  0   0 ] [1    0      0   ] [ Y ]
[G] = [CC4 CC5 CC6] [CV4 CV5 CV6] [ 0  SR  0 ] [0   cos θ  sin θ] [R-Y]
[B]   [CC7 CC8 CC9] [CV7 CV8 CV9] [ 0  0   SB] [0  -sin θ  cos θ] [B-Y]
       Color          Color        Contrast     Hue Control
       Correction     Conversion   and Color
                                   Saturation

The color correction 3x3 matrix can be used for complete adaptation of the color gamut to the required color reproduction, or for selection of the white color temperature using only the diagonal elements. Following the matrix multiplication, the output RGB values must be rounded to the required bit resolution and limited to match the output dynamic range. The dynamic range limitation is necessary because the input luminance and color difference color space can express values that are not realizable RGB values.

GAMMA PROCESSING

Video signals typically have gamma correction applied to them to compensate for the nonlinear signal-to-light characteristic of the CRT receiver. Gamma correction provides the additional benefit of an increased signal-to-noise ratio for low-level signals during analog transmission. Gamma correction, applied at the camera, maps linear light intensity to a non-linear voltage signal.
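The 3x3 color conversion stage described above can be sketched as follows. The coefficients below are the standard Rec. 601 relationships between (Y, R-Y, B-Y) and RGB, chosen for illustration; in the actual system the matrix is programmable and may be pre-merged with the correction, contrast/saturation, and hue matrices as shown in the expanded form.

```python
import numpy as np

# 3x3 color conversion from (Y, R-Y, B-Y) to (R, G, B).
# Coefficients are the Rec. 601 relationships (illustrative only).
CSC = np.array([
    [1.0,  1.0,     0.0   ],   # R = Y + (R-Y)
    [1.0, -0.5094, -0.1942],   # G = Y - 0.5094(R-Y) - 0.1942(B-Y)
    [1.0,  0.0,     1.0   ],   # B = Y + (B-Y)
])

def to_rgb(y, r_y, b_y):
    """Apply the 3x3 matrix to one (Y, R-Y, B-Y) vector, then limit the
    result to the output dynamic range (here 0..255)."""
    rgb = CSC @ np.array([y, r_y, b_y], dtype=np.float64)
    return np.clip(rgb, 0.0, 255.0)
```

The final clip is the dynamic range limitation noted in the text: some (Y, R-Y, B-Y) triples map outside the realizable RGB cube.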
The transfer function of a CRT display system inverts the gamma correction, resulting in a linear output response. DMD displays, unlike CRT displays, have a linear signal-to-light transfer function. The linear response of the DMD requires that gamma correction be removed from the input signal before display. The inverse gamma response, also known as degamma, is realized in the DLP display video processing. The degamma response (the Rec. 709 inverse characteristic) is given by the following equation and is shown in Figure 9:

R = R'709 / 4.5                            for R'709 <= 0.081
R = ((R'709 + 0.099) / 1.099)^(1/0.45)     for R'709 > 0.081

The degamma response is applied to the input signal using a RAM-based lookup table (LUT), with individual LUTs for the red, green, and blue channels. The degamma response function is mapped into the RAM LUT using the desired number of output bits. The RAM-based LUT structure provides additional data mapping benefits by performing brightness and contrast picture control adjustments as well as color temperature adjustment. The brightness control is accomplished through the addition of an offset factor to the degamma function. The contrast picture control modifies the LUTs using a scale factor, and color temperature adjustment is implemented using individual scale factor adjustments for each of the R, G, and B LUTs. Additionally, the flexibility of the LUTs can be used to perform non-linear mappings of the video: the mid-range values of the image can be stretched to provide higher detail contrast using S-curve mapping functions, with the S-curve mapping function and the degamma transfer function combined into a composite transfer function.

Figure 9: Degamma Response
Figure 10: 7-bit Degamma Response

The precision of the LUT used in the degamma process can have a great impact on the quality of the displayed image. As the output bit resolution is decreased, the low intensity portion of the degamma function shows increasing areas of poor quantization, which results in the appearance of objectionable contours in the low intensity portions of the image. Figure 9 shows the degamma response curve, while Figure 10 shows the degamma curve using 7 bits of output resolution. In Figure 10, only the first 128 input code words are shown, clearly showing the poor quantization at lower intensity levels. The contouring effect could be reduced through increased degamma output resolution, but such a solution can add a great deal of cost to the display product. An alternate solution is error diffusion. Error diffusion replaces the low frequency contour areas of the image with a higher frequency mask signal, which masks the contour effects from the observer while maintaining the overall appearance of a continuous signal. Error diffusion distributes the mask signal to neighboring pixels through a diffusion filter.
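The degamma equation above, tabulated into a RAM-style LUT with the brightness and contrast controls folded in as an offset and a scale, can be sketched as follows. The 8-bit input width and normalized offset/scale conventions are assumptions for the sketch.

```python
def build_degamma_lut(out_bits=10, contrast=1.0, brightness=0.0):
    """Build one 256-entry degamma LUT (Rec. 709 inverse transfer).

    contrast scales and brightness offsets the normalized output before
    quantization, mirroring how the picture controls fold into the RAM
    LUT.  A low out_bits (e.g. 7) reproduces the coarse low-intensity
    quantization that motivates error diffusion.
    """
    max_code = (1 << out_bits) - 1
    lut = []
    for code in range(256):
        rp = code / 255.0                      # normalized gamma signal R'
        if rp <= 0.081:
            r = rp / 4.5
        else:
            r = ((rp + 0.099) / 1.099) ** (1.0 / 0.45)
        r = min(max(contrast * r + brightness, 0.0), 1.0)
        lut.append(round(r * max_code))
    return lut
```

Separate R, G, and B tables with different scale factors would give the per-channel color temperature adjustment described in the text.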
The mask signal is based on the difference between the ideal signal and the degamma mapping table output. Figure shows a functional block diagram of the error diffusion process. The input signal is modified by the mask signal from neighboring pixels. The displayable bit resolution of the modified signal is passed on for display. The remaining signal is passed on to neighboring pixels. This error diffusion process is recursively repeated for the entire image frame. The process of passing the process error to neighboring pixels is controlled by the diffusion filter. The diffusion filter can be designed to diffuse the error signal in many ways. Figure 2 shows an example diffusion filter. In this example, the error is diffused from four neighboring pixels to the current sample. A

weighted combination of the error signals is summed into the current pixel's intensity.

[Figure 11: Error Diffusion Process]  [Figure 12: Error Diffusion Filter Example — the error signals from four neighboring pixels, each weighted by 1/4, are summed into the current sample; the process is repeated for the G and B signals]

VIDEO PROCESSING SYSTEMS

An example DLP video processing system architecture is shown in Figure 13. The complete system consists of front-end signal decoding, an SVP2 video processor, a programmable timing controller, field memories, a video processing ASIC (Data Path ASIC), a DMD timing FPGA, and the DMD display. The video processing algorithms described in this paper are implemented using the second generation Scanline Video Processor (SVP2) [8] and the Data Path ASIC, both developed at Texas Instruments.

[Figure 13: Video Processing Architecture for DMD Display System — composite and component video undergo signal decoding (A/D conversion, Y/C separation, chroma demodulation, ACC/ACK) and are processed by the SVP2 (progressive scanning, resampling, picture enhancements) with field memories for vertical expansion and proscan; the Data Path ASIC performs color space conversion, degamma, error diffusion, picture controls, and OSD for the video, computer, and RGB inputs; an FPGA provides PWM control and reformatting for the DMD display]

The SVP2 is a fully programmable video processor capable of performing 5.7 billion 8-bit additions per second. The SVP2 uses a SIMD (Single Instruction, Multiple Data) architecture, comprised of a one-dimensional array of 1-bit Processing Elements (PEs). Each PE, corresponding to one pixel on a line of video data, consists of a 1-bit ALU, two Register Files (RF0 and RF1), four working registers, and a Left/Right communication bus. Data is serially loaded into the PEs using the 48-bit wide Data Input Register (DIR). Processed data is transferred to the 32-bit wide Data Output Register (DOR) and output in a word-serial fashion. Independent clocks control the concurrent operations of data input by the DIR, processing by the PEs, and output by the DOR.

The Data Path ASIC (DPA) has been designed specifically to support the DLP architecture. The DPA supports video signal inputs and an on-screen display (OSD) signal, and its output provides an RGB pixel databus for interface to the data reformatter. The DPA provides a general 3x3 matrix multiplier for color space conversion; RGB LUTs for gamma processing; contrast, brightness, and color temperature picture controls; error diffusion processing; an auxiliary data channel for picture-in-picture (PIP), picture-out-of-picture (POP), and computer graphics inputs; and OSD injection and control.

The video processing technology described in this paper is used in various forms in various DLP systems. For example, a DLP system meant primarily for computer signal display may not require features (e.g. resampling modes) specific to television. Various DLP systems incorporating this video processing technology to varying degrees are being developed for a wide range of applications including conference room projectors, home theater, standard television, and institutional projectors. Table 3 lists some recent forums where DLP based projection displays incorporating our video processing technology have been demonstrated.
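The DPA's 3x3 matrix multiplier for color space conversion can be sketched as follows. The BT.601 YCbCr-to-RGB coefficients below are an assumed example for illustration; the actual DPA matrix is programmable and its coefficients are not specified here.

```python
# Sketch of a 3x3 color space conversion of the kind the DPA's matrix
# multiplier performs. Coefficients are the common BT.601 YCbCr-to-RGB
# values (an assumed example; the DPA matrix is programmable).

CSC = [
    [1.000,  0.000,  1.402],   # R = Y                  + 1.402*(Cr-128)
    [1.000, -0.344, -0.714],   # G = Y - 0.344*(Cb-128) - 0.714*(Cr-128)
    [1.000,  1.772,  0.000],   # B = Y + 1.772*(Cb-128)
]

def ycbcr_to_rgb(y, cb, cr):
    """Apply the 3x3 matrix to one pixel and clamp to the 8-bit range."""
    v = (y, cb - 128.0, cr - 128.0)   # chroma components are offset-binary
    return tuple(
        min(255, max(0, round(sum(m * c for m, c in zip(row, v)))))
        for row in CSC
    )

print(ycbcr_to_rgb(128, 128, 128))  # mid-gray in, mid-gray out: (128, 128, 128)
```

In hardware this is three multiply-accumulate chains per pixel; the same structure handles RGB-to-YCbCr or any other linear color space mapping by loading different coefficients.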
Display Application        Collaborations             Demonstrations
Conference Room Projector  nView / TI                 INFOCOM 1995 (Dallas); COMDEX 1995 (Las Vegas); European Media Day 1995 (Brussels); SATIS 1995 (Paris)
Home Theater               Runco / TI; Vidikron / TI  CEDIA 1995 (Dallas); European Media Day 1995 (Brussels)
Standard Television        Nokia / TI                 IFA 1995 (Berlin); JES 1995 (Osaka); COMDEX 1995 (Las Vegas); CES 1996 (Las Vegas)
Institutional Projector    TI                         JES 1995 (Osaka)

Table 3: DLP Video Processing Demonstrations

SUMMARY

DLP projection displays have been demonstrated for use in applications ranging from standard television to institutional projection systems. Video processing technology has been developed to address video signal conversion and enhancement for DLP displays. These digital video processing solutions have been developed to maintain the superior quality of the DMD-based display.

REFERENCES

[1] V. Markandey, T. Clatanoff, R. Gove, K. Ohara, Motion Adaptive Deinterlacer for DMD (Digital Micromirror Device) Based Digital Television, IEEE Transactions on Consumer Electronics, Vol. 40, No. 3, August 1994.
[2] M. Karlsson, et al., Evaluation of Scanning Rate Up Conversion Algorithms; Subjective Testing of Interlaced to Progressive Conversion, IEEE Transactions on Consumer Electronics, Vol. 38, No. 3, August 1992.
[3] M. Yugami, K. Ohara, A. Takeda, EDTV with Scan-Line Video Processor, IEEE Transactions on Consumer Electronics, Vol. 38, No. 3, August 1992.
[4] S. Naimpally, et al., Integrated Digital IDTV Receiver with Features, IEEE Transactions on Consumer Electronics, Vol. 34, No. 3, August 1988.
[5] T. Doyle, M. Looymans, Progressive Scan Conversion Using Edge Information, Signal Processing of HDTV, II, Proc. of the Third Intl. Workshop on HDTV, August 1989.
[6] R. Crochiere, L. Rabiner, Multirate Digital Signal Processing, Prentice-Hall, 1983.
[7] G. Wolberg, Digital Image Warping, IEEE Computer Society Press, 1990.
[8] K. Nakayama, et al., EDTV-II Decoder by SVP2 (the 2nd Generation Scan-line Video Processor), IEEE Transactions on Consumer Electronics, Vol. 41, No. 3, August 1995.