Color correction in 3D environments

Nicholas Blackhawk

Abstract

In 3D display technologies, color quality is often a deciding factor for reviewers. Depending on the type of display, either professional or automated calibration may be required. This type of calibration is a global adjustment, and localized color distortion is not addressed. Our method measures distortion in each color channel, for each quadrant of the screen, using a camera for quick feedback. The data is then processed into a GLSL ES (OpenGL Shading Language for Embedded Systems) fragment shader, which interpolates a corrective value for each pixel in post-processing. Our experimental process resulted in a color correction that could be applied to a stereoscopic 3D render without extensive modification of the original render pipeline that generated it.

1. Introduction

Accurate color values are important for any display, and especially in stereoscopic 3D display modes, which can dim colors well below their appropriate 2D display values[1]. In addition, as noted by Tseng, Lee and Shie[3], some LCD displays, including 3D LCDs, may have physical imperfections such as back-light weakening, or Mura effects, which affect their output colors. Hardware issues like these are often dealt with using expensive colorimeter packages, which re-tune the graphics card's color look-up table values in the process of generating a profile for that particular display. The adjustment of the look-up table can certainly handle whole-screen color distortion, if the calibration goes well, but it cannot handle localized effects, such as back-light weakening in portions of a display. Other techniques are needed to fix local distortion. To that end, we apply a method similar to Scaramuzza and Tsai[4] in section 2 to characterize distortion on a 3D display. We then apply the distortion data to the construction of a GLSL ES shader in section 3. Finally, in section 4, we apply the shader to a real-time 3D render in post-processing.

2. Display Distortion

For ease of expansion, we began work with a standard-quality web-cam to capture image data from the display. The camera was always a set distance from the display, and we controlled for light by using only diffuse overhead lighting. Blackout shades kept extra light from the adjoining room to a minimum, and we took all samples under the same conditions to prevent additional light from shifting our data.

Given this setup, we displayed a series of 7 frames: no value (black), plus full value and half value in each of the color channels. These frames were displayed one at a time and captured using the web-cam. The corners of each displayed frame were identified in the captured images, and their color values were recorded. From there we sampled additional points across the image, recording one value for each of the quadrants we divided the image into. Increasing the number of sampled points, or quadrants, improves the quality of the shader, but at a high cost due to the number of color values that must be imported. The sampled points consist of the values of each color channel, at half and full values, in the captured frames. This gives us a representative measure of the color output per channel at those points, which we can then compare to the expected values. The difference between the output of a given color channel and the expected channel value is henceforth referred to as the distortion value for that channel. The distortion value could be divided into its different sources, but we correct for all of it in a given quadrant at once.
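A minimal sketch of this sampling step is given below, assuming a browser context where the captured web-cam frame has been drawn to a canvas and read back as ImageData. The grid size, function name, and data layout are our own illustration; the paper does not specify them.

// Average one color channel over each quadrant of a captured calibration
// frame, and pair it with the value the frame was displayed at.
interface QuadrantSample {
  x: number;         // quadrant center, normalized to [0, 1]
  y: number;
  measured: number;  // average channel value observed in this quadrant [0, 1]
  expected: number;  // value the calibration frame was displayed at
}

// channel: 0 = R, 1 = G, 2 = B; expected: 0.5 or 1.0 per the calibration frames
function sampleQuadrants(
  frame: ImageData, grid: number, channel: number, expected: number
): QuadrantSample[] {
  const samples: QuadrantSample[] = [];
  const qw = Math.floor(frame.width / grid);
  const qh = Math.floor(frame.height / grid);
  for (let qy = 0; qy < grid; qy++) {
    for (let qx = 0; qx < grid; qx++) {
      let sum = 0;
      let count = 0;
      for (let y = qy * qh; y < (qy + 1) * qh; y++) {
        for (let x = qx * qw; x < (qx + 1) * qw; x++) {
          // ImageData stores pixels as RGBA bytes.
          sum += frame.data[(y * frame.width + x) * 4 + channel];
          count++;
        }
      }
      samples.push({
        x: (qx + 0.5) / grid,
        y: (qy + 0.5) / grid,
        measured: sum / count / 255,
        expected,
      });
    }
  }
  return samples;
}

Under the additive scheme the paper describes in section 3, the distortion value at each sample point would then be the difference between expected and measured values.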

3. Shader Construction

Given the relative coordinates of the sampled points on the 2D image of the screen, and the distortion value calculated in each channel at those points, we construct a shader. A GLSL ES shader is a program which runs on the graphics card of a machine running an embedded system that supports the language. In our case, the system is WebGL, the implementation of OpenGL for web browser contexts. The shader is constructed with a series of static values for each of the channels and the coordinates obtained in section 2. The specific implementation of storage for this data is not important, so long as the values can be applied appropriately to the output of the shader.

The shader handles each pixel in the Render Buffer, or Frame Buffer Object, which is a 2D context. A fragment shader can be applied at a number of places in the pipeline, but we chose to apply ours at the end of the render pipeline, so that we could reference the output color values of the frame to be rendered and adjust them according to the distortion value per channel. For each pixel, the output value of each channel is calculated as the sum of the ratios of that channel's corrective value at each of the sampling points adjacent to the pixel in question (see eq. 1).

The corrective value in our first attempt is a simple additive value based on a line of best fit. Ideally it would be interpolated as far as the shader language allows, but the current technique sufficed for demonstration. Just like any other post-processing shader, the output of each color channel is written to its respective pixel, and the pixel is written out as a fragment into the output buffer of the render, which is then drawn to screen.
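Since eq. 1 is not reproduced here, the following is only a plausible reconstruction of the sentence above, assuming normalized interpolation weights $w_i$ over the sample points $\mathcal{N}(p)$ adjacent to pixel $p$:

\[ C_{\text{out}}^{c}(p) \;=\; C_{\text{in}}^{c}(p) \;+\; \sum_{i \in \mathcal{N}(p)} w_i(p)\, d_i^{c}, \qquad \sum_{i \in \mathcal{N}(p)} w_i(p) = 1 \]

where $d_i^{c}$ is the corrective (distortion) value of channel $c$ at sample point $i$.

A minimal fragment-shader sketch of this correction follows. Note a deliberate substitution: the paper compiles the sampled values into the shader as static constants, whereas this sketch stores one corrective texel per quadrant in a small texture and lets linear filtering interpolate between adjacent sample points. All uniform and variable names are illustrative.

// GLSL ES 1.00 (WebGL 1) corrective pass, embedded as a shader-source string.
const correctionFragmentShader = `
  precision mediump float;
  uniform sampler2D uScene;       // the rendered frame (FBO color attachment)
  uniform sampler2D uCorrection;  // per-quadrant corrective values, remapped
                                  // from [-0.5, 0.5] into [0, 1] on upload
  varying vec2 vUv;
  void main() {
    vec3 color = texture2D(uScene, vUv).rgb;
    // With GL_LINEAR filtering, this fetch interpolates the corrective
    // values of the sample points adjacent to the pixel, as in eq. 1.
    vec3 corrective = texture2D(uCorrection, vUv).rgb - 0.5;
    gl_FragColor = vec4(clamp(color + corrective, 0.0, 1.0), 1.0);
  }
`;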

4. Post-processing

As stated in section 3, our shader works with a Frame Buffer Object to pass in the computed color values for each pixel. This is a fairly common technique for video operations and non-stereo 3D scenes. Our technique applies post-processing to two perspective cameras, rendering both with the required correction; this does not appear to have been achieved in any open source WebGL or OpenGL project prior. The process is similar to other post-processing pipelines, except that it is run twice.

The first portion is a method from stereoscopic rendering: we set up the scene and then place two controllable cameras, pointed at the scene, with a set distance between them. The distance between them is of course the stereo-separation factor required in other stereoscopic rendering processes. The cameras are each set to render to different Frame Buffer Objects, which are then set up to receive a pass of the shader. The output of the shader is another Frame Buffer Object, which can now be drawn to the screen. Before it can be drawn, however, the view-port of the renderer must be set to a different portion of the rendering context for each eye, or the buffers will overwrite each other. Each camera can then be updated as normal, including movement operations and other animations.

In this way, post-processing allows us to avoid injecting a shader into the render pipeline, which would require chaining shaders, passing the output of one fragment shader to the next. The shader also doesn't need to bind to objects within the scene, replacing a per-object calculation with one single calculation at the end. These factors simplify the construction of both the shader, since it operates in a 2D context, and the scene. Obviously this proof of concept still has opportunities for optimization.
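A condensed sketch of this two-camera flow is shown below. The helpers createSceneFBO, renderScene, and drawCorrectedQuad, and the Camera type, are hypothetical stand-ins for ordinary WebGL setup code, and for brevity the corrected result is drawn straight to the default framebuffer rather than to an intermediate FBO; only the overall flow (render each eye to its own FBO, correct it, draw it to half the viewport) follows the paper.

interface Camera { position: [number, number, number]; }

// Hypothetical helpers standing in for routine WebGL boilerplate.
declare function createSceneFBO(gl: WebGLRenderingContext): WebGLFramebuffer;
declare function renderScene(
  gl: WebGLRenderingContext, cam: Camera, fbo: WebGLFramebuffer): void;
declare function drawCorrectedQuad(
  gl: WebGLRenderingContext, fbo: WebGLFramebuffer): void;

function renderStereoFrame(
  gl: WebGLRenderingContext, left: Camera, right: Camera, separation: number
): void {
  // Offset the two cameras by the stereo-separation factor.
  left.position[0] -= separation / 2;
  right.position[0] += separation / 2;

  const leftFBO = createSceneFBO(gl);
  const rightFBO = createSceneFBO(gl);

  // First pass: each camera renders the scene into its own Frame Buffer Object.
  renderScene(gl, left, leftFBO);
  renderScene(gl, right, rightFBO);

  // Second pass: run the corrective shader over each buffer, drawing each
  // eye into its own half of the viewport so the outputs don't overwrite
  // each other.
  const w = gl.drawingBufferWidth / 2;
  const h = gl.drawingBufferHeight;
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  gl.viewport(0, 0, w, h);
  drawCorrectedQuad(gl, leftFBO);
  gl.viewport(w, 0, w, h);
  drawCorrectedQuad(gl, rightFBO);
}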

5. Results and Future Work

The result of the correction on the render is favorable. The corrected output is at least as good as, and often better than, the unadjusted color quality, though more data is clearly required, both for sampling and for comparison against unadjusted values. The frame-rate of the rendering process appears to be directly related to the number of sampling points: it dropped from around 30 to 15 frames-per-second or less when the number of sampling points doubled, and from around 45 to 30 when the number of color frames was increased to include the half values for each channel. This appears to be related to the amount of memory allotted to each shader for constant values, and to compiled program size. On a more powerful graphics card with more memory, we predict that shader performance would improve beyond these approximate markers. Additionally, optimizing the interpolation process would likely yield better corrective factors to apply in the shader. Finally, the shader itself can be optimized to handle more data, or to use the data we have more effectively.

References

[1] Digital Projection International. How to Calibrate a Display for 3D Viewing. <http://www.cepro.com/article/how_to_calibrate_a_display_for_3d_viewing/>

[2] Gerhardt, Jérémie and Thomas, Jean-Baptiste. Toward an Automatic Color Calibration for 3D Displays. Web, 18 Sept. 2012. <http://rivervalley.tv/toward-an-automatic-color-calibration-for-3d-displays/>

[3] Din-Chang Tseng, You-Ching Lee and Cheng-En Shie. LCD Mura Detection with Multi-Image Accumulation and Multi-Resolution Background Subtraction. International Journal of Innovative Computing, Information and Control, Volume 8, Number 7, July 2012. <http://www.ijicic.org/ijicic-11-04025.pdf>

[4] Scaramuzza, Mary. 3D Telepresence. Unpublished, St. Olaf College, 17 Sept. 2012.