Color correction in 3D environments

Nicholas Blackhawk

Abstract

In 3D display technologies, color quality is often a limiting factor, as reviewers frequently note. Depending on the type of display, either professional or automated calibration may be required. This type of calibration is a global adjustment, and does not address localized color distortion. Our method measures distortion in each color channel, for each quadrant of the screen, using a camera for quick feedback. The data is then processed into a GLSL ES (OpenGL Shading Language for Embedded Systems) fragment shader, which interpolates corrective values for each pixel in post-processing. Our experimental process resulted in a color correction that could be applied to a stereoscopic 3D render without extensive modification of the original render pipeline that generated it.

1. Introduction

Accurate color values are important for any display, especially in stereoscopic 3D display modes, which can often dim colors from their appropriate 2D display values [1]. In addition, as noted by Tseng, Lee and Shie [3], some LCD displays, including 3D LCDs, may have physical imperfections such as back-light weakening, or Mura effects, which affect their output colors. Hardware-related issues like these are often dealt with using expensive colorimeter packages, which re-tune the graphics card's color look-up table values in the process of generating a profile for that particular display. The adjustment of the look-up table can certainly handle whole-screen color distortion, if the calibration goes well, but it cannot handle localized effects, such as back-light weakening in portions of a display. Other techniques are needed to fix local distortion.

To that end, we apply a method similar to Scaramuzza and Tsai [4] in section 2 to characterize distortion on a 3D display. We then apply the distortion data to the construction of a GLSL ES shader in section 3. Finally, in section 4, we apply the shader to a real-time 3D render in post-processing.

2. Display Distortion

For ease of expansion, we began work with a standard-quality web-cam to capture image data from the display. Our camera was always a set distance from the display, and we controlled for ambient light by using only diffuse overhead lighting. Blackout shades kept extra light from adjacent rooms to a minimum, and we took all samples under the same conditions to prevent stray light from shifting our data.

Given our setup conditions, we displayed a series of 7 frames, consisting of no value (black), full value, and half value in each of the color channels. These frames were displayed one at a time and captured using the web-cam. The corners of each of the displayed frames were then identified in the images, and their color values were recorded. From there we did additional sampling across the image, recording one value for each of the quadrants we divided the image into. Increasing the number of sampled points, or quadrants, improves the quality of the shader, but at a high cost, due to the number of color values that must be imported. The sampled points consist of the values of each of the color channels, at half and full values, in the frames. This gives us a representative of the color output per channel at those points, which we can then compare to the expected values. The difference between the output of a given color channel and the expected channel value is henceforth referred to as the distortion value for that channel. Obviously this value can be divided into the different sources of distortion, but we correct for all of it in a given quadrant at once.
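To make the measurement step concrete, the following minimal TypeScript sketch shows one way the per-quadrant distortion values could be computed from a single captured calibration frame. The helper name sampleQuadrants and the data layout are our own illustrative assumptions, not the authors' implementation, and the captured image is assumed to have already been cropped and perspective-corrected to screen coordinates using the recorded corners.

```typescript
// Illustrative sketch; names and structure are assumptions, not the
// authors' exact code.
type RGB = [number, number, number];

interface QuadrantSample {
  u: number;                 // quadrant center, normalized to [0, 1]
  v: number;
  distortion: RGB;           // expected minus measured, per channel
}

// `frame` is assumed to be ImageData already cropped and warped so that
// it covers exactly the displayed calibration frame.
function sampleQuadrants(
  frame: ImageData,
  gridW: number,
  gridH: number,
  expected: RGB              // e.g. [255, 0, 0] for the full-red frame
): QuadrantSample[] {
  const samples: QuadrantSample[] = [];
  for (let qy = 0; qy < gridH; qy++) {
    for (let qx = 0; qx < gridW; qx++) {
      // Read the pixel at the center of this quadrant (RGBA, 4 bytes each).
      const px = Math.floor(((qx + 0.5) / gridW) * frame.width);
      const py = Math.floor(((qy + 0.5) / gridH) * frame.height);
      const i = (py * frame.width + px) * 4;
      samples.push({
        u: (qx + 0.5) / gridW,
        v: (qy + 0.5) / gridH,
        // The distortion value: expected channel value minus measured output.
        distortion: [
          expected[0] - frame.data[i],
          expected[1] - frame.data[i + 1],
          expected[2] - frame.data[i + 2],
        ],
      });
    }
  }
  return samples;
}
```

In practice this would be run once per calibration frame (full and half value, per channel), accumulating one distortion value per channel per quadrant.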
3. Shader Construction

Given the relative coordinates of the sampled points on the 2D image of the screen, and the distortion value calculated in each channel at those points, we construct a shader. A GLSL ES shader is a program which runs on the graphics card of a machine running an embedded system that supports the language. In our case, the platform is WebGL, the implementation of OpenGL ES for web browser contexts. The shader is constructed with a series of static values: the per-channel distortion values and the coordinates obtained in section 2. The specific implementation of storage for this data is not important, so long as the values can be applied appropriately to the output of the shader.

The shader handles each pixel in the Render Buffer, or Frame Buffer Object, which is a 2D context. A fragment shader can be applied at a number of places in the pipeline, but we chose to apply ours at the end of the render pipeline, so that we could reference the output color values of the frame to be rendered and adjust them according to the distortion value per channel. For each pixel, the output value of each channel is calculated as the sum of that channel's corrective values at the sampling points adjacent to the pixel, each weighted by the pixel's proximity ratio to that point (see eq. 1).

The corrective value for our first attempt is a simple additive value based on a line of best fit. Ideally it would be interpolated as far as the shader language allows, but the current technique sufficed for demonstration. Just like any other post-processing shader, the output of each color channel is written to its respective pixel, and the pixel is written out as a fragment into the output buffer of the render, which is then drawn to the screen.
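Eq. 1 itself is not reproduced in this text. Under the additive model described above, one plausible form is c_out(p) = c_in(p) + sum_i w_i(p) * d_i, where d_i is the corrective value at adjacent sampling point i and w_i(p) is the pixel's proximity ratio to that point. The sketch below shows a minimal fragment shader along those lines, written as a WebGL-style TypeScript source string; the 2x2 grid of sampling points, the uniform names, and the bilinear weighting are our assumptions, not the paper's exact shader.

```typescript
// Sketch of the corrective fragment shader source (GLSL ES 1.00, which is
// what WebGL compiles). Uniform names and the 2x2 correction grid are
// illustrative assumptions.
const correctiveFragSrc = `
precision mediump float;

uniform sampler2D uScene;     // the rendered frame, bound from the FBO
uniform vec3 uCorrective[4];  // corrective value per channel at the four
                              // sampling points: BL, BR, TL, TR
varying vec2 vUv;             // pixel position, normalized to [0, 1]

void main() {
  vec4 color = texture2D(uScene, vUv);

  // Proximity-weighted sum of the adjacent corrective values: a bilinear
  // mix, one plausible reading of the weighting described in the text.
  vec3 corrective = mix(
      mix(uCorrective[0], uCorrective[1], vUv.x),
      mix(uCorrective[2], uCorrective[3], vUv.x),
      vUv.y);

  // Simple additive correction per channel, clamped to displayable range.
  gl_FragColor = vec4(clamp(color.rgb + corrective, 0.0, 1.0), color.a);
}`;
```

Baking the corrective values in as uniforms keeps the per-pixel work to a handful of mixes and one texture fetch, which matters because the shader runs over every pixel of every frame.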
4. Post-processing

As stated in section 3, our shader works with a Frame Buffer Object to pass in the computed color values for each pixel. This is a fairly common technique for video operations and non-stereo 3D scenes. Our technique applies post-processing to two perspective cameras, rendering both with the correction required. This does not appear to have been achieved in any prior open-source WebGL or OpenGL project.

The process is similar to other post-processing pipelines, except that it is run twice. The first portion is a method from stereoscopic rendering: we set up the scene and then place two controllable cameras, pointed at the scene, with a set distance between them. The distance between them is, of course, the stereo-separation factor required in other stereoscopic rendering processes. The cameras are each set to render to different Frame Buffer Objects, which are then set up to receive a pass of the shader. The output of the shader is another Frame Buffer Object, which can now be drawn to the screen. But before it can be drawn, the view-port of the renderer must be set to a different portion of the rendering context, or the buffers will overwrite each other. Each camera can then be updated as normal, including movement operations and other animations. A sketch of this double pass appears below.
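The following TypeScript sketch illustrates the double pass in raw WebGL. The injected helpers renderScene and drawFullscreenQuad stand in for engine-specific code, and the side-by-side viewport split is one possible layout; both are our assumptions rather than details from the paper.

```typescript
// Dependencies are injected so the sketch stays self-contained; in a real
// engine these would come from the scene graph and shader setup code.
interface StereoEye {
  fbo: WebGLFramebuffer;            // this camera's render target
  colorTex: WebGLTexture;           // the FBO's color attachment
  renderScene: () => void;          // draws the scene from this camera
}

function renderStereoFrame(
  gl: WebGLRenderingContext,
  eyes: [StereoEye, StereoEye],     // left and right cameras
  correctiveProgram: WebGLProgram,  // compiled from the shader in section 3
  drawFullscreenQuad: (tex: WebGLTexture) => void,
  width: number,
  height: number
): void {
  const halfW = width / 2;

  // Pass 1: render each camera into its own Frame Buffer Object.
  for (const eye of eyes) {
    gl.bindFramebuffer(gl.FRAMEBUFFER, eye.fbo);
    gl.viewport(0, 0, halfW, height);
    eye.renderScene();
  }

  // Pass 2: run the corrective shader over each buffer, drawing each result
  // to a different viewport of the default framebuffer (the screen) so the
  // two outputs do not overwrite each other.
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  gl.useProgram(correctiveProgram);
  eyes.forEach((eye, i) => {
    gl.viewport(i * halfW, 0, halfW, height);
    drawFullscreenQuad(eye.colorTex);
  });
}
```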
In this way, post-processing allows us to avoid injecting a shader into a render pipeline, which would require chaining shaders, passing the output of one fragment shader to the next. The shader also does not need to bind to objects within the scene, replacing a calculation on each object with one single calculation at the end. These factors simplify the construction of both the shader, since it operates in a 2D context, and the scene. Obviously this proof of concept has opportunities for optimization.

5. Results and Future Work

The result of the correction on the render is favorable. The corrected output is at least as good as, and often better than, the unadjusted color quality, though more data is required, both for sampling and for comparison against unadjusted values. The frame-rate of the rendering process appears to be directly related to the number of sampling points: it dropped from around 30 to 15 frames per second or less when the number of sampling points doubled, and from around 45 to 30 when the number of color frames was increased to include the half values for each channel. This appears to be related to the amount of memory allotted to each shader for constant values, and to compiled program size. If run on a more powerful graphics card with more memory, we predict that shader performance would improve beyond these approximate markers. Additionally, optimizing the interpolation process would likely give us better corrective factors to apply in the shader. Finally, the process for the shader itself can be optimized to handle more data, or to use the data we have more effectively.

References

[1] Digital Projection International. How to Calibrate a Display for 3D Viewing. <http://www.cepro.com/article/how_to_calibrate_a_display_for_3d_viewing/>

[2] Gerhardt, Jérémie and Jean-Baptiste Thomas. Toward an Automatic Color Calibration for 3D Displays. Web, 18 Sept. 2012. <http://rivervalley.tv/toward-an-automatic-color-calibration-for-3d-displays/>

[3] Tseng, Din-Chang, You-Ching Lee and Cheng-En Shie. LCD Mura Detection with Multi-Image Accumulation and Multi-Resolution Background Subtraction. International Journal of Innovative Computing, Information and Control, Volume 8, Number 7, July 2012. <http://www.ijicic.org/ijicic-11-04025.pdf>

[4] Scaramuzza, Mary. 3D Telepresence. Unpublished, St. Olaf College, 17 Sept. 2012.