High Dynamic Range and other Fun Shader Tricks Simon Green
Demo Group Motto: "If you can't make it good, make it big. If you can't make it big, make it shiny."
Overview
The OpenGL vertex program and texture shader extensions enable hardware acceleration of effects that were previously only possible in offline rendering. Three case studies:
- High Dynamic Range Images
- Real-time Caustics
- Procedural Terrain
Case Study 1: High Dynamic Range
Summary: displaying HDR images in real time.
Definition of dynamic range: the ratio of the maximum intensity in an image to the minimum detectable intensity.
Most imagery used in computer graphics today is stored at 8 bits per component. Low dynamic range: 0 = black, 255 = white. Light in the real world is not constrained in this way! The dynamic range between bright sunshine and shadow can easily be 10,000 to 1.
Example images of the same scene at exposure times of 0.270 s, 0.133 s, 0.066 s, and 0.033 s.
What is High Dynamic Range?
The human visual system adapts automatically to changes in brightness. In photography, shutter speed and lens aperture are used to control the amount of light that reaches the film. HDR imagery attempts to capture the full dynamic range of light in real-world scenes. It measures radiance: the amount of energy per unit time per unit solid angle per unit area, W / (sr·m²). 8 bits is not enough!
Why Do We Need HDR?
- It effectively allows us to change the exposure after we've taken or rendered the picture
- Dynamic adaptation effects, e.g. moving from a bright outdoor environment to indoors
- Allows physically plausible image-based lighting; BRDFs may need high dynamic range
- Enables realistic optical effects: glows around bright light sources, more accurate motion blur
Creating HDR Images from Photographs
"Recovering High Dynamic Range Radiance Maps from Photographs", Debevec and Malik, Siggraph 1997. Using several images of the same scene taken at different exposures, the method:
- Calculates the non-linear response curve of the camera
- Recovers the actual radiance at each pixel
Environment maps can be captured by either photographing a mirrored sphere (a "light probe") or combining two or more 180-degree fisheye images.
Displaying HDR Images
To display an HDR image at a given exposure, we use the following equation: Z = f(E·t), where Z is the pixel value, E is the irradiance, t is the exposure time, and f is the camera response curve.
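The mapping above can be sketched on the CPU. Python here stands in for the shader math, and the simple gamma curve is an assumed stand-in for a real recovered camera response curve f:

```python
import numpy as np

# Assumed stand-in for the response curve f: a plain gamma curve. The real f
# is recovered per-camera by Debevec and Malik's method.
def display_hdr(E, t, gamma=2.2):
    """Map irradiance E and exposure time t to an 8-bit pixel: Z = f(E*t)."""
    x = np.clip(E * t, 0.0, 1.0)                       # total exposure, clamped
    return np.round(255.0 * x ** (1.0 / gamma)).astype(np.uint8)
```

Changing t darkens or brightens every pixel after the fact, which is exactly the "change the exposure later" ability HDR buys us.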
Displaying HDR Images using Graphics Hardware
Previous work: "Real-Time High Dynamic Range Texture Mapping", Cohen, Tchou, Hawkins, Debevec, Eurographics 2001. Split the HDR image into several 8-bit textures and display it by recombining them using multitexturing and register combiners, on NVIDIA TNT2 and above. This is hard because the combiners treat texture values as fixed-point numbers between 0 and 1; the largest number you can multiply by is 4. It also requires different combiner setups for different exposure ranges, so exposure can only be changed on a per-primitive basis.
Representing HDR Imagery in OpenGL
GeForce3/4 support a 16-bit texture format known as HILO, which stores two 16-bit components as (HI, LO, 1) and is filtered by hardware at 16-bit precision. The signed version is intended for storing high-precision normal maps, but we can also use this format to store high(er) dynamic range imagery: remap the floating-point HDR data to a gamma-encoded 16-bit fixed-point range [0, 65535]. Unfortunately there are only two components, so we need two HILO textures to store RGB.
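A sketch of that remapping in Python (the 2.2 gamma and the normalization by a chosen maximum radiance are illustrative assumptions, not part of the HILO format):

```python
import numpy as np

def encode_hilo(rgb, max_radiance, gamma=2.2):
    """Pack float RGB radiance into two 16-bit HILO pairs:
    texture 0 holds (R, G); texture 1 holds (B, unused)."""
    x = np.clip(rgb / max_radiance, 0.0, 1.0) ** (1.0 / gamma)  # gamma encode
    q = np.round(x * 65535.0).astype(np.uint16)                 # 16-bit fixed point
    tex0 = q[..., 0:2]                                          # HI = R, LO = G
    tex1 = np.stack([q[..., 2], np.zeros_like(q[..., 2])], axis=-1)
    return tex0, tex1
```

Decoding is the inverse: divide by 65535, raise to the gamma power, and scale by the maximum radiance.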
Displaying HDR Images using the OpenGL Texture Shader Extension To display the image, we need to multiply the HDR radiance values by the exposure factor, and then re-map them to the displayable [0,255] range This can be achieved using the GL_DOT_PRODUCT_TEXTURE_2D operation of the OpenGL texture shader extension Exposure is sent as texture coordinates, the dot product performs the multiply for both channels We create a 2D texture that maps the result back to displayable values
Displaying HDR Images using OpenGL Texture Shaders
NVParse code:
!!ts1.0
texture_cube_map();
dot_product_2d_1of2(tex0);
dot_product_2d_2of2(tex0);
Pseudo code:
0: hilo = texture_cube_map(hdr_texture, s0, t0, r0)
1: dot1 = s1*hi + t1*lo + r1*1.0  // = r_exposure*r + 0 + r_bias
2: dot2 = s2*hi + t2*lo + r2*1.0  // = 0 + g_exposure*g + g_bias
   color = texture_2d(lut_texture, dot1, dot2)
Displaying HDR Images using OpenGL Texture Shaders
Requires two passes to render RGB, using glColorMask to mask off color channels.
First pass renders R and G:
texcoord1 = (r_exposure, 0.0, r_bias)
texcoord2 = (0.0, g_exposure, g_bias)
Second pass renders B:
texcoord1 = (0.0, 0.0, 0.0)
texcoord2 = (b_exposure, 0.0, b_bias)
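A CPU sketch of what the two dot-product stages compute for one texel (Python as illustration; the clamping LUT below is an assumed example of the remapping texture, not the one used in the demo):

```python
def shade_texel(hi, lo, texcoord1, texcoord2, lut):
    """Each DOT_PRODUCT stage multiplies one channel by the exposure packed
    into the texture coordinates; the pair then indexes a 2D lookup table."""
    s1, t1, r1 = texcoord1
    s2, t2, r2 = texcoord2
    dot1 = s1 * hi + t1 * lo + r1 * 1.0
    dot2 = s2 * hi + t2 * lo + r2 * 1.0
    return lut(dot1, dot2)

# Assumed example LUT: clamp and quantize to the displayable range [0, 255].
clamp_lut = lambda u, v: (int(min(max(u, 0.0), 1.0) * 255),
                          int(min(max(v, 0.0), 1.0) * 255))
```

With texcoord1 = (r_exposure, 0, r_bias) the first dot product reduces to r_exposure*r + r_bias, i.e. a per-channel exposure multiply done entirely in the texture shader.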
The same image displayed at exposures of 0.25, 0.0625, and 0.015625.
HDR Effects HDR Fresnel Glow Automatic exposure Vignette
HDR Fresnel
Surfaces viewed more tangentially reflect more, and reflectivity can vary by a factor of 20 or more, so an HDR environment map produces more accurate results. Calculate per-vertex in a vertex program: approximate the Fresnel function as (1 - V·N)^p and send the exposure down as a texture coordinate.
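The per-vertex approximation, sketched in Python (the exponent p = 5 is a common choice assumed here, not specified by the slides):

```python
def fresnel_approx(cos_theta, p=5.0):
    """Approximate Fresnel reflectance as (1 - V.N)^p,
    where cos_theta = V.N for unit view vector V and normal N."""
    return (1.0 - cos_theta) ** p
```

Facing the viewer (V·N = 1) gives zero extra reflectance; at grazing angles (V·N near 0) it approaches full reflectance, which is where the HDR environment map matters most.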
Image-Space Glow
Also known as glare, specular bloom, or flare. Blur an image of the bright parts of the scene; we can use hardware mipmap generation and LOD bias to get box filtering. Ideally the convolution should be done with HDR values, and a real Gaussian blur would be smoother. Blend the result back on top of the original image; the glow reaches around the object.
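The pipeline above can be sketched in Python: bright-pass, repeated 2x2 box downsampling (the mipmap analogue), upsample, and additive blend. The threshold, level count, and nearest-neighbour upsampling are illustrative choices:

```python
import numpy as np

def box_down(img):
    """One mipmap step: a 2x2 box filter."""
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2] +
                   img[0::2, 1::2] + img[1::2, 1::2])

def glow(img, levels=2, threshold=0.8, gain=1.0):
    """Bright-pass, mipmap-style box blur, upsample, add back on top."""
    bright = np.where(img > threshold, img, 0.0)     # keep only bright parts
    small = bright
    for _ in range(levels):
        small = box_down(small)
    blurred = np.kron(small, np.ones((2 ** levels, 2 ** levels)))  # nearest upsample
    return img + gain * blurred
```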
Original image
Blurred version
Glow = original + blurred
Image-Based Lighting
Lighting synthetic objects with real light. An environment map represents all light arriving at a point, for each incoming direction. By convolving (blurring) an environment map with the diffuse reflection function (N·L) we can create a diffuse reflection map: indexed by surface normal N, it gives the sum of N·L-weighted light over all sources in the hemisphere. This is very slow to create, but the result is low frequency, so the cube map can be small (e.g. 32x32x6). HDRShop will do this for you.
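One texel of that diffuse map can be sketched as a cosine-weighted average over sampled incoming directions (Python as illustration; the normalization by total weight is an assumed convention):

```python
import numpy as np

def diffuse_texel(directions, radiance, N):
    """One texel of a diffuse reflection map: the N.L-weighted average of
    incoming radiance; lights below the horizon clamp to zero weight."""
    w = np.maximum(directions @ np.asarray(N), 0.0)
    return float((radiance * w).sum() / max(w.sum(), 1e-8))
```

Repeating this for every normal direction N over every environment-map sample is exactly why the convolution is slow to create offline.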
References
"Recovering High Dynamic Range Radiance Maps from Photographs", Debevec and Malik, Siggraph 1997
"Real-Time High Dynamic Range Texture Mapping", Cohen, Tchou, Hawkins, Debevec, Eurographics Rendering Workshop 2001
"Illumination and Reflection Maps: Simulated Objects in Simulated and Real Environments", Gene S. Miller and C. Robert Hoffman, Siggraph 1984 Course Notes for Advanced Computer Graphics Animation
"Real Pixels", Greg Ward, Graphics Gems II, pp. 80-83
http://www.debevec.org/
Case Study 2: Real-time Caustics
Summary: simulating refractive caustics in real time using OpenGL and vertex programs. Inspired by Jos Stam's work at Alias|Wavefront.
What are caustics? The light patterns seen on the bottom of swimming pools, caused by the focusing of reflected or refracted light. They are traditionally calculated offline using photon mapping etc., and usually approximated in real time using precalculated textures.
Step 1: Generate Water Surface
Drawn as a triangle mesh, displaced using 4 octaves of procedural noise; each octave translates at a speed proportional to its frequency. Calculated on the CPU.
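A CPU sketch of the displacement (sine products stand in for the slide's procedural noise here, since the actual noise basis isn't given):

```python
import math

def water_height(x, z, t, octaves=4):
    """Sum 4 octaves of a placeholder noise basis; each octave translates at
    a speed proportional to its frequency, so fine ripples drift fastest."""
    h, freq, amp = 0.0, 1.0, 1.0
    for _ in range(octaves):
        phase = t * freq                         # translation speed ~ frequency
        h += amp * math.sin(freq * x + phase) * math.sin(freq * z + phase)
        freq *= 2.0                              # double frequency per octave
        amp *= 0.5                               # halve amplitude per octave
    return h
```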
Step 2: Refract Light Ray
Using a vertex program:
- Calculate the light ray from the local light source to the surface vertex
- Calculate the refraction of the ray about the vertex normal
- Determine the intersection between the refracted ray and the floor plane: y = y0 + yd*t = 0, so t = -y0 / yd
- Set the vertex position to the intersection
This gives a refracted mesh on the bottom of the pool.
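The steps above can be sketched in Python; the refraction follows the standard Snell's-law vector form for unit vectors, and the floor-plane intersection is the t = -y0/yd formula from the slide:

```python
import math
import numpy as np

def refract(I, N, eta):
    """Refract unit incident direction I about unit normal N (eta = n1/n2)."""
    cos_i = -float(np.dot(I, N))
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None                              # total internal reflection
    return eta * I + (eta * cos_i - math.sqrt(k)) * N

def hit_floor(origin, direction):
    """Intersect a ray with the plane y = 0: t = -y0 / yd."""
    t = -origin[1] / direction[1]
    return origin + t * direction
```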
Step 3: Simulate Light Focusing
Use additive blending, but we want intensity to be inversely proportional to the area of the triangle: assuming the same amount of light hits each triangle on the surface, smaller triangles mean more focused, and therefore brighter, light. We could send all three triangle vertices to calculate the area, but that would be slow. Trick: use texture LOD as a measure of projected area.
Step 3: Simulate Light Focusing (cont.)
Create a texture with just two mipmap levels: 2x2 all black, 1x1 all white. Apply the texture to the refracted mesh, setting texture coordinates so that pixels map roughly to texels. With trilinear filtering, this produces shades of gray depending on how much the texture is minified. Unfortunately this is view dependent. Solution: render the caustics from above using an orthographic projection and copy to a texture.
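In Python, the effect of the two-level trick looks like this (the LOD formula is the usual log2 of texels covered per pixel, and treating the trilinear blend as a straight clamp is a simplification):

```python
import math

def caustic_intensity(texels_per_pixel):
    """Trilinear blend between mip level 0 (black) and level 1 (white):
    more minification -> higher LOD -> brighter caustic."""
    lod = max(0.0, math.log2(max(texels_per_pixel, 1e-8)))
    return min(lod, 1.0)   # 0 = black (magnified), 1 = white (heavily minified)
```

A small, bright triangle covers few pixels per texel, so its LOD is high and the trick paints it closer to white.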
Step 4: Final Surface Refraction
Calculate the refraction of the view vector about the surface normal, intersect the refracted ray with the floor, and calculate texture coordinates for the caustic texture. Also calculate the reflected ray, used to index into an environment cubemap, and attenuate the reflection using a Fresnel approximation. Result: convincing refractive caustics in real time. We can also do the refraction three times with different indices of refraction to simulate refractive dispersion (aka "chromatic aberration").
References
Jos Stam's "Periodic Caustic Textures": http://www.dgp.toronto.edu/people/stam/reality/research/periodiccaustics/index.html
Case Study 3: Procedural Terrain
Summary: generate procedural terrain using vertex programs, register combiners and 3D textures.
Advantages of procedural modeling: small storage requirements, non-repeating, parameterized.
Disadvantages: computation time, harder to control. Not really practical on current hardware.
Step 1: Noise in Vertex Program
Displace a triangle mesh using procedural noise; the geometry doesn't move, just the displacement. Similar to Perlin noise: a permutation table stored in constant memory generates a repeatable random value via recursive lookups based on vertex position, and we interpolate between the 4 neighbors to produce a smooth result. Requires 42 vertex program instructions!
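A Python sketch of this kind of permutation-table value noise (the table contents, seed, and smoothstep interpolant are illustrative choices; the vertex-program version differs in detail):

```python
import math, random

rng = random.Random(42)
PERM = list(range(256))
rng.shuffle(PERM)                       # permutation table, as in constant memory

def lattice(ix, iz):
    """Repeatable random value in [0, 1] via nested table lookups."""
    return PERM[(PERM[ix & 255] + iz) & 255] / 255.0

def smooth(t):
    """Smoothstep interpolant: 3t^2 - 2t^3."""
    return t * t * (3.0 - 2.0 * t)

def vnoise(x, z):
    """2D value noise: interpolate between the 4 neighbouring lattice values."""
    ix, iz = math.floor(x), math.floor(z)
    fx, fz = x - ix, z - iz
    v00, v10 = lattice(ix, iz), lattice(ix + 1, iz)
    v01, v11 = lattice(ix, iz + 1), lattice(ix + 1, iz + 1)
    sx, sz = smooth(fx), smooth(fz)
    a = v00 + sx * (v10 - v00)
    b = v01 + sx * (v11 - v01)
    return a + sz * (b - a)
```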
Step 2: Ridged Multi-fractal Function
Ken Musgrave's trick to make noise look more like terrain.
Ridges: take the absolute value of signed noise, subtract it from 1, and square the result to produce sharper ridges.
Multi-fractal: scale each octave by the previous result, so valleys are smooth and peaks are rough. We only have room for 2 octaves.
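A minimal sketch of the two steps (the per-octave 0.5 gain and the weight clamp are common Musgrave-style choices assumed here; `noise` is any signed-noise function):

```python
def ridge(n):
    """Take |noise|, subtract from 1, then square to sharpen the ridges."""
    return (1.0 - abs(n)) ** 2

def ridged_mf(x, z, noise, octaves=2):
    """Ridged multifractal: each octave's signal is scaled by the previous
    octave's, so smooth valleys stay smooth and peaks gain detail."""
    signal = ridge(noise(x, z))
    height, weight, freq = signal, signal, 2.0
    for _ in range(octaves - 1):
        signal = ridge(noise(x * freq, z * freq)) * min(weight, 1.0)
        height += signal * 0.5              # halve each successive octave
        weight = signal
        freq *= 2.0
    return height
```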
Ridged Multi-fractal Function
Step 3: Lighting
It is hard to calculate normals in a vertex program, so calculate them in image space using register combiners instead: render the terrain height field from above with color = height, copy it to a texture, and bind it to three texture units with offset texture coordinates. Calculate an approximate normal in the register combiners as (h(x,y)-h(x+1,y), h(x,y)-h(x,y+1), 1), then calculate diffuse lighting as N·L.
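The same finite-difference normal, computed in Python over a height array rather than in the combiners:

```python
import numpy as np

def heightfield_normals(h):
    """Approximate normals by forward differences, as in the combiner setup:
    N ~ normalize(h(x,y)-h(x+1,y), h(x,y)-h(x,y+1), 1)."""
    dx = h[:-1, :-1] - h[:-1, 1:]        # h(x,y) - h(x+1,y)
    dy = h[:-1, :-1] - h[1:, :-1]        # h(x,y) - h(x,y+1)
    n = np.stack([dx, dy, np.ones_like(dx)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def diffuse(normals, L):
    """Per-pixel diffuse term N.L, clamped at zero."""
    return np.clip(normals @ np.asarray(L), 0.0, None)
```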
Step 4: Texturing
Use a 3D texture with a different terrain type in each 2D slice; the R texture coordinate selects the terrain type and is computed in the vertex program based on height. Unfortunately 3D textures are mip-mapped in 3D, so from a distance all layers blend into a single image; we can duplicate slices to help avoid the blending. Add reflective lakes: render the scene upside down to a texture and displace it in image space using GL_OFFSET_PROJECTIVE_TEXTURE_2D_NV.
Terrain Textures
The Future
Hardware is getting faster, more programmable, and higher precision. Today's off-line rendering effects will be real-time tomorrow. Start thinking about it now!
Questions? E-mail: sgreen@nvidia.com