Implementing Stereoscopic 3D in Your Applications


Implementing Stereoscopic 3D in Your Applications. Room C, 09/20/2010, 16:00-17:20. Samuel Gateau, NVIDIA; Steve Nash, NVIDIA

Agenda: How It Works; NVIDIA 3D Vision; Implementation Example; Stereoscopic Basics; Depth Perception; Parallax Budget; Rendering in Stereo; What's Next?

3D vs Stereo. "In 3D" is the new "Stereo": the terms are used interchangeably, and stereoscopic rendering is the technical means to make an image "in 3D". Each eye gets its own view, rendered with a slightly different camera location, usually about as far apart as your eyes. Stereo displays and APIs are used to manage these two views. [Images: two stereo examples and one non-stereo example]

Stereoscopic Basics: How It Works. Applications render a Left Eye view and a Right Eye view with a slight offset between them. A stereoscopic display then shows the left eye view on even frames (0, 2, 4, etc.) and the right eye view on odd frames (1, 3, 5, etc.).

Stereoscopic Basics: How It Works. In this example, active shutter glasses black out the right lens when the left eye view is shown on the display, and black out the left lens when the right eye view is shown. [Diagram: left eye view on, right lens blocked; right eye view on, left lens blocked] This means the refresh rate of the display is effectively cut in half for each eye (e.g. a display running at 120 Hz delivers 60 Hz per eye). The resulting image for the end user is a combined image that appears to have depth in front of and behind the stereoscopic 3D display.

NVIDIA 3D Vision. Software: the 3D Vision software automatically converts mono games to stereo (DirectX only). Hardware: IR communication; 3D Vision certified displays; support for single-screen or 1x3 configurations.

NVIDIA 3D Vision Pro. Software: supports the consumer 3D Vision software or Quad Buffered Stereo (QBS) in OpenGL or DirectX; for DX QBS, e-mail 3DVisionPro_apps@nvidia.com for help. Hardware: RF communication; 3D Vision certified displays, passive displays, CRTs and projectors; up to 8 displays; mix stereo and regular displays; G-Sync support for multiple displays and systems; direct connection to the GPU mini-DIN.

NVIDIA 3D Vision Pro Hardware, cont'd. Designed for multi-user professional installations: no line-of-sight requirement, no dead spots, no crosstalk. RF bi-directional communication with UI, 50 m range. Easily deployed in an office no matter what the floor plan.

Implementation Example

Implementation Example: OpenGL Step 1: Configure for Stereo

Implementation Example: OpenGL Step 2: Query and request PFD_STEREO

PIXELFORMATDESCRIPTOR pfd;
int iPixelFormat, iStereoPixelFormats = 0;
BOOL StereoIsAvailable;

// The first call returns the number of pixel formats for this DC
iPixelFormat = DescribePixelFormat(hdc, 1,
    sizeof(PIXELFORMATDESCRIPTOR), &pfd);
while (iPixelFormat) {
    DescribePixelFormat(hdc, iPixelFormat,
        sizeof(PIXELFORMATDESCRIPTOR), &pfd);
    if (pfd.dwFlags & PFD_STEREO) {
        iStereoPixelFormats++;
    }
    iPixelFormat--;
}

if (iStereoPixelFormats == 0)  // no stereo pixel formats available
    StereoIsAvailable = FALSE;
else
    StereoIsAvailable = TRUE;

Implementation Example: OpenGL Step 2, cont'd

if (StereoIsAvailable) {
    ZeroMemory(&pfd, sizeof(PIXELFORMATDESCRIPTOR));
    pfd.nSize      = sizeof(PIXELFORMATDESCRIPTOR);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL |
                     PFD_DOUBLEBUFFER | PFD_STEREO;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 24;
    iPixelFormat = ChoosePixelFormat(hdc, &pfd);
    if (iPixelFormat != 0) {
        if (SetPixelFormat(hdc, iPixelFormat, &pfd)) {
            hglrc = wglCreateContext(hdc);
            if (hglrc != NULL) {
                if (wglMakeCurrent(hdc, hglrc)) {
                    // the stereo-capable context is now current
                }
            }
        }
    }
}

Implementation Example: OpenGL Step 3: Render to the Left/Right buffers with an offset between them

// Select the back left buffer
glDrawBuffer(GL_BACK_LEFT);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// Set up the frustum for the left eye
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(Xmin - FrustumAssymmetry, Xmax - FrustumAssymmetry,
          -0.75, 0.75, 0.65, 4.0);
glTranslatef(EyeOffset, 0.0f, 0.0f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
<Rendering calls>

Implementation Example: OpenGL Step 3, cont'd

// Select the back right buffer
glDrawBuffer(GL_BACK_RIGHT);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// Set up the frustum for the right eye
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(Xmin + FrustumAssymmetry, Xmax + FrustumAssymmetry,
          -0.75, 0.75, 0.65, 4.0);
glTranslatef(-EyeOffset, 0.0f, 0.0f);
glTranslatef(0.0f, 0.0f, -PULL_BACK);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
<Rendering calls>

// Swap both left and right buffers
SwapBuffers(hdc);

Changes to the rendering pipe: FROM MONO TO STEREO

In Mono. The scene is viewed from one eye and projected with a perspective projection, along the eye direction, onto the near plane, in the viewport. [Diagram: mono frustum, near plane, viewport, eye-space axes]

In Stereo. [Diagram: the same scene and near plane in eye space]

In Stereo: Two Eyes. Left and right eyes are created by shifting the mono eye along the X axis. [Diagram]

In Stereo: Two Eyes. Left and right eyes are created by shifting the mono eye along the X axis; the eye directions are parallel. [Diagram]

In Stereo: Two Eyes, One Screen. The left and right eyes, shifted along the X axis with parallel eye directions, share one virtual screen. [Diagram: virtual screen]

In Stereo: Two Eyes, One Screen. The virtual screen is where the left and right frustums converge. [Diagram: left and right frustums converging at the virtual screen]

In Stereo: Two Eyes, One Screen, Two Images. Two images are generated at the near plane, one in each view. [Diagram: left image and right image at the near plane]

In Stereo: Two Eyes, One Screen, Two Images. The two images are presented independently to each eye of the user on the real screen. [Diagram: real screen showing the left and right images of the virtual screen]

Stereoscopic Rendering: render geometry twice, from left and right eyes, into left and right images.

Basic definitions so we all speak English: DEFINING STEREO PROJECTION

Stereo Projection. The stereo projection matrix is a horizontally offset version of the regular mono projection matrix, with the left/right eyes offset along the X axis. [Diagram: left eye, mono eye, right eye, mono frustum, screen]

Stereo Projection. The projection directions are parallel to the mono direction (NOT toed in). The left and right frustums converge at the virtual screen. [Diagram: left and right frustums converging at the virtual screen]

Interaxial: the distance between the two virtual eyes in eye space. The mono, left, and right eye directions are all parallel. [Diagram: left eye, mono eye, right eye, interaxial]

Separation: the interaxial normalized by the virtual screen width (more details in a few slides).

Separation = Interaxial / Screen Width

[Diagram: interaxial and screen width in eye space]

Convergence: the virtual screen's depth in eye space ("screen depth"); the plane where the left and right frustums intersect. [Diagram: convergence distance from the eyes to the virtual screen]

Parallax: the signed distance on the virtual screen between the projected positions of one vertex in the left and right images. Parallax is a function of the depth of the vertex. [Diagram: parallax on the virtual screen for a vertex at a given depth beyond convergence]

Where the magic happens, and more equations: DEPTH PERCEPTION

Virtual vs. Real Screen. The virtual screen is perceived AS the real screen. Parallax creates the depth perception for the user looking at the real screen presenting the left and right images. [Diagram: virtual space mapped onto the real screen]

In / Out of the Screen

Vertex Depth             | Parallax | Vertex Appears
Further than Convergence | Positive | In the screen
Equal to Convergence     | Zero     | At the screen
Closer than Convergence  | Negative | Out of the screen

[Diagram: in-screen and out-of-screen regions around the convergence plane]

Parallax in normalized image space:

Parallax = Separation * (1 - Convergence / W)

where W is the vertex depth in eye space. The maximum parallax, at infinity, is Separation, the separation distance between the eyes. Parallax is 0 at screen depth (W = Convergence), and diverges quickly toward negative infinity for objects closer to the eye. [Plot: parallax vs. vertex depth]
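To make the behavior of this formula concrete, here is a minimal sketch (not from the slides) that evaluates it at a few characteristic depths; the helper name and the sample Separation/Convergence values are illustrative assumptions:

#include <stdio.h>

/* Parallax in normalized image space, as defined on this slide:
   Parallax = Separation * (1 - Convergence / W) */
static double parallax(double separation, double convergence, double w)
{
    return separation * (1.0 - convergence / w);
}

int main(void)
{
    const double sep  = 0.125; /* example Separation (fraction of screen width) */
    const double conv = 5.0;   /* example Convergence depth in eye space */

    printf("W = Conv/2  : %+.5f\n", parallax(sep, conv, conv * 0.5));   /* -sep        */
    printf("W = Conv    : %+.5f\n", parallax(sep, conv, conv));         /*  0          */
    printf("W = 10*Conv : %+.5f\n", parallax(sep, conv, conv * 10.0));  /* 90% of sep  */
    printf("W = 100*Conv: %+.5f\n", parallax(sep, conv, conv * 100.0)); /* 99% of sep  */
    return 0;
}

This reproduces the numbers quoted in the parallax-budget slides below: parallax reaches 99% of Separation at 100 * Convergence, and -Separation at Convergence / 2.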

Eye Separation. Interocular (the distance between the eyes) is on average about 2.5 in / 6.5 cm, and is equivalent to the visible parallax on screen for objects at infinity. Depending on the screen width, we define a normalized Eye Separation:

Eye Separation = Interocular / Real Screen Width

This is different for each screen model (for example, on a screen 52 cm wide, Eye Separation is roughly 6.5 / 52 = 0.125). It is a reference maximum value for the Separation used in the stereo projection for a comfortable experience. [Diagram: interocular, parallax at infinity, screen width]

Separation should be comfortable. The maximum parallax at infinity is Separation. Eye Separation is an average and should be treated as the very maximum Separation value: never make the viewer's eyes diverge. People don't all have the same eyes, so for an interactive application, let the user adjust Separation. When the screen is close to the user (the PC scenario), most users cannot handle more than 50% of the Eye Separation.

Eye Separation is the maximum comfortable Separation. [Diagrams: parallax at infinity on the real screen]

Safe Parallax Range. [Plot: parallax vs. depth for two Separation values, bounded above by Eye Separation and below by -Eye Separation]

PARALLAX BUDGET

Parallax Budget: how much parallax variation is used in the frame, from the nearest pixel to the farthest pixel. [Plot: parallax vs. depth; the parallax budget spans the nearest and farthest pixels]

In Screen: the Farthest Pixel. At 100 * Convergence, parallax is 99% of Separation; for pixels further than 100 * Convergence, elements look flat in the far distance, with no depth differentiation. Between 10 * and 100 * Convergence, parallax varies by only 9%, so objects in that range have only a subtle depth differentiation. [Plot: parallax vs. depth]

Out of the Screen: the Nearest Pixel. At Convergence / 2, parallax is equal to -Separation. Out of the screen, parallax magnitude becomes very large (> Separation) and can cause eye strain. [Plot: parallax vs. depth]

Convergence sets the scene in the screen. It defines the window into the virtual space, and the style of stereo effect achieved (in / out of the screen). [Plot: two Convergence values yield two different parallax budgets for the same near and far pixels]

Separation scales the parallax budget, and thereby scales the depth perception of the frame. [Plot: two Separation values yield two different parallax budgets for the same near and far pixels]

Adjust Convergence. Convergence must be controlled by the application: it is a camera parameter driven by the look of the frame, an artistic / gameplay decision, and should be adjusted for each camera shot / mode. Make sure the scene elements are in the range [Convergence / 2, 100 * Convergence], and adjust Convergence to use the parallax budget properly (cf. Bob Whitehill's talk (Pixar stereographer) at SIGGRAPH 2010). Dynamic Convergence is a bad idea, except for specific transition cases. Analyze frame depth through a histogram and focus points? Ongoing projects at NVIDIA.

Let's do it: RENDERING IN STEREO

Stereoscopic Rendering:
- Render geometry twice: do stereo drawcalls (duplicate the drawcalls)
- From left and right eyes: apply a stereo projection (modify the projection matrix)
- Into left and right images: use stereo surfaces (duplicate the render surfaces)

How to implement stereo projection? It is fully defined by the mono projection plus Separation and Convergence. Either replace the perspective projection matrix by an offset perspective projection, with a horizontal offset of Interaxial (negative for the right eye, positive for the left eye); or, just before rasterization in the vertex shader, offset the clip position by the parallax amount (the NVIDIA 3D Vision driver solution):

clipPos.x += EyeSign * Separation * (clipPos.w - Convergence)

with EyeSign = +1 for the right eye and -1 for the left eye.
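As a concrete illustration of the first option, here is a minimal sketch that builds an offset perspective projection from a mono frustum plus Interaxial and Convergence. The helper name, the use of fixed-function glFrustum (as in the Step 3 example above), and the sign conventions are assumptions to validate against your own coordinate system:

#include <math.h>
#include <GL/gl.h>

/* Load an off-axis stereo projection for one eye.
   eyeSign = -1.0 for the left eye, +1.0 for the right eye
   (matching EyeSign in the vertex-shader variant above). */
void setStereoProjection(double fovyRadians, double aspect,
                         double zNear, double zFar,
                         double interaxial, double convergence,
                         double eyeSign)
{
    double top   = zNear * tan(fovyRadians * 0.5);
    double right = top * aspect;
    /* Shift the frustum window so both eye frustums converge
       at the virtual screen (depth = convergence). */
    double shift = -eyeSign * 0.5 * interaxial * zNear / convergence;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(-right + shift, right + shift, -top, top, zNear, zFar);
    /* Move the eye itself off the mono axis along X. */
    glTranslated(-eyeSign * 0.5 * interaxial, 0.0, 0.0);
    glMatrixMode(GL_MODELVIEW);
}

Called once per eye before drawing, this plays the same role as the glFrustum / glTranslatef pair in the Step 3 example.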

Stereo Transformation Pipeline

Standard Mono:
World space -> [View Transform] -> Eye space -> [Projection Transform] -> Clip space -> [Perspective Divide] -> Normalized space -> [Viewport Transform] -> Image space
(vertex shader up to clip space; rasterization after the divide; pixel shader in image space)

Stereo Projection Matrix:
Eye space -> [Stereo Projection Transform] -> Stereo Clip space -> [Perspective Divide] -> Stereo Normalized space -> [Viewport Transform] -> Stereo Image space

Stereo Separation on the clip position:
Eye space -> [Projection Transform] -> Clip space -> [Stereo Separation] -> Stereo Clip space -> [Perspective Divide] -> Stereo Normalized space -> [Viewport Transform] -> Stereo Image space

Stereo rendering surfaces. View dependent render targets must be duplicated: the back buffer, the depth stencil buffer, and the intermediate full screen render targets used to process the final image (high dynamic range, blur, bloom, screen space ambient occlusion). [Diagram: left and right images per surface]
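For an engine using its own intermediate targets, duplicating one such surface can look like the following minimal sketch (not from the slides; the function name and texture parameters are illustrative assumptions):

#include <GL/gl.h>

/* Create a left/right pair of color textures for one view-dependent
   render target (e.g. an HDR or bloom buffer). */
void createStereoColorTextures(GLuint colorTex[2], int width, int height)
{
    glGenTextures(2, colorTex);  /* [0] = left eye, [1] = right eye */
    for (int eye = 0; eye < 2; ++eye) {
        glBindTexture(GL_TEXTURE_2D, colorTex[eye]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    }
}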

Mono rendering surfaces. View independent render targets DON'T need to be duplicated: shadow maps, spot light maps projected into the scene. [Diagram: a single image per surface]

How to do the stereo drawcalls? Simply draw the geometries twice, into the left and right versions of the stereo surfaces. This can be executed per scene pass (draw the left frame completely, then draw the right frame completely; this requires modifying the rendering loop) or per individual object (bind the left render target, set up the state for the left projection, draw the geometry; then bind the right render target, set up the state for the right projection, draw the geometry; this might be less intrusive in an engine). Not everything in the scene needs to be drawn twice; it depends on the render target type, as shown in the sketch and the table below.
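A minimal sketch of the per-object strategy; every type and helper here (bindRenderTarget, applyStereoProjection, drawGeometry) is a hypothetical engine hook, not an API from the slides:

#include <stdio.h>

typedef enum { EYE_LEFT = 0, EYE_RIGHT = 1 } Eye;

/* Hypothetical engine hooks (stubs for illustration). */
static void bindRenderTarget(Eye eye)      { printf("bind RT for eye %d\n", eye); }
static void applyStereoProjection(Eye eye) { printf("projection for eye %d\n", eye); }
static void drawGeometry(const void* obj)  { (void)obj; printf("draw\n"); }

/* Draw one object into both stereo surfaces: bind the per-eye render
   target, apply the matching stereo projection, then issue the drawcall. */
static void drawObjectStereo(const void* object)
{
    for (int eye = EYE_LEFT; eye <= EYE_RIGHT; ++eye) {
        bindRenderTarget((Eye)eye);
        applyStereoProjection((Eye)eye);
        drawGeometry(object);
    }
}

int main(void) { drawObjectStereo(NULL); return 0; }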

When to do what?

Use Case                                          | Render Target Type | Stereo Projection                               | Stereo Drawcalls
Shadow maps                                       | Mono               | No (use the shadow projection)                  | Draw once
Main frame, any forward rendering pass            | Stereo             | Yes                                             | Draw twice
Reflection maps                                   | Stereo             | Yes (generate a stereo reflection projection)   | Draw twice
Post-processing effect (full screen quad)         | Stereo             | No (no projection needed at all)                | Draw twice
Deferred shading lighting pass (full screen quad) | Stereo G-buffers   | Yes (beware: the unprojection should be stereo) | Draw twice

What could possibly go wrong? EVERYTHING IS UNDER CONTROL

3D Objects. All the 3D objects in the scene should be rendered using a single perspective projection in a given frame, and all must have a coherent depth relative to the scene. Lighting effects are visible in 3D, so they should be computed correctly: highlights and speculars probably look best evaluated with the mono eye origin, while reflection and refraction should be evaluated with the stereo eyes.

Pseudo 3D objects: sky box, billboards. The sky box should be drawn with a valid depth, further than the regular scene, and must be stereo projected; best is at a very far distance, so parallax is at its maximum, and covering the full screen. Billboard elements (particles, leaves, ...) should be rendered in a plane parallel to the viewing plane; this doesn't look perfect, and relief mapping looks bad.

Several 3D scenes. Different 3D scenes may be rendered in the same frame using different scales, e.g. a portrait viewport of a selected character, or split screen. Since the scale of each scene is different, a different Convergence must be used to render each scene.

Out of the screen objects. The user's brain fights against the perception of hovering objects out of the screen, so extra care must be taken to achieve a convincing effect: objects should not be clipped by the edges of the window (be aware of the extra horizontal guard bands); move objects slowly from inside the screen to the outside area to give the eyes time to adapt; make smooth visibility transitions (no blinking). Realistic rendering helps.

2D Objects. [StarCraft II screenshot, courtesy of Blizzard, with callouts: 2D objects in depth attached to 3D anchor points; billboards in depth; particles with 3D positions]

2D objects must be drawn at a valid depth. With no stereo projection: head-up display interface and UI elements; either draw them with no stereo projection, or with stereo projection at Convergence. At the correct depth when interacting with the 3D scene: labels or billboards in the scene must be drawn with stereo projection, using the depth of the 3D anchor point that defines their position in 2D window space; this requires modifying the 2D ortho projection to take stereo into account.

2D to 3D conversion shader function:

float4 2Dto3DclipPosition(
    in float2 posClip : POSITION, // input position in clip space
    uniform float depth           // depth at which to draw the 2D object
    ) : POSITION                  // output position in clip space
{
    return float4(
        posClip.xy * depth, // simply scale posClip by the depth to compensate
                            // for the division by W performed before rasterization
        0,                  // Z is not used if the depth buffer is not used;
                            // if needed, Z = (depth * f - n * f) / (f - n) (for DirectX)
        depth);             // W is the Z in eye space
}

Selection and pointing in S3D. Selection or pointing UI that interacts with the 3D scene doesn't work if drawn mono. The mouse cursor should be drawn at the pointed object's depth; the hardware cursor cannot be used. A crosshair likewise needs the projection modified to take the depth of the pointed elements into account. Draw the UI as a 2D element in depth, at the depth of the scene where it points; compute that depth from the graphics engine, or evaluate it on the fly from the depth buffer (contact me for more info). The selection rectangle is not perfect and could be improved.

3D Objects Culling. When culling is done against the mono frustum... [Diagram: mono frustum overlaid on the left and right frustums]

3D Objects Culling. ...some in-screen regions are missing from the right and left frustums; they should be visible. [Diagram: regions inside the eye frustums but outside the mono frustum]

3D Objects Culling. And we don't want to see out-of-the-screen objects in only one eye; it disturbs the stereo perception. [Diagram: near regions visible to a single eye]

3D Objects Culling. Here is the frustum we want to use for culling. [Diagram: a single frustum covering both the left and right frustums]

3D Objects Culling: computing the stereo culling frustum. Offset the frustum origin by

Z = Convergence / (1 + 1 / Separation)

For example, with Separation = 0.125 and Convergence = 5.0, Z = 5 / (1 + 8) ≈ 0.56 eye-space units. [Diagram: origin offset Z, interaxial, screen width, convergence]
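A tiny helper encoding the slide's formula (the function name is ours, and the offset direction should be taken from the slide's diagram):

/* Offset of the stereo culling frustum origin, per the formula:
   Z = Convergence / (1 + 1 / Separation). */
double cullFrustumOriginOffset(double convergence, double separation)
{
    return convergence / (1.0 + 1.0 / separation);
}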

3D Objects Culling. Culling the single-eye area is not always a good idea; blacking out the pixels in that area, through a shader, is better. This is equivalent to the "floating window" used in movies. [Diagram: blacked-out single-eye regions at the screen edges]

Fetching from a Stereo Render Target. When fetching from a stereo render target, use the correct texture coordinate: the render target is addressed in STEREO IMAGE SPACE. Use the pixel position provided to the pixel shader (POSITION.xy), or a texture coordinate computed correctly in the vertex shader. [Diagram: two pixel shader variants, one fetching a texel at a uv computed in the vertex shader, one fetching at POSITION.xy, both addressing the stereo render target]

Unprojection in the pixel shader. When doing a deferred shading technique, the pixel shader fetches the depth buffer (beware of the texcoord used, cf. the previous slide) and evaluates a 3D clip position from the fetched depth and the XY viewport position. Make sure to use a stereo unprojection (the inverse of the stereo projection transform) to go back to mono eye space; if you apply the inverse of the mono projection instead, you end up in a stereo eye space! [Diagram: image space -> viewport inverse transform -> normalized space -> perspective multiply -> clip space -> projection inverse transform; the mono inverse lands in stereo eye space, the stereo inverse lands in mono eye space]
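With the clip-space separation variant shown earlier (clipPos.x += EyeSign * Separation * (clipPos.w - Convergence)), the stereo unprojection simply subtracts the same term before applying the mono inverse projection. A minimal sketch of that correction step, with names that are illustrative rather than any driver's API:

/* Undo the stereo separation applied at the end of the vertex shader,
   returning the mono clip-space X for a reconstructed clip position.
   eyeSign = -1.0 for the left eye, +1.0 for the right eye. */
double stereoToMonoClipX(double stereoClipX, double clipW,
                         double eyeSign, double separation, double convergence)
{
    return stereoClipX - eyeSign * separation * (clipW - convergence);
}

After this correction, the regular mono inverse projection yields a position in mono eye space, as required for lighting.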

One or two things to look at: WHAT'S NEXT?

Performance considerations. At worst the frame rate is divided by 2, but applications are rarely GPU bound, so stereo is less expensive in practice. Since vsync is used when running in stereo, you see the standard vsync frequency jumps. Not all the rendering is executed twice (e.g. shadow maps). Memory is allocated twice for all the stereo surfaces, so try to reuse render targets when possible to save memory. Or get another GPU.

Tessellation works great with stereoscopy. [Image: Unigine demo]

Letterbox. To emphasize the out-of-the-screen effect, simply draw 2 extra horizontal bands at Convergence; out-of-the-screen objects can then overdraw the bands. [Image: G-Force movie, from Walt Disney]

NVIDIA demo "Sled": SHOW TIME