Monte Carlo Path Tracing




HELSINKI UNIVERSITY OF TECHNOLOGY
Telecommunications Software and Multimedia Laboratory
Tik-111.500 Seminar on Computer Graphics, Spring 2002: Advanced Rendering Techniques
16.4.2002

Monte Carlo Path Tracing
Petri Häkkinen, 46561N

Monte Carlo Path Tracing

Petri Häkkinen
HUT, Telecommunications Software and Multimedia Laboratory
Petri.Hakkinen@hut.fi

Abstract

This paper describes the Monte Carlo path tracing algorithm, which is used to solve global illumination problems in computer graphics. The theoretical foundation of the algorithm is discussed, starting from the rendering equation and leading to the concept of light paths. Various implementation details such as pixel filtering, Russian roulette and distributing directions on the hemisphere are discussed. Two extensions to the basic path tracing algorithm are also briefly introduced: bidirectional path tracing and Metropolis light transport. Finally, we discuss a path tracer that has been implemented using some of the techniques presented in this paper.

1 INTRODUCTION

1.1 The global illumination problem

The term global illumination refers to the physically correct simulation of light scattering in a synthetic environment. A global illumination algorithm transforms an initial data set consisting of 3D models, material properties, light sources and a virtual camera into a set of discrete lighting values. These lighting values are typically recorded as pixels in the output image, but other uses are possible, such as pre-rendered lightmaps in a video game. In contrast to local illumination algorithms, which handle only a single light bounce from a surface, a global illumination algorithm needs to trace light rays traveling through the scene via bounces at multiple surfaces.

1.2 Model of light

In this paper we assume that the light model is based on ray optics, so that light travels in straight paths and reflections can be handled using a set of geometric rules. This model does not handle participating media or a smoothly varying index of refraction. For example, air does not have a constant index of refraction, and light rays actually bend when traveling through the atmosphere. Phenomena based on wave optics, such as diffraction and interference, are also ignored.

1.3 From classical ray tracing to path tracing

Whitted's classical ray tracing algorithm [8] works by tracing light rays in a 3D scene. For each pixel in the output image, a ray is shot from the camera through the pixel into the scene, and the intersection point with the nearest object is calculated. At the point of intersection, direct lighting is evaluated and the ray is split into reflected and refracted components, which are recursively traced into the scene. Lights in Whitted's model are considered to be points that radiate energy uniformly in all directions. For each light source a shadow ray is shot from the point of intersection to the light source. If the shadow ray is unblocked, direct illumination is calculated and accumulated; otherwise the intersection point is considered to be in shadow for that particular light.

Ray traced images suffer from several artifacts: shadows and reflections are unnaturally sharp, indirect lighting is not handled, and so on. Distributed ray tracing [4] attempts to alleviate some of these problems by adding support for proper antialiasing, soft shadows, blurred reflections, motion blur and depth of field. A distributed ray tracer, as the name suggests, distributes rays using Monte Carlo techniques. Cook uses 16 rays per pixel and distributes them over the pixel area (pixel antialiasing), the camera lens (depth of field) and time (motion blur). Because rays are distributed randomly, this results in a powerful antialiasing scheme which trades aliasing for noise. Lights are considered to have an area, and random sampling points are selected on the light surface for tracing shadow rays. Blurred reflections are handled by distributing the reflected ray direction. Distributed ray tracing is an elegant algorithm that can produce near-photorealistic images. However, indirect illumination is still not handled properly.

Path tracing is a physically correct stochastic method for calculating global illumination using Monte Carlo techniques. It borrows many aspects from distributed ray tracing, and it is quite easy to convert an existing distributed ray tracer into a full path tracer.

1.4 Organization of the paper

In chapters 2 and 3 we develop the theoretical basis for path tracing, beginning from the rendering equation and leading to the concept of light paths. Basic Monte Carlo integration and some variance reduction schemes are also discussed here. Chapter 4 describes various implementation details. Chapter 5 briefly introduces some extensions to the basic algorithm. Chapter 6 concludes with some results from the path tracer implemented by the author.

2 SOLVING THE RENDERING EQUATION

2.1 Kajiya's rendering equation

In 1986 James T. Kajiya presented an equation that unified the light transport problem [6]. Prior to that, the Utah approximation, recursive ray tracing and radiosity [3] had been treated as separate methods whose solutions seemed unrelated, even though they all attempt to model the same phenomenon. The term Utah approximation is borrowed from Kajiya and refers to the standard local illumination method.

The rendering equation [6] couples the outgoing radiance of a surface to the emitted and reflected radiance. This energy balance can be expressed as:

L_o(x, ω) = L_e(x, ω) + L_r(x, ω)    (1)

where L_o, L_e and L_r are the outgoing, emitted and reflected radiance. L_r is affected by other surfaces, since reflected light travels from surface to surface in the scene. L_r can be written as:

L_r(x, ω) = ∫_S f_r(x, ω(x, x'), ω) L_o(x', ω(x', x)) G(x, x') dA'    (2)

The integral is taken over all surface points x'. The BRDF f_r(x, ω, ω') is a reflection distribution function that gives the reflection coefficient for light coming from direction ω being reflected to direction ω'. Direction vectors ω and ω' point outward from the surface. An important property of the BRDF is that it is symmetric, that is, the following property holds: f_r(x, ω, ω') = f_r(x, ω', ω).

G is the geometric function which relates two differential area surfaces exchanging light energy (see figure 1). It is composed of three terms: the cosine term, the distance term, and the visibility term. The cosine term depends on the relative orientation of the surfaces and is cos(θ_i) cos(θ_o).

Figure 1. The geometric function.

The visibility term is 0 if there is no visibility from x' to x, and 1 if the path is unoccluded. In the case of transparent occluding surfaces the visibility term lies somewhere in the range [0,1]. The distance term is simply 1/r², where r is the distance from x' to x. Combining all the terms, G can be written as:

G(x, x') = V(x, x') cos(θ_i) cos(θ_o) / r²    (3)

where V is the visibility term. Substituting L_r into the energy balance equation results in the following equation:

L_o(x, ω) = L_e(x, ω) + ∫_S f_r(x, ω(x, x'), ω) L_o(x', ω(x', x)) G(x, x') dA'    (4)

This is the rendering equation [6]. An algorithm that fully implements this equation solves the global illumination problem, and this is the main goal of a path tracer. Radiosity [3] and ray tracing algorithms solve only specific parts of the equation. Note that the integral is taken over all surfaces and that the unknown L_o appears inside the integral. This recursive nature makes the rendering equation problematic to solve.

2.2 Path integrals

A path tracer solves the rendering equation in a special form that is often referred to as the path integral form. The idea is to recursively substitute L_o on the right side of the rendering equation with the equation itself. This results in rather lengthy expressions, so in order to simplify the notation we define an operator T which maps to the integral expression in the rendering equation. The rendering equation in operator form is:

L = E + T L    (5)

Here we have also replaced L_o by L and L_e by E. Recursive evaluation of this gives the following infinite Neumann series:

L = E + T(E + T L) = ... = E + T E + T² E + T³ E + ... = Σ_{k=0}^∞ Tᵏ E    (6)

where T E represents light scattered once, T² E light scattered twice, and so forth. According to [5, p. 30] the rendering equation can similarly be converted to the path form:

L(x_0, ω) = Σ_{k=0}^∞ ∫_S ⋯ ∫_S L_e(x_k, ω(x_k, x_{k-1})) ∏_{i=0}^{k-1} K(x_{i+1}, x_i, x_{i-1}) dA(x_1) ⋯ dA(x_k)    (7)

where ω = x_{-1} − x_0 and K(x'', x', x) = f_r(x'', x', x) G(x'', x'). Each integral is taken over all surface locations in the scene. For k = 0 the outgoing radiance is simply the emitted radiance. For k = 1 there is a single integral over all possible surface locations; this gathers light scattered once in all possible ways. Similarly for k = 2 we have two integrals, and light that has been scattered at two locations is gathered. Each term of equation 7 represents the sum of all light paths of length k. A light path is a set of vertices forming a connected line from the emitting surface to the receiver.
For rendering purposes the receiver is often the surface that is hit by the primary ray. The total contribution for the primary ray is integrated over all possible light paths; all possible surface positions and path lengths need to be considered. However, not all light paths contribute the same amount of radiance to the final integral, because of differing geometry and reflection terms and the non-uniform distribution of light energy.
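Since the geometric function G is what makes distant or grazing path segments contribute little, it is instructive to evaluate it directly. The following is a minimal illustrative sketch of equation (3), not code from the paper; the helper name, patch positions and normals are made-up example values:

```python
import math

def geometric_term(x, n_x, xp, n_xp, visible=True):
    """Geometric function G(x, x') of equation (3):
    V * cos(theta_i) * cos(theta_o) / r^2."""
    d = [b - a for a, b in zip(x, xp)]           # vector from x to x'
    r2 = sum(c * c for c in d)                   # squared distance r^2
    r = math.sqrt(r2)
    w = [c / r for c in d]                       # unit direction x -> x'
    cos_o = max(0.0, sum(a * b for a, b in zip(n_x, w)))    # cosine at x
    cos_i = max(0.0, -sum(a * b for a, b in zip(n_xp, w)))  # cosine at x'
    v = 1.0 if visible else 0.0                  # binary visibility term
    return v * cos_i * cos_o / r2

# Two parallel patches facing each other, one and two units apart:
g_near = geometric_term((0, 0, 0), (0, 0, 1), (0, 0, 1), (0, 0, -1))
g_far = geometric_term((0, 0, 0), (0, 0, 1), (0, 0, 2), (0, 0, -1))
# Doubling the distance quarters G: g_far == g_near / 4.
```

This makes the 1/r² falloff and the cosine weighting of each path segment concrete: the product of such terms along a long path shrinks quickly, which is why deep bounces contribute little.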

Perhaps the most dramatic difference between a traditional ray tracer and a path tracer is that a path tracer never splits the ray, while a standard recursive ray tracer splits the ray into reflection and refraction components at intersection points. Kajiya argues that a typical ray tracer concentrates an increasing amount of computational power on deeper recursion levels. This is clearly illogical, since those deep branches of the ray tree contribute less to the final image because of the accumulation of geometry and reflection terms.

3 MONTE CARLO INTEGRATION

3.1 The sample mean method

As seen in the previous chapter, problems in computer graphics often require solving multidimensional integrals that are very hard or even impossible to solve using analytic integration or quadrature rules. Particularly in path tracing we must integrate over all surface locations, and possibly over time if motion blur is desired. Monte Carlo integration approximates the value of the integral by sampling the function at random locations. These random locations are often distributed non-uniformly, concentrating more samples on the areas of interest where the slope of the function is steep, such as shadow boundaries and small bright areas in the hemisphere.

The sample mean method samples the function randomly and estimates the integral by calculating the sample mean and multiplying it by the length of the interval. For one-dimensional functions this is equivalent to approximating the integral with a rectangle whose width equals the length of the interval and whose height equals the sample mean. Given a one-dimensional function f(x) and uniformly distributed random numbers ξ_1 ... ξ_N in the range [a,b], the Monte Carlo estimate M of the integral is given by:

M = ((b − a)/N) Σ_{i=1}^N f(ξ_i)    (8)

As the number of samples N is increased, the estimate M becomes more accurate. With an infinite number of samples, M converges to the integral.
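The sample mean estimator of equation (8) fits in a few lines. The following is an illustrative sketch (not code from the paper) using a simple one-dimensional integrand:

```python
import random

def mc_integrate(f, a, b, n, seed=1):
    """Sample-mean Monte Carlo estimate of the integral of f over [a, b]:
    M = (b - a)/N * sum of f(xi_i), with xi_i uniform in [a, b]."""
    rng = random.Random(seed)  # fixed seed for a repeatable estimate
    total = sum(f(rng.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

# Estimate the integral of x^2 over [0, 1]; the exact value is 1/3.
estimate = mc_integrate(lambda x: x * x, 0.0, 1.0, 100_000)
```

With N = 100,000 samples the estimate lands within about a thousandth of 1/3; quadrupling N roughly halves the remaining error, as discussed in section 3.2.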
The ability to trade image quality for speed is important for users of the rendering system, so that they can generate preview-quality renderings fast. With Monte Carlo integration this is easily controlled with the variable N. The convergence speed of the integral is a very important factor in computer graphics, since performance is always an issue; production renderers used for motion pictures especially must be fast [1].

3.2 Variance reduction

Images rendered with Monte Carlo techniques typically appear noisy. The noise is due to variance σ² in the integral estimate. Given an integral estimate M, the variance is the expected squared distance between M and the correct result that we would get by analytic integration. The standard deviation σ is typically used as the error metric for the integral. According to [5, p. 154] the standard deviation of the Monte Carlo integral is proportional to 1/√N. Thus in order to halve the error, one must use four times as many samples. This is computationally very expensive, but fortunately it can be improved using variance reduction schemes.

Perhaps the simplest and most widely applicable variance reduction scheme is known as stratified sampling. The domain is split into N subdomains, and one sample point is placed randomly inside each subdomain. The standard deviation of stratified sampling is proportional to 1/N, which is clearly an improvement over the 1/√N of random sampling. Stratified sampling should be used wherever applicable.

Integrals related to computer graphics are often multidimensional, so an obvious way to reduce variance is to reduce the dimensionality of the problem by solving one or several dimensions analytically. This is called the use of expected values.

Importance sampling is perhaps the most intuitive variance reduction scheme. As the name suggests, we simply concentrate more samples on the most important areas. In effect this means shooting more rays towards bright objects, boundary areas and so on. We have to know some estimate of the integral beforehand in order to use this scheme; this estimate might be calculated in a preprocessing phase or adaptively as the algorithm proceeds. Importance sampling without normalization skews the results. Refer to [7] for a more advanced discussion of the topic.

Russian roulette [2] is a method for eliminating unnecessary work. Assume for example that a surface is emitting particles with power p. We would normally proceed by giving an emitted particle a weight equal to p. Using Russian roulette, however, a particle is emitted with full power, but this emission happens only with a probability proportional to p. This is illustrated by the following pseudo-code:

    e = rand();   // e is a random number in the range [0,1]
    if( e < p )   // p is a probability in the range [0,1]
        emit with full power
    else
        do not emit

Note that because Russian roulette effectively eliminates some of the samples, variance is increased but computational workload is reduced.

4 PATH TRACING

4.1 The path tracing algorithm

The following is pseudo-code for a path tracer. Various details are discussed in the following sections.

    render_image()
        for( each pixel )
            color = 0;
            for( sample = 0; sample < sample_count; sample++ )
                ray = pick_random_ray_at_pixel();   // see 4.2
                pick_random_time();                 // see 4.2
                color = color + trace( ray ) / sample_count;

    trace( ray )
        intersection_point = find_nearest_intersection( ray );   // see 4.4
        gathered = 0;
        for( each light )
            gathered = gathered + sample_light( intersection_point, light );   // see 4.5
        ray = get_next_path_segment_or_terminate();   // see 4.6 and 4.7
        return gathered + trace( ray );

4.2 Preparing a path for tracing

The image plane is defined by vectors u, v and ref, where u and v are two perpendicular vectors defining the image plane and ref points to the first pixel on the image plane. Given an integer pixel coordinate (x,y) and two uniformly distributed, independent random numbers ξ_1 and ξ_2 in the range [0,1], the first-generation ray direction is calculated by:

r = ref + u (x + ξ_1) / w + v (y + ξ_2) / h − c    (9)

where w and h are the respective width and height of the image plane in pixels and c is the camera position. Each ray should also be distributed in time, or rendered animations will reveal aliasing artifacts in the form of jerky and unnatural motion. Antialiasing in time is very important when synthetic images are composited with photographs or real film footage. Antialiasing in time is also called motion blur. A real camera takes a snapshot of the environment by opening the shutter for a certain small period of time; during this time the image in front of the lens is accumulated on the film. pick_random_time() in our path tracer skeleton should simply pick a random point in time in the interval [t_0, t_1], where t_0 is the time when the camera shutter opens and t_1 is the closing time. All motions in the scene should be updated according to this chosen point in time. Note that because objects tend to move much more slowly than the speed of light, this needs to be done only when starting a new path, not for each path segment.
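Equation (9) can be sketched directly. The following is illustrative Python, not code from the paper; the vector math is written out with plain tuples, and the camera values u, v, ref and c below are made-up examples:

```python
import random

def primary_ray_direction(x, y, w, h, u, v, ref, c, rng):
    """First-generation ray direction of equation (9):
    r = ref + u*(x + xi1)/w + v*(y + xi2)/h - c."""
    xi1, xi2 = rng.random(), rng.random()   # jitter inside the pixel
    sx = (x + xi1) / w
    sy = (y + xi2) / h
    return tuple(rf + uu * sx + vv * sy - cc
                 for rf, uu, vv, cc in zip(ref, u, v, c))

# Example camera: unit image plane one unit in front of the origin.
rng = random.Random(7)
u, v = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)      # image-plane axes
ref, c = (-0.5, -0.5, 1.0), (0.0, 0.0, 0.0)  # first-pixel corner, camera
r = primary_ray_direction(0, 0, 64, 64, u, v, ref, c, rng)
```

Because ξ_1 and ξ_2 are drawn fresh for every sample, repeated calls for the same pixel jitter the ray inside the pixel area, which is exactly what the pixel filtering of section 4.3 relies on.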
4.3 Pixel filtering

One way to implement pixel filtering is to simply select a constant number of samples per pixel and distribute random samples uniformly over the pixel area. However, this leads to sample clumping and gaps between samples, as illustrated in figure 2a. If the number of samples per pixel is a power of two, it is easy to see that this can be improved by dividing the pixel area into a regular grid and placing one sample in each cell (figure 2b). This is called stratified or jittered sampling.
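The jittered placement described above can be sketched as follows. This is an illustrative helper, not code from the paper, and for simplicity it assumes the sample count is a perfect square so that the grid is k by k:

```python
import math
import random

def jittered_samples(n, rng):
    """Place n samples on the unit pixel area, one uniformly jittered
    sample per cell of a k-by-k grid (n must be a perfect square)."""
    k = math.isqrt(n)
    assert k * k == n, "sample count must be a perfect square here"
    cell = 1.0 / k
    return [((i + rng.random()) * cell, (j + rng.random()) * cell)
            for i in range(k) for j in range(k)]

samples = jittered_samples(16, random.Random(3))
# Every cell of the 4x4 grid receives exactly one sample.
```

Compared with purely random placement, no cell is ever empty and no cell receives two samples, which removes the worst clumping and gaps.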

Even better sample distributions can be achieved using Poisson disc sampling (figure 2c). In Poisson disc sampling, points that are closer to each other than a certain limit are disallowed. A Poisson disc distribution can be generated by repeatedly picking a random candidate for the next sample point; the candidate is accepted as a sample point only if it satisfies the distance condition. The accepted point is then stored and the algorithm proceeds to find the next point.

Figure 2. Random, stratified and Poisson disc distributions. Each image contains 100 samples.

Rendered images often contain noise concentrated in certain areas while other areas are almost free of noise. This suggests that samples should be concentrated on the noisier areas and leads to adaptive sampling schemes, where the number of rays shot per pixel is no longer constant. Unfortunately, stratified sampling at least does not seem to be applicable with adaptive sampling, since stratified sampling requires that the number of samples is known beforehand. Kajiya proposes using k-d trees, which successively split the domain in two by a plane perpendicular to one of the coordinate axes. A simple approach would be to select some number of samples F that is a power of two and place those samples using stratified sampling. Then the pixel value is integrated. If the pixel is found to need more sampling, we place an additional F samples, estimate the error again, and so forth. The number of rays to shoot should be based on the relative error, which can be estimated using the standard deviation. For a set of samples x_i the standard deviation σ is calculated as follows:

σ = √( (1/N) Σ_{i=1}^N (x_i − µ)² )    (10)

where µ is the sample mean. If the standard deviation σ exceeds a certain user-set limit, the pixel needs more sampling.

4.4 Intersection calculation

The workhorse of any ray tracing algorithm is the core that calculates intersections.
It is especially important for a path tracer that this core is as efficient as possible, since the number of rays shot per pixel can run into the thousands. A path tracer, like a standard ray tracer, must find the nearest object intersected by a ray, as well as the intersection point and surface normal.
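As a sketch of such an intersection core, the following is a minimal nearest-hit ray-sphere test returning the distance, intersection point and normal. This is illustrative Python, not code from the paper; the function and variable names are invented:

```python
import math

def intersect_sphere(orig, direc, center, radius):
    """Return (t, point, normal) for the nearest intersection of the ray
    orig + t*direc (t > 0) with a sphere, or None if the ray misses."""
    oc = [o - c for o, c in zip(orig, center)]
    a = sum(d * d for d in direc)
    b = 2.0 * sum(d * o for d, o in zip(direc, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                      # ray misses the sphere
    sq = math.sqrt(disc)
    t = (-b - sq) / (2.0 * a)            # try the nearest root first
    if t <= 1e-9:
        t = (-b + sq) / (2.0 * a)        # ray origin inside the sphere
        if t <= 1e-9:
            return None                  # sphere entirely behind the ray
    point = [o + t * d for o, d in zip(orig, direc)]
    normal = [(p - c) / radius for p, c in zip(point, center)]
    return t, point, normal

hit = intersect_sphere((0, 0, -5), (0, 0, 1), (0, 0, 0), 1.0)
# Nearest hit at t = 4, on the front of the sphere at (0, 0, -1).
```

The small epsilon on t guards against self-intersection when secondary rays start exactly on a surface, a standard precaution in any ray or path tracer.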

A number of acceleration schemes have been devised for ray tracing, and they should be used for path tracing where applicable. Space division schemes such as octrees and bsp-trees, as well as bounding volumes, are especially useful. Complex objects should be enclosed in bounding volumes: if the ray does not intersect the bounding volume, the particular object or group of objects contained in the volume need not be tested against the ray.

4.5 Sampling light sources

The classical ray tracing model only supports point lights. This results in lighting that is unnatural; in particular, shadow boundaries are too sharp. Real lights always occupy a volume, and it is therefore much better to use area lights instead of point lights. Light energy can be gathered in Monte Carlo fashion by shooting a number of shadow rays from the point of interest (the intersection point) towards random points on the area light. Each unoccluded ray transfers light as if from a point light positioned at the sample point.

It is important to select the right number of shadow rays: too few, and the shadows will look grainy; too many leads to unnecessarily long rendering times. Several factors should be taken into account here. Naturally, a large bright light needs more sampling than a small dim light. Also, because faraway lights contribute less, the number of shadow rays cast should be proportional to the projected area of the light source. Stratified sampling can be used for selecting random sample points on rectangular area lights. The code for sampling a single area light is outlined below:

    sample_light()
        light = 0;
        for( sample = 0; sample < sample_count; sample++ )
            p = get_random_point();
            if( test_visibility( intersection_point, p ) )
                light = light + local_illumination() / sample_count;
        return light;

4.6 Russian roulette in path tracing

Because a path tracer never splits a ray, reflections and refractions must be handled differently than in a standard ray tracer.
A path tracer resorts to a particle-physics approximation and picks a random mode for each path segment: a ray can be reflected, transmitted or absorbed. If the Phong reflection model is used, lighting is split into diffuse and specular components.

We can use Russian roulette to select a random mode for the ray as follows:

    d = probability of diffuse reflection
    s = probability of specular reflection
    t = probability of transmission

    e = rand();   // returns a random number in the range [0,1]
    if( e < d )
        distribute the ray diffusely (Phong reflection model)
    else if( e < d + s )
        distribute the ray specularly (Phong reflection model)
    else if( e < d + s + t )
        transmission, calculate the refraction ray
    else
        terminate path

Probabilities d, s and t are material properties for which the following condition must hold: d + s + t ≤ 1. It is important to remember that when using Russian roulette, unlike in ray tracing, we must not reduce the weight of the ray by multiplying with the respective material coefficients.

4.7 Distributing the reflected ray direction

One way to initialize the next path segment is to pick a ray direction that is uniformly distributed on the hemisphere; the weight of this ray should then be scaled by the BRDF. Given two uniformly distributed random numbers ξ_1 and ξ_2 in the range [0,1], a uniform distribution on the hemisphere can be generated by:

(θ, φ) = (arccos(ξ_1), 2π ξ_2)    (11)

where (θ, φ) are the spherical coordinates of the ray.

5 EXTENSIONS TO THE BASIC ALGORITHM

5.1 Bidirectional path tracing

Because of the nature of light, it does not matter in which direction light paths are followed. While most lighting situations can be efficiently captured by tracing from the viewer towards the scene, certain cases, e.g. strong indirect lighting, are better handled by tracing from the light sources towards the viewer. This suggests a more robust light transport algorithm that is a hybrid between forward and backward tracing: bidirectional path tracing constructs one subpath starting from a light source and another from the viewer and connects them in the middle. More information about bidirectional path tracing can be found in [5] and [7].

5.2 Metropolis light transport

Metropolis light transport, introduced by Eric Veach in 1997, is a newer Monte Carlo approach for solving the rendering equation. Paths are constructed by selecting an initial light transport path which is then randomly mutated; a mutation might add a vertex to the path or otherwise change it. An acceptance probability for the mutated path is then carefully chosen, and the path is either accepted or rejected. Mutations that do not contribute to the final image, or contribute very little, are rejected. When an important path is found, nearby paths are good candidates for further mutations. The Metropolis light transport method can handle difficult lighting situations, such as strong indirect lighting and light passing through small geometric holes, and can be several orders of magnitude faster than standard path tracing algorithms.

6 RESULTS

A path tracer was written using some of the techniques presented in this paper. The intersection calculation core was able to handle planes, spheres, boxes and triangles as primitive types, as well as solid bsp-trees for complex models. Pixel filtering was done using stratified sampling. Poisson disc sampling was also tried, but it did not improve the image quality noticeably with the test scenes. The supported basic light source types were points and solid spheres; in addition, a special sky light model was used. Lights were sampled using uniform random sampling. The material properties were the diffuse and specular colors and coefficients. These properties were used to set up the probabilities for Russian roulette, which was used to pick a random mode for each path segment. Reflected rays were distributed uniformly on the hemisphere and outgoing rays were weighted appropriately.

Figure 3 contains images rendered using our path tracer. The first image consists of a bsp-tree model (the cow) and a ground plane, illuminated by a large spherical light.
The cow appears to be tinted red because some of the light bounces off the floor and hits the cow. The second image contains an enclosed room with a sphere standing on a pedestal; the pedestal is partly reflective. The scene is illuminated with a strong bluish light. The red color on the floor and the pedestal is reflected from a nearby wall behind the viewer: the bluish light is so strong that even the small fraction of it reflecting off the red wall is enough to cast a red glow on the scene.

Figure 3. Images created using the author's path tracer. Both images were rendered using 400 paths per pixel. Rendering time was about 3 hours for both images on a 233 MHz Intel Pentium II.

ACKNOWLEDGEMENTS

Many thanks to Jaakko Lehtinen, Eetu Martola and Mikko Kallinen for giving constructive criticism, discussing various implementation details and proofreading the paper.

REFERENCES

[1] Apodaca A. A. and Gritz L. 2000. Advanced RenderMan. San Diego, CA: Academic Press. p. 12.
[2] Arvo J. and Kirk D. B. 1990. Particle Transport and Image Synthesis. Proceedings of SIGGRAPH 90, August 1990, pp. 63-66.
[3] Cohen M. F. and Wallace J. R. 1993. Radiosity and Realistic Image Synthesis. San Diego, CA: Academic Press.
[4] Cook R. L., Porter T. and Carpenter L. 1984. Distributed Ray Tracing. Proceedings of SIGGRAPH 84, July 1984, pp. 137-145.
[5] Jensen H. W. 2001. Realistic Image Synthesis Using Photon Mapping. Natick, MA: A K Peters, Ltd.
[6] Kajiya J. T. 1986. The Rendering Equation. Proceedings of SIGGRAPH 86, August 1986, pp. 143-150.
[7] Veach E. 1997. Robust Monte Carlo Methods for Light Transport Simulation. PhD thesis, Stanford University.
[8] Whitted T. 1980. An Improved Illumination Model for Shaded Display. Communications of the ACM, pp. 343-349.