Three-Dimensional Data Recovery Using Image-Based Modeling


Jeremy W. Cannon, Jonathan C. Derryberry, Vitaly Y. Kulikov
6.837: Introduction to Computer Graphics, Massachusetts Institute of Technology
Final Project Report, Team 13, December 6, 2002

Abstract

Extraction of a three-dimensional model from a set of two-dimensional projections is a well-known problem in contemporary computer science. Termed image-based modeling, solutions to this problem have a number of practical applications, ranging from virtual tours and image recognition to the generation of physical models from image data. However, the problem remains the subject of active research, as it has not yet been solved in the general case. Although the general case has proven very challenging, there are certain special cases in which a satisfactory solution can be achieved with minimal human intervention. The following report describes our approach to the problem of inferring geometric information from a photographic image. A detailed description of our algorithm and its implementation is provided, along with sample results demonstrating the capabilities of this approach.

I. Introduction

Since the initial work of Horn in 1970 [1], the use of photographic images for constructing physical models has evolved into a range of new disciplines in the fields of computer graphics and computer vision. This classical work has been termed shape from shading, as it uses the reflectance equation (1) to relate image brightness I to the surface normal N:

    I = R(p, q) = ρ (N · L)    (1)

where R(p, q) is the reflectance function in terms of the surface gradient, ρ is the composite albedo, and L is the light source direction. To derive the surface normals, the radiosity at a point P on the surface of the object is given by (2):

    B(P) = ρ(P) N(P) · L    (2)

where ρ(P) is the surface albedo, N is the surface normal, and L is the light source vector.
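Equation (1) is straightforward to evaluate directly. The sketch below (plain Python; the albedo, normal, and light direction are illustrative values of our choosing, not from the report) computes the brightness of a single Lambertian point:

```python
import math

def lambertian_intensity(albedo, normal, light):
    """Eq. (1): I = rho * (N . L), clamped at zero for self-shadowed points."""
    n_dot_l = sum(n * l for n, l in zip(normal, light))
    return albedo * max(0.0, n_dot_l)

# Illustrative values: a unit normal tilted 60 degrees away from a
# head-on unit light direction L = (0, 0, 1), with albedo 0.8.
theta = math.radians(60)
N = (math.sin(theta), 0.0, math.cos(theta))
L = (0.0, 0.0, 1.0)
I = lambertian_intensity(0.8, N, L)
print(round(I, 3))  # 0.8 * cos(60 deg) = 0.4
```

The clamp at zero reflects the fact that points facing away from the light receive no direct illumination.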
Assuming the camera response is linear with respect to the surface radiosity, the intensity value of each pixel can be written as

    I(x, y) = k B(x, y) = k ρ(x, y) N(x, y) · L = g(x, y) · V    (3)

where k is the constant relating camera response to surface radiance, making V a vector containing elements related to both the scene lighting and the camera. Although the surface normal is not uniquely

determined in this expression, it can be obtained by assuming a convex surface. Because N is a unit normal, ρ(x, y) is simply the 2-norm of the vector g(x, y). Thus, N can be found as

    N(x, y) = g(x, y) / ‖g(x, y)‖₂    (4)

A surface model can then be determined from this reference normal by recognizing that the normal can also be written as a homogeneous vector in the surface gradient:

    N(x, y) = (1 / √(1 + f_x² + f_y²)) [−f_x, −f_y, 1]ᵀ    (5)

where f(x, y) is the equation of the parameterized surface, which can then be integrated over x and y to yield the final model.

Subsequent work by Chen and Williams, McMillan, Debevec, and others has spawned the field of Image-Based Modeling and Rendering (IBMR), which seeks to enhance the realism of computer graphics scenes by extracting environmental information about a scene from photographs [2, 3]. This environmental information typically goes far beyond the derivation of realistic geometry to include new approaches to visibility, modeling view-dependent variations in the appearance of materials, and the extraction of more accurate lighting models for complex scenes [3]. Indeed, many of these new approaches to model generation view photographic images as measurements which can inform the realism of any given scene.

Although the general concepts of shape from shading have been studied for decades, the field remains an active research discipline due to the wide range of complex issues uncovered as work has progressed. Examples include variable albedo within an object, which confounds the relationship expressed in Equation (1) [4]; interreflections, which lead to dramatically different appearance from that predicted by local lighting models [5]; and ambiguous geometries, which cannot be resolved from shading alone [6].
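The normalization of Equation (4) and the gradient form of Equation (5) can be checked numerically. The sketch below (plain Python; the test plane and the albedo of 0.5 are illustrative values of our choosing, and the signs follow the convention that the normal points toward +z for the surface z = f(x, y)) recovers the albedo and unit normal from g = ρN, and builds a normal from surface gradients:

```python
import math

def normal_and_albedo(g):
    """Eq. (4): rho is the 2-norm of g = rho*N, and N = g / |g|."""
    rho = math.sqrt(sum(c * c for c in g))
    return tuple(c / rho for c in g), rho

def normal_from_gradient(fx, fy):
    """Eq. (5): N = (-fx, -fy, 1) / sqrt(1 + fx^2 + fy^2)."""
    w = math.sqrt(1.0 + fx * fx + fy * fy)
    return (-fx / w, -fy / w, 1.0 / w)

# Illustrative check: a 45-degree plane z = x has gradient (1, 0).
N = normal_from_gradient(1.0, 0.0)
print([round(c, 4) for c in N])   # [-0.7071, 0.0, 0.7071]
g = tuple(0.5 * c for c in N)     # scale by an albedo of 0.5
N2, rho = normal_and_albedo(g)
print(round(rho, 4))              # 0.5: the albedo is recovered exactly
```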
In summary, a general solution to the complete extraction of three-dimensional geometric and environmental data has not been described, in part because of the complexity of the problem and in part because of the diversity of subject matter and modeling objectives among those employing the techniques of IBMR.

II. Goals

2.1 Image-Based Modeling

A variety of techniques for solving image-based modeling problems have been developed since the original methods described by Horn [1, 7]. More recent techniques include using an array of silhouettes to reconstruct object geometry, using surface curves from object profiles to create the model geometry, and using stereoscopic imaging to extract a so-called depth map of the object [8]. In this project, we aimed to reconstruct a graphical model of physical objects from photographic images of the object. Our goals for this phase included:

- Implementing an algorithm that identifies the boundaries of the two-dimensional projection of the model and ensures that the RGB values of pixels within those boundaries are smooth functions of their position.
- Implementing an algorithm that, given one or more reference normals, scans the area within the boundaries of the preprocessed projection and restores the depth and the normal direction at each vertex of the generated 3D model.

- Implementing an algorithm that generates a complete model from a set of two or more partial models, where each partial model corresponds to one two-dimensional projection (only if enough time remained in the term).

2.2 Using the Generated Model

Once the model is extracted, it needs to be rendered and, if the results are desirable, exported for use in other applications. Therefore, we needed to develop a flexible interface for rendering the model, as well as the option to convert the extracted model to a universal format. Our goals for this phase included:

- Providing the user with simple tools to control different parameters of the image-based modeling process, such as the granularity of the model (i.e., the number of pixels in the image per vertex in the model).
- Providing the user with simple tools to view the model from different directions and distances, to determine the quality of the model in real time before exporting.
- Providing the user with a simple tool to save the model into VRML and/or Open Inventor file formats for easy access from other applications (whereby the generated model could be edited as needed).

III. Achievements

3.1 General Approach

As our initial approach to this problem, we assumed a single, diffuse (Lambertian) object with a uniform albedo, illuminated by a point light source and imaged using an orthographic projection. After generating several synthetic (Open Inventor) images satisfying these constraints, we began to develop an algorithm to determine a reference normal based on intensity values within the image. In particular, the reference normal was assumed to reside at the brightest pixel of the image, pointing in the positive z direction. From this normal, the entire field of normals was derived, which could then be integrated to obtain a surface mesh. The coordinates in the mesh and their corresponding normals were then used to create 3D geometry to be rendered next to the original image.
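The reference-normal selection described above amounts to an arg-max over pixel intensities. A minimal sketch (plain Python; the toy intensity grid is our own invention, not one of the report's images):

```python
# Find the brightest pixel of an intensity image; under the report's
# assumptions (convex Lambertian object, light roughly along the view
# direction) its normal is taken to point in the positive z direction.
def reference_pixel(image):
    best, best_xy = -1.0, None
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            if value > best:
                best, best_xy = value, (x, y)
    return best_xy

toy = [
    [0.10, 0.30, 0.20],
    [0.40, 0.95, 0.50],   # brightest pixel at x=1, y=1
    [0.20, 0.60, 0.30],
]
print(reference_pixel(toy))       # (1, 1)
REFERENCE_NORMAL = (0.0, 0.0, 1.0)
```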
The user could then adjust the view of the model (orientation and zoom) to inspect the model before exporting the result to an Open Inventor (*.iv) file.

[Figure 1 flowchart: Image Acquisition (synthetic image during the development phase, or photograph) → Filtering → Determine Reference Normal(s) (with optional user-specified concavity) → Compute Field of Normals → Generate a Surface Mesh → Model Rendering → Model Export]

Figure 1. Stepwise approach to model generation from photographic data. Synthetic images refer to idealized scenes generated using Open Inventor which were used to test our algorithm. Steps in double frames indicate those integrated into a user interface.

Once the algorithm for model extraction was validated using synthetic images, we

then applied this approach to digital photographs obtained from scenes designed to meet the above constraints as closely as possible, so that there were no sharp edges, specular color components, or variations in surface color. We then applied this approach to a range of basic objects to test the robustness and stability of the algorithm. This sequence of steps is summarized in Figure 1, and the results of our analysis are presented below.

3.2 Physical Models & Lighting

Synthetic images for initial development and testing of our algorithm were obtained using SceneViewer. Models of single, monochromatic, diffuse objects (with the complexity node increased to eliminate surface irregularities) were illuminated with a single directional light source and presented with an orthographic projection. The SceneViewer window was then converted to a *.jpg image with minimal compression using XV 3.10a. Re-creating this environment with physical models required some approximations; however, the setup shown in Figure 2 gave images which were acceptable for use by our algorithm. In this setup, a directional light source is simulated by a single incandescent 100 W clear bulb placed 6 feet from the scene, permitting the assumption that the light rays were nearly parallel when they hit the objects. All images were obtained with only this light source illuminating the scene, which was contained in a chamber lined with black felt to further reduce any contribution from ambient light. A diffuse surface was achieved by using cardboard models or models of modeling clay with a slightly roughened surface. All of these modeled objects were monochromatic with a uniform or nearly uniform albedo. The most difficult constraint to approximate was an orthographic projection, as we had to balance camera resolution against separation from the viewing chamber. A reasonable compromise between these parameters was achieved by fixing the camera viewpoint at 18 inches from the chamber with no zoom.
All of our objects were no more than 3 inches in diameter and were centered in the camera field of view, thereby giving a close approximation to an orthographic projection.

[Figure 2 diagram: point light source of radius ε at a distance d ≫ ε from the object, with the camera viewing a felt-lined box]

Figure 2. Setup for image acquisition with a simulated point light source (incandescent bulb placed far away from the object) and orthographic projection obtained as well as possible. The black box containing the object represents a felt-lined box designed to minimize ambient light.

3.3 Image Pre-processing

Like estimation of surface curvature from photographic images, estimation of surface normals is highly noise sensitive [9]. To ensure the best possible model estimation, we implemented a set of image filters in our Java user interface to smooth the intensity curves while preserving the underlying shape as much as possible. In the frequency domain, we assumed that geometry causes low-frequency variations in image intensity, while detailed features of an image, such as edges, textures, and noise, are generally

higher-frequency components of an image [10]. On this basis, we implemented several types of lowpass filters for the user to select for the purpose of minimizing image noise. These filters included a Gaussian kernel filter, a mean filter (which uses a normalized uniform kernel), a median filter, and a minimum filter. For the latter three filters, the user inputs the size by setting the window radius in a dialog box. Sample results of the average filter are shown in Figure 3. Our incorporation of the median filter is based on work by Tsai and Shah [4]; it sets the intensity of pixel (i, j) to the median value of the neighboring n × n pixels. The minimum filter assigns each pixel the smallest intensity value in the surrounding n × n pixels. Finally, the user has the option to skip the filtering step by selecting the blank filter prior to the model creation step.

Figure 3. Intensity profiles for a green sphere showing the native intensity values for the green channel (A) and the filtered intensity values using a 50x50 average filter (B).

3.4 Determining Surface Normals from Shading Values

Using a monocular viewpoint and a directional light source permits extraction of only a partial model of 3-D objects due to self-shadowing. However, under the constraints outlined above, this partial model can be rather convincing, and by combining multiple partial models from registered images, the complete three-dimensional geometry can be reproduced. This section presents our approach to determining the surface normals of an object based on image intensity values, from which a single partial model of the object is generated.

3.5 Model Creation

a. Preliminaries

Calculating the field of normal vectors over a surface proves a difficult problem because concavity/convexity can vary across the photographed object, and the result is an ambiguous picture.
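This shading ambiguity is easy to demonstrate numerically. Under a head-on light L = (0, 0, 1), the Lambertian intensity depends only on the magnitude of the surface gradient, so a surface f and its mirror image −f (a bump and a dimple) shade identically. A small sketch under these assumptions (the gradient values are arbitrary illustrations):

```python
import math

def intensity(fx, fy, albedo=1.0):
    # Lambertian shading with L = (0, 0, 1) and the gradient-form normal:
    # I = albedo * N_z = albedo / sqrt(1 + fx^2 + fy^2).
    return albedo / math.sqrt(1.0 + fx * fx + fy * fy)

# Gradient of a convex cap and of its concave mirror image at one point.
fx, fy = 0.6, -0.2
print(round(intensity(fx, fy), 6))    # bump
print(round(intensity(-fx, -fy), 6))  # dimple: identical intensity
```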
For example, a bowl and a sphere may have the same light intensities at each point, but they are obviously different shapes. More generally, the concavity can differ in any arbitrary direction: a surface can be concave in the x direction while being convex in the y direction. One approach to resolving this ambiguity would be to use many pictures of the object from different viewpoints. However, given the time constraints, the complexity of such an algorithm was judged too great. Moreover, extracting the model from a single image is an interesting problem in and of itself. Therefore, we made a design decision to use just one picture but to assume convexity everywhere. Obviously, this would excessively constrain the range of geometry the algorithm could successfully extract, so a set of tools was implemented in the user interface that allows the specification of regions in

which the object is concave. To specify concavity, the user can draw, move, and delete any number of polygonal regions to indicate that a particular region of the screen has concave underlying geometry. Moreover, the user can make such specifications applicable to only a particular direction, either x or y. Thus, the user can specify certain regions as concave in the x direction while specifying other regions as concave in the y direction. For convenience, the user may specify concavity in both the x and y directions with a single polygon. Even with such flexible concavity specification, there are pathological objects whose concavity cannot be specified. Consider the graph of f(x, y) = xy, which is linear along both the x and y directions yet curved along the diagonals, so no combination of x- and y-concavity specifications captures its shape. However, giving the user additional freedom to specify arbitrary convexity would burden the user with too many choices, and it would complicate the model extraction algorithm. Therefore, the user was only allowed to specify concavity in the x and y directions.

b. Mesh Generation

Equation (5) describes the relationship between surface normals and a parameterized equation of the surface, which can be used to generate a mesh reconstructing the geometry of the object. The algorithm that we use to reconstruct the surface consists of two separate steps. During step one, a separate partial 3D model is generated for each reference normal. During step two, the partial 3D models from the previous step are processed to form a more precise, average partial 3D model of the object in question. Let us consider each step separately.

The process of generating a partial 3D model for a given reference normal can in turn be subdivided into two stages.
During the first stage, the mesh of surface normals is recovered using the reference normal and the light intensity at each point of the surface. During the second stage, this field of normals is used to recover the z value, or depth, at each vertex in the mesh. While recovering the z-coordinate of each vertex is a comparatively simple procedure once the field of normals is built, restoring the field of normals proved challenging. Given a reference normal at the most illuminated point of the image, the geometric set of vectors that satisfy the illumination equation above forms a cone. Among those vectors we need to choose the one that satisfies the convexity requirement at that part of the surface and that is consistent with the directions of nearby normals. We consider only relatively smooth surfaces and can therefore expect the change between any two neighboring normals to be small. To determine the normal at a particular vertex in the mesh, we sample the intensity values of nearby pixels to find the direction in which the absolute change in intensity is largest. Of course, this direction cannot always be determined precisely due to a certain amount of noise, but it can be determined well enough to avoid large errors in the normal direction. Moreover, this direction is not necessarily the direction in which the normal should point (consider, for example, a cylinder slanted in the z direction). In practice, however, this assumption proved to generate reasonable normals for a wide family of objects. Once we determine the direction of the maximum absolute change in intensity, we can narrow the potential normals to the two that lie within the vertical plane containing that direction.
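The two candidates can be written down explicitly in a simplified setting. Assuming a head-on light L = (0, 0, 1) and unit albedo (assumptions made for this sketch, not the general case), the illumination equation fixes N_z = I, and the two candidates lie in the vertical plane through the intensity-gradient direction d:

```python
import math

def candidate_normals(I, d):
    """Given an intensity I (0 < I <= 1) and a unit 2-D direction d of
    maximum intensity change, return the two unit normals on the
    illumination cone that lie in the vertical plane containing d
    (assumes L = (0, 0, 1) and albedo = 1)."""
    horiz = math.sqrt(max(0.0, 1.0 - I * I))
    n1 = (horiz * d[0], horiz * d[1], I)
    n2 = (-horiz * d[0], -horiz * d[1], I)
    return n1, n2

n1, n2 = candidate_normals(0.8, (1.0, 0.0))
print([round(c, 3) for c in n1])  # [0.6, 0.0, 0.8]
print([round(c, 3) for c in n2])  # [-0.6, 0.0, 0.8]
```

The two results differ only in the sign of the horizontal component, which is exactly the convex-versus-concave choice the algorithm must resolve.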
Then, to choose between the two normals, we only need to check which is consistent both with the convexity of the surface in the region and with the directions of already-computed normals nearby. This can be done by analyzing how parallel the direction of the maximum absolute change in intensity is to the x- and y-axes and to the direction of the closest neighboring vertices in the mesh. The geometry of this portion of our algorithm is shown in Figure 4. Once we have the field of normals, recovering the z-coordinate of each vertex in the mesh is simple. If the unit normal at some point (x, y) is (a(x, y), b(x, y), c(x, y)) and z(x, y) is the function of the surface, then it is easy to see that:

    ∂z/∂x = −a(x, y) / c(x, y)  and  ∂z/∂y = −b(x, y) / c(x, y)    (6)

[Figure 4 diagram: x, y, z coordinate axes with the sampled intensity I(x, y)]

Figure 4. Geometric relationship between the reference normal (red) and a neighboring point where the normal at that point (x, y) is being computed. Two possible normals are identified (black), which our algorithm then evaluates to determine which is correct.

As a result, if we assume that the z-value at some reference point P_r is 0.0, the z-value at any other point P_d will be the line integral of (∂z/∂x, ∂z/∂y) along some path from P_r to P_d. In theory, the z-values obtained from different paths should be the same. However, because the field of normals that we built is not completely error-free, the z-values obtained along different paths differ slightly. To keep them as close to correct as possible, we compute the z-value at each point through multiple paths and then take the average. If we assume that all errors are independent random variables, averaging should reduce the amount of error.

Once a separate 3D model has been computed for each of the given reference normals, we need to process these partial models to build a more precise, average model. There are many ways to do this. One possibility is to choose some reference normal M and the coordinate system associated with it as the main one, take the z-values of the origins associated with the other reference normals, and for each vertex compute its average depth using the following formula:

    Z_M(x, y) = ([Z_M1 + Z_1(x, y)] + [Z_M2 + Z_2(x, y)] + … + [Z_MN + Z_N(x, y)]) / N    (7)

In (7), Z_MJ stands for the z-value of the origin of reference normal J in the main coordinate system, Z_J(x, y) denotes the z-value at the point (x, y) of the surface in the coordinate system associated with reference normal J, and N denotes the total number of reference normals. The algorithm described above is the final result of the team's multiple attempts to find the best way of solving the problem.
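The depth-recovery step can be sketched as follows, assuming the sign convention ∂z/∂x = −a/c, ∂z/∂y = −b/c from Equation (6): starting from z = 0 at a reference vertex, z is accumulated along a path through the normal field, and several paths are averaged to suppress independent errors. The grid, paths, and normals below are illustrative only:

```python
import math

def depth_along_path(normals, path):
    """Integrate dz/dx = -a/c, dz/dy = -b/c (Eq. 6) along a unit-step path.
    `normals[y][x]` is a unit normal (a, b, c); `path` is a list of (x, y)
    grid points starting at the reference vertex, where z is taken as 0."""
    z = 0.0
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        a, b, c = normals[y0][x0]
        z += -(a / c) * (x1 - x0) - (b / c) * (y1 - y0)
    return z

# A tilted plane z = -x has the constant normal (1, 0, 1)/sqrt(2), so the
# recovered depth should drop by 1 per unit step in x, whichever path is taken.
n = (1 / math.sqrt(2), 0.0, 1 / math.sqrt(2))
normals = [[n] * 3 for _ in range(3)]
paths = [[(0, 0), (1, 0), (2, 0)],
         [(0, 0), (1, 0), (1, 1), (2, 1), (2, 0)]]
zs = [depth_along_path(normals, p) for p in paths]
print(zs)                 # both paths give -2.0
print(sum(zs) / len(zs))  # averaged estimate (the idea of Eq. 7 in miniature)
```

On a noisy normal field the per-path results would disagree, and the average is kept as the depth estimate.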
Many of the algorithms that we considered first proved unsuccessful. For instance, there are many possible approaches to building a field of normals from a reference normal and the light intensity at each point of the surface. One way would be to average two or more already-computed normals nearby and then choose whichever of the two candidate normals forms the smaller angle with that average. This algorithm is very simple to implement, is much less sensitive to the order in which vertices within the mesh are processed, and does not involve many computations. Unfortunately, while it worked perfectly on non-flat surfaces, it made too many errors on flat ones, where the two normals to choose from at each vertex were very close to each other.

Another version of the algorithm that we implemented but eventually rejected was similar to our final algorithm but used only one of the normals nearby. If the already-computed nearby normal belonged to a vertex located along the x-axis of the mesh relative to the vertex being processed, we used our knowledge of the surface convexity in the x direction; otherwise, we used our knowledge of the surface convexity in the y direction to choose between the two possible normals at the vertex. This algorithm was also less sensitive to the order in which vertices in the mesh were processed, and it worked on most of the surfaces we considered. However, it failed on cylinder-like surfaces, where there can be no change in the normal direction along one axis but a large change along the other.

3.6 Model Viewing

Following computation of the parametric surface equation and mesh fitting, the user can manipulate the resulting model in a number of ways, including rotated viewpoints and altered magnification (zoom), as in the SceneViewer application. To display the 3D geometry, Java 3D was used. A simple interface was defined for the model extraction algorithm to satisfy: the extraction algorithm was required to provide a two-dimensional array of points for the coordinates and a two-dimensional array of per-vertex normals, which were used for Gouraud shading a TriangleStripArray built from the points. To give the user flexibility in viewing the model, a key listener was added to allow the user to rotate and recenter the model, in addition to the buttons provided for those functions.

IV. Description of Deliverables

The following figures demonstrate the abilities of our integrated model extraction system, which takes a single image as input and derives a partial model of the three-dimensional geometry, which it then renders in an adjacent window for viewing by the user.
Final adjustments can then be made using the control buttons on the user interface before the model is exported.

Figure 5. User interface showing a filtered picture of a cylinder (left) and the resulting partial model constructed using the algorithm described above.

Figure 6. Demonstration of the effect of image filtering on model generation. (A) Partial model extracted from an unfiltered photograph of an egg. (B) Improved partial model of the egg after use of an average filter to reduce high-frequency noise components while preserving the geometry of the object.

Figure 7. Demonstration of partial model extraction of a concave surface (original image on the left). (A) With the concave region not specified, the algorithm assumes a convex surface. (B) With user-specified concavity, the model is correctly generated.

V. Individual Contributions

Producing this integrated model extraction system required extensive background reading on IBMR and shape-from-shading analysis, followed by systematic planning of the sequence of steps required to yield a functional result. To this end, we divided the project into four elements, with one team member primarily responsible for each of the first three and all team members participating in the last: (1) physical model and image acquisition (Jeremy Cannon), (2) surface normal determination and mesh generation (Vitaly Kulikov), (3) user interface for image processing and model rendering (Jonathan Derryberry), and (4) integration of these elements into a single system (all members).

Specifically, Jeremy Cannon performed the following tasks in support of this project:

- Generating suitable synthetic images, using Open Inventor for primitives and MATLAB for irregular surfaces
- Setting up the environment for image acquisition, including approximation of the modeling constraints
- Designing and testing the image processing filters
- Preparing and integrating the project documentation, including the project proposal, final report, and presentation

Vitaly Kulikov performed these specific tasks for this project:

- Converting indexed images to RGB images
- Deriving surface reference normal(s) from image intensity values
- Generating a field of normals from the derived reference normal(s)
- Producing a smooth, continuous surface mesh from these normals
- Writing the algorithm for exporting to Open Inventor

Finally, Jonathan Derryberry supported this project with the following contributions:

- Image display in a Java-based user interface
- Integration of the image filters, model generation, and model exporting into this interface
- Model rendering using Java 3D
- Working with Vitaly to improve the model extraction algorithm

VI. Lessons Learned

This project taught us a great deal about the complexities of using images as measurements.
It also gave us great appreciation for the enormous complexity of the problems that investigators such as Leonard McMillan, Paul Debevec, and Takeo Kanade are currently tackling, and for the incredible insight of Horn's groundbreaking work in the early 1970s. Although we had hoped to synthesize a complete three-dimensional model from stereo image pairs, this did not prove possible given the time constraints of the project and the significant increase in complexity over partial model extraction. However, in producing this system for extracting partial three-dimensional models, our knowledge increased greatly in the following specific ways:

- Appreciation of image processing techniques specific to using images as environmental measurements
- Understanding of the constraints required to extract precise geometric data from a physical scene
- Understanding of the mathematical basis for model extraction from single images

- Knowledge of Java 3D and the supporting mathematical libraries required for image processing and manipulation
- Engineering experience in choosing algorithms that may not be correct in general but provide adequate functionality without excessive complexity and computational cost, so that a rich set of surface geometry could be extracted reliably in a reasonable amount of time

Acknowledgments

We would like to acknowledge the insights of Dr. Doug Perin, who inspired this project and offered specific suggestions on optimizing the image acquisition setup. In addition, Addy Ngan was very helpful in keeping us on schedule and in offering suggestions on debugging our algorithm.

Bibliography

1. Horn BKP. Shape from shading: a method for obtaining the shape of a smooth opaque object from one view. PhD Thesis, MIT.
2. Chen SE, Williams L. View interpolation for image synthesis. SIGGRAPH.
3. Debevec P, McMillan L. Image-based modeling, rendering, and lighting. IEEE Computer Graphics and Applications, Mar/Apr.
4. Tsai P-S, Shah M. Shape from shading with variable albedo. Opt Eng 37(4).
5. Forsyth D, Ponce J. Sources, Shadows, and Shading. In: Computer Vision: A Modern Approach. Prentice Hall, NJ.
6. Horn BKP. Impossible shaded images. IEEE Transactions on Pattern Analysis and Machine Intelligence 15(2).
7. Horn BKP. Height and gradient from shading. Int J Comput Vision 5(1).
8. Szeliski R. From images to models (and beyond): a personal retrospective. In: Vision Interface, Kelowna, British Columbia, May.
9. Fan T-J. Surface Segmentation and Description. In: Describing and Recognizing 3-D Objects Using Surface Properties. Springer-Verlag, New York.
10. Gonzalez RC, Woods RE. Image Enhancement. In: Digital Image Processing. Addison-Wesley Publishing Company, Reading, MA.

Appendix: Compilation Instructions

The source code for our ModelBuilder UI is located in the following directory:

    /afs/athena.mit.edu/user/j/o/jonderry/public/

The test images are contained in:

    /afs/athena.mit.edu/user/j/o/jonderry/public/ivpics
    /afs/athena.mit.edu/user/j/o/jonderry/public/photos

To execute this program, use the code contained in the first directory on a machine with Java and the Java 3D libraries.


HIGH AND LOW RESOLUTION TEXTURED MODELS OF COMPLEX ARCHITECTURAL SURFACES

HIGH AND LOW RESOLUTION TEXTURED MODELS OF COMPLEX ARCHITECTURAL SURFACES HIGH AND LOW RESOLUTION TEXTURED MODELS OF COMPLEX ARCHITECTURAL SURFACES E. K. Stathopoulou a, A. Valanis a, J. L. Lerma b, A. Georgopoulos a a Laboratory of Photogrammetry, National Technical University

More information

Highlight Removal by Illumination-Constrained Inpainting

Highlight Removal by Illumination-Constrained Inpainting Highlight Removal by Illumination-Constrained Inpainting Ping Tan Stephen Lin Long Quan Heung-Yeung Shum Microsoft Research, Asia Hong Kong University of Science and Technology Abstract We present a single-image

More information

A System for Capturing High Resolution Images

A System for Capturing High Resolution Images A System for Capturing High Resolution Images G.Voyatzis, G.Angelopoulos, A.Bors and I.Pitas Department of Informatics University of Thessaloniki BOX 451, 54006 Thessaloniki GREECE e-mail: pitas@zeus.csd.auth.gr

More information

Computer Animation: Art, Science and Criticism

Computer Animation: Art, Science and Criticism Computer Animation: Art, Science and Criticism Tom Ellman Harry Roseman Lecture 12 Ambient Light Emits two types of light: Directional light, coming from a single point Contributes to diffuse shading.

More information

An introduction to Global Illumination. Tomas Akenine-Möller Department of Computer Engineering Chalmers University of Technology

An introduction to Global Illumination. Tomas Akenine-Möller Department of Computer Engineering Chalmers University of Technology An introduction to Global Illumination Tomas Akenine-Möller Department of Computer Engineering Chalmers University of Technology Isn t ray tracing enough? Effects to note in Global Illumination image:

More information

SkillsUSA 2014 Contest Projects 3-D Visualization and Animation

SkillsUSA 2014 Contest Projects 3-D Visualization and Animation SkillsUSA Contest Projects 3-D Visualization and Animation Click the Print this Section button above to automatically print the specifications for this contest. Make sure your printer is turned on before

More information

A Learning Based Method for Super-Resolution of Low Resolution Images

A Learning Based Method for Super-Resolution of Low Resolution Images A Learning Based Method for Super-Resolution of Low Resolution Images Emre Ugur June 1, 2004 emre.ugur@ceng.metu.edu.tr Abstract The main objective of this project is the study of a learning based method

More information

DYNAMIC RANGE IMPROVEMENT THROUGH MULTIPLE EXPOSURES. Mark A. Robertson, Sean Borman, and Robert L. Stevenson

DYNAMIC RANGE IMPROVEMENT THROUGH MULTIPLE EXPOSURES. Mark A. Robertson, Sean Borman, and Robert L. Stevenson c 1999 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or

More information

REAL-TIME IMAGE BASED LIGHTING FOR OUTDOOR AUGMENTED REALITY UNDER DYNAMICALLY CHANGING ILLUMINATION CONDITIONS

REAL-TIME IMAGE BASED LIGHTING FOR OUTDOOR AUGMENTED REALITY UNDER DYNAMICALLY CHANGING ILLUMINATION CONDITIONS REAL-TIME IMAGE BASED LIGHTING FOR OUTDOOR AUGMENTED REALITY UNDER DYNAMICALLY CHANGING ILLUMINATION CONDITIONS Tommy Jensen, Mikkel S. Andersen, Claus B. Madsen Laboratory for Computer Vision and Media

More information

Digital Image Fundamentals. Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr

Digital Image Fundamentals. Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr Digital Image Fundamentals Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr Imaging process Light reaches surfaces in 3D. Surfaces reflect. Sensor element receives

More information

Computer Graphics. Geometric Modeling. Page 1. Copyright Gotsman, Elber, Barequet, Karni, Sheffer Computer Science - Technion. An Example.

Computer Graphics. Geometric Modeling. Page 1. Copyright Gotsman, Elber, Barequet, Karni, Sheffer Computer Science - Technion. An Example. An Example 2 3 4 Outline Objective: Develop methods and algorithms to mathematically model shape of real world objects Categories: Wire-Frame Representation Object is represented as as a set of points

More information

Colour Image Segmentation Technique for Screen Printing

Colour Image Segmentation Technique for Screen Printing 60 R.U. Hewage and D.U.J. Sonnadara Department of Physics, University of Colombo, Sri Lanka ABSTRACT Screen-printing is an industry with a large number of applications ranging from printing mobile phone

More information

H.Calculating Normal Vectors

H.Calculating Normal Vectors Appendix H H.Calculating Normal Vectors This appendix describes how to calculate normal vectors for surfaces. You need to define normals to use the OpenGL lighting facility, which is described in Chapter

More information

Computer Graphics Global Illumination (2): Monte-Carlo Ray Tracing and Photon Mapping. Lecture 15 Taku Komura

Computer Graphics Global Illumination (2): Monte-Carlo Ray Tracing and Photon Mapping. Lecture 15 Taku Komura Computer Graphics Global Illumination (2): Monte-Carlo Ray Tracing and Photon Mapping Lecture 15 Taku Komura In the previous lectures We did ray tracing and radiosity Ray tracing is good to render specular

More information

3D Scanner using Line Laser. 1. Introduction. 2. Theory

3D Scanner using Line Laser. 1. Introduction. 2. Theory . Introduction 3D Scanner using Line Laser Di Lu Electrical, Computer, and Systems Engineering Rensselaer Polytechnic Institute The goal of 3D reconstruction is to recover the 3D properties of a geometric

More information

A Proposal for OpenEXR Color Management

A Proposal for OpenEXR Color Management A Proposal for OpenEXR Color Management Florian Kainz, Industrial Light & Magic Revision 5, 08/05/2004 Abstract We propose a practical color management scheme for the OpenEXR image file format as used

More information

Robust NURBS Surface Fitting from Unorganized 3D Point Clouds for Infrastructure As-Built Modeling

Robust NURBS Surface Fitting from Unorganized 3D Point Clouds for Infrastructure As-Built Modeling 81 Robust NURBS Surface Fitting from Unorganized 3D Point Clouds for Infrastructure As-Built Modeling Andrey Dimitrov 1 and Mani Golparvar-Fard 2 1 Graduate Student, Depts of Civil Eng and Engineering

More information

Recovering Primitives in 3D CAD meshes

Recovering Primitives in 3D CAD meshes Recovering Primitives in 3D CAD meshes Roseline Bénière a,c, Gérard Subsol a, Gilles Gesquière b, François Le Breton c and William Puech a a LIRMM, Univ. Montpellier 2, CNRS, 161 rue Ada, 34392, France;

More information

2: Introducing image synthesis. Some orientation how did we get here? Graphics system architecture Overview of OpenGL / GLU / GLUT

2: Introducing image synthesis. Some orientation how did we get here? Graphics system architecture Overview of OpenGL / GLU / GLUT COMP27112 Computer Graphics and Image Processing 2: Introducing image synthesis Toby.Howard@manchester.ac.uk 1 Introduction In these notes we ll cover: Some orientation how did we get here? Graphics system

More information

NEW MEXICO Grade 6 MATHEMATICS STANDARDS

NEW MEXICO Grade 6 MATHEMATICS STANDARDS PROCESS STANDARDS To help New Mexico students achieve the Content Standards enumerated below, teachers are encouraged to base instruction on the following Process Standards: Problem Solving Build new mathematical

More information

An Iterative Image Registration Technique with an Application to Stereo Vision

An Iterative Image Registration Technique with an Application to Stereo Vision An Iterative Image Registration Technique with an Application to Stereo Vision Bruce D. Lucas Takeo Kanade Computer Science Department Carnegie-Mellon University Pittsburgh, Pennsylvania 15213 Abstract

More information

Lighting Estimation in Indoor Environments from Low-Quality Images

Lighting Estimation in Indoor Environments from Low-Quality Images Lighting Estimation in Indoor Environments from Low-Quality Images Natalia Neverova, Damien Muselet, Alain Trémeau Laboratoire Hubert Curien UMR CNRS 5516, University Jean Monnet, Rue du Professeur Benoît

More information

Scanners and How to Use Them

Scanners and How to Use Them Written by Jonathan Sachs Copyright 1996-1999 Digital Light & Color Introduction A scanner is a device that converts images to a digital file you can use with your computer. There are many different types

More information

Circle Object Recognition Based on Monocular Vision for Home Security Robot

Circle Object Recognition Based on Monocular Vision for Home Security Robot Journal of Applied Science and Engineering, Vol. 16, No. 3, pp. 261 268 (2013) DOI: 10.6180/jase.2013.16.3.05 Circle Object Recognition Based on Monocular Vision for Home Security Robot Shih-An Li, Ching-Chang

More information

Digital Imaging and Image Editing

Digital Imaging and Image Editing Digital Imaging and Image Editing A digital image is a representation of a twodimensional image as a finite set of digital values, called picture elements or pixels. The digital image contains a fixed

More information

Cork Education and Training Board. Programme Module for. 3 Dimensional Computer Graphics. Leading to. Level 5 FETAC

Cork Education and Training Board. Programme Module for. 3 Dimensional Computer Graphics. Leading to. Level 5 FETAC Cork Education and Training Board Programme Module for 3 Dimensional Computer Graphics Leading to Level 5 FETAC 3 Dimensional Computer Graphics 5N5029 3 Dimensional Computer Graphics 5N5029 1 Version 3

More information

Template-based Eye and Mouth Detection for 3D Video Conferencing

Template-based Eye and Mouth Detection for 3D Video Conferencing Template-based Eye and Mouth Detection for 3D Video Conferencing Jürgen Rurainsky and Peter Eisert Fraunhofer Institute for Telecommunications - Heinrich-Hertz-Institute, Image Processing Department, Einsteinufer

More information

ENGN 2502 3D Photography / Winter 2012 / SYLLABUS http://mesh.brown.edu/3dp/

ENGN 2502 3D Photography / Winter 2012 / SYLLABUS http://mesh.brown.edu/3dp/ ENGN 2502 3D Photography / Winter 2012 / SYLLABUS http://mesh.brown.edu/3dp/ Description of the proposed course Over the last decade digital photography has entered the mainstream with inexpensive, miniaturized

More information

MA 323 Geometric Modelling Course Notes: Day 02 Model Construction Problem

MA 323 Geometric Modelling Course Notes: Day 02 Model Construction Problem MA 323 Geometric Modelling Course Notes: Day 02 Model Construction Problem David L. Finn November 30th, 2004 In the next few days, we will introduce some of the basic problems in geometric modelling, and

More information

Rendering Area Sources D.A. Forsyth

Rendering Area Sources D.A. Forsyth Rendering Area Sources D.A. Forsyth Point source model is unphysical Because imagine source surrounded by big sphere, radius R small sphere, radius r each point on each sphere gets exactly the same brightness!

More information

Solving Geometric Problems with the Rotating Calipers *

Solving Geometric Problems with the Rotating Calipers * Solving Geometric Problems with the Rotating Calipers * Godfried Toussaint School of Computer Science McGill University Montreal, Quebec, Canada ABSTRACT Shamos [1] recently showed that the diameter of

More information

Segmentation of building models from dense 3D point-clouds

Segmentation of building models from dense 3D point-clouds Segmentation of building models from dense 3D point-clouds Joachim Bauer, Konrad Karner, Konrad Schindler, Andreas Klaus, Christopher Zach VRVis Research Center for Virtual Reality and Visualization, Institute

More information

A Game of Numbers (Understanding Directivity Specifications)

A Game of Numbers (Understanding Directivity Specifications) A Game of Numbers (Understanding Directivity Specifications) José (Joe) Brusi, Brusi Acoustical Consulting Loudspeaker directivity is expressed in many different ways on specification sheets and marketing

More information

White Paper. "See" what is important

White Paper. See what is important Bear this in mind when selecting a book scanner "See" what is important Books, magazines and historical documents come in hugely different colors, shapes and sizes; for libraries, archives and museums,

More information

PHOTOGRAMMETRIC TECHNIQUES FOR MEASUREMENTS IN WOODWORKING INDUSTRY

PHOTOGRAMMETRIC TECHNIQUES FOR MEASUREMENTS IN WOODWORKING INDUSTRY PHOTOGRAMMETRIC TECHNIQUES FOR MEASUREMENTS IN WOODWORKING INDUSTRY V. Knyaz a, *, Yu. Visilter, S. Zheltov a State Research Institute for Aviation System (GosNIIAS), 7, Victorenko str., Moscow, Russia

More information

Face detection is a process of localizing and extracting the face region from the

Face detection is a process of localizing and extracting the face region from the Chapter 4 FACE NORMALIZATION 4.1 INTRODUCTION Face detection is a process of localizing and extracting the face region from the background. The detected face varies in rotation, brightness, size, etc.

More information

GUIDE TO POST-PROCESSING OF THE POINT CLOUD

GUIDE TO POST-PROCESSING OF THE POINT CLOUD GUIDE TO POST-PROCESSING OF THE POINT CLOUD Contents Contents 3 Reconstructing the point cloud with MeshLab 16 Reconstructing the point cloud with CloudCompare 2 Reconstructing the point cloud with MeshLab

More information

3 Image-Based Photo Hulls. 2 Image-Based Visual Hulls. 3.1 Approach. 3.2 Photo-Consistency. Figure 1. View-dependent geometry.

3 Image-Based Photo Hulls. 2 Image-Based Visual Hulls. 3.1 Approach. 3.2 Photo-Consistency. Figure 1. View-dependent geometry. Image-Based Photo Hulls Greg Slabaugh, Ron Schafer Georgia Institute of Technology Center for Signal and Image Processing Atlanta, GA 30332 {slabaugh, rws}@ece.gatech.edu Mat Hans Hewlett-Packard Laboratories

More information

Multivariate data visualization using shadow

Multivariate data visualization using shadow Proceedings of the IIEEJ Ima and Visual Computing Wor Kuching, Malaysia, Novembe Multivariate data visualization using shadow Zhongxiang ZHENG Suguru SAITO Tokyo Institute of Technology ABSTRACT When visualizing

More information

Automotive Applications of 3D Laser Scanning Introduction

Automotive Applications of 3D Laser Scanning Introduction Automotive Applications of 3D Laser Scanning Kyle Johnston, Ph.D., Metron Systems, Inc. 34935 SE Douglas Street, Suite 110, Snoqualmie, WA 98065 425-396-5577, www.metronsys.com 2002 Metron Systems, Inc

More information

Visualization and Feature Extraction, FLOW Spring School 2016 Prof. Dr. Tino Weinkauf. Flow Visualization. Image-Based Methods (integration-based)

Visualization and Feature Extraction, FLOW Spring School 2016 Prof. Dr. Tino Weinkauf. Flow Visualization. Image-Based Methods (integration-based) Visualization and Feature Extraction, FLOW Spring School 2016 Prof. Dr. Tino Weinkauf Flow Visualization Image-Based Methods (integration-based) Spot Noise (Jarke van Wijk, Siggraph 1991) Flow Visualization:

More information

An Experimental Study of the Performance of Histogram Equalization for Image Enhancement

An Experimental Study of the Performance of Histogram Equalization for Image Enhancement International Journal of Computer Sciences and Engineering Open Access Research Paper Volume-4, Special Issue-2, April 216 E-ISSN: 2347-2693 An Experimental Study of the Performance of Histogram Equalization

More information

Off-line Model Simplification for Interactive Rigid Body Dynamics Simulations Satyandra K. Gupta University of Maryland, College Park

Off-line Model Simplification for Interactive Rigid Body Dynamics Simulations Satyandra K. Gupta University of Maryland, College Park NSF GRANT # 0727380 NSF PROGRAM NAME: Engineering Design Off-line Model Simplification for Interactive Rigid Body Dynamics Simulations Satyandra K. Gupta University of Maryland, College Park Atul Thakur

More information

So, you want to make a photo-realistic rendering of the Earth from orbit, eh? And you want it to look just like what astronauts see from the shuttle

So, you want to make a photo-realistic rendering of the Earth from orbit, eh? And you want it to look just like what astronauts see from the shuttle So, you want to make a photo-realistic rendering of the Earth from orbit, eh? And you want it to look just like what astronauts see from the shuttle or ISS (International Space Station). No problem. Just

More information

ECE 533 Project Report Ashish Dhawan Aditi R. Ganesan

ECE 533 Project Report Ashish Dhawan Aditi R. Ganesan Handwritten Signature Verification ECE 533 Project Report by Ashish Dhawan Aditi R. Ganesan Contents 1. Abstract 3. 2. Introduction 4. 3. Approach 6. 4. Pre-processing 8. 5. Feature Extraction 9. 6. Verification

More information

The RADIANCE Lighting Simulation and Rendering System

The RADIANCE Lighting Simulation and Rendering System The RADIANCE Lighting Simulation and Rendering System Written by Gregory J. Ward Lighting Group Building Technologies Program Lawrence Berkeley Laboratory COMPUTER GRAPHICS Proceedings, Annual Conference

More information

Constrained Tetrahedral Mesh Generation of Human Organs on Segmented Volume *

Constrained Tetrahedral Mesh Generation of Human Organs on Segmented Volume * Constrained Tetrahedral Mesh Generation of Human Organs on Segmented Volume * Xiaosong Yang 1, Pheng Ann Heng 2, Zesheng Tang 3 1 Department of Computer Science and Technology, Tsinghua University, Beijing

More information

The Rocket Steam Locomotive - Animation

The Rocket Steam Locomotive - Animation Course: 3D Design Title: Rocket Steam Locomotive - Animation Blender: Version 2.6X Level: Beginning Author; Neal Hirsig (nhirsig@tufts.edu) (May 2012) The Rocket Steam Locomotive - Animation In this tutorial

More information

Building an Advanced Invariant Real-Time Human Tracking System

Building an Advanced Invariant Real-Time Human Tracking System UDC 004.41 Building an Advanced Invariant Real-Time Human Tracking System Fayez Idris 1, Mazen Abu_Zaher 2, Rashad J. Rasras 3, and Ibrahiem M. M. El Emary 4 1 School of Informatics and Computing, German-Jordanian

More information

Digital Image Requirements for New Online US Visa Application

Digital Image Requirements for New Online US Visa Application Digital Image Requirements for New Online US Visa Application As part of the electronic submission of your DS-160 application, you will be asked to provide an electronic copy of your photo. The photo must

More information

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B KITCHENS The equation 1 Lines in two-dimensional space (1) 2x y = 3 describes a line in two-dimensional space The coefficients of x and y in the equation

More information

Thea Omni Light. Thea Spot Light. Light setup & Optimization

Thea Omni Light. Thea Spot Light. Light setup & Optimization Light setup In this tutorial we will learn how to setup lights inside Thea Studio and how to create mesh lights and optimize them for faster rendering with less noise. Let us have a look at the different

More information

Everyday Mathematics. Grade 4 Grade-Level Goals. 3rd Edition. Content Strand: Number and Numeration. Program Goal Content Thread Grade-Level Goals

Everyday Mathematics. Grade 4 Grade-Level Goals. 3rd Edition. Content Strand: Number and Numeration. Program Goal Content Thread Grade-Level Goals Content Strand: Number and Numeration Understand the Meanings, Uses, and Representations of Numbers Understand Equivalent Names for Numbers Understand Common Numerical Relations Place value and notation

More information

Klaus Goelker. GIMP 2.8 for Photographers. Image Editing with Open Source Software. rocky

Klaus Goelker. GIMP 2.8 for Photographers. Image Editing with Open Source Software. rocky Klaus Goelker GIMP 2.8 for Photographers Image Editing with Open Source Software rocky Table of Contents Chapter 1 Basics 3 1.1 Preface....4 1.2 Introduction 5 1.2.1 Using GIMP 2.8 About This Book 5 1.2.2

More information

Pre-Algebra 2008. Academic Content Standards Grade Eight Ohio. Number, Number Sense and Operations Standard. Number and Number Systems

Pre-Algebra 2008. Academic Content Standards Grade Eight Ohio. Number, Number Sense and Operations Standard. Number and Number Systems Academic Content Standards Grade Eight Ohio Pre-Algebra 2008 STANDARDS Number, Number Sense and Operations Standard Number and Number Systems 1. Use scientific notation to express large numbers and small

More information

VECTORAL IMAGING THE NEW DIRECTION IN AUTOMATED OPTICAL INSPECTION

VECTORAL IMAGING THE NEW DIRECTION IN AUTOMATED OPTICAL INSPECTION VECTORAL IMAGING THE NEW DIRECTION IN AUTOMATED OPTICAL INSPECTION Mark J. Norris Vision Inspection Technology, LLC Haverhill, MA mnorris@vitechnology.com ABSTRACT Traditional methods of identifying and

More information

Character Animation Tutorial

Character Animation Tutorial Character Animation Tutorial 1.Overview 2.Modelling 3.Texturing 5.Skeleton and IKs 4.Keys 5.Export the character and its animations 6.Load the character in Virtools 7.Material & texture tuning 8.Merge

More information

TEXTURE AND BUMP MAPPING

TEXTURE AND BUMP MAPPING Department of Applied Mathematics and Computational Sciences University of Cantabria UC-CAGD Group COMPUTER-AIDED GEOMETRIC DESIGN AND COMPUTER GRAPHICS: TEXTURE AND BUMP MAPPING Andrés Iglesias e-mail:

More information

Curriculum Map by Block Geometry Mapping for Math Block Testing 2007-2008. August 20 to August 24 Review concepts from previous grades.

Curriculum Map by Block Geometry Mapping for Math Block Testing 2007-2008. August 20 to August 24 Review concepts from previous grades. Curriculum Map by Geometry Mapping for Math Testing 2007-2008 Pre- s 1 August 20 to August 24 Review concepts from previous grades. August 27 to September 28 (Assessment to be completed by September 28)

More information

Analecta Vol. 8, No. 2 ISSN 2064-7964

Analecta Vol. 8, No. 2 ISSN 2064-7964 EXPERIMENTAL APPLICATIONS OF ARTIFICIAL NEURAL NETWORKS IN ENGINEERING PROCESSING SYSTEM S. Dadvandipour Institute of Information Engineering, University of Miskolc, Egyetemváros, 3515, Miskolc, Hungary,

More information

Image Processing and Computer Graphics. Rendering Pipeline. Matthias Teschner. Computer Science Department University of Freiburg

Image Processing and Computer Graphics. Rendering Pipeline. Matthias Teschner. Computer Science Department University of Freiburg Image Processing and Computer Graphics Rendering Pipeline Matthias Teschner Computer Science Department University of Freiburg Outline introduction rendering pipeline vertex processing primitive processing

More information

CUBE-MAP DATA STRUCTURE FOR INTERACTIVE GLOBAL ILLUMINATION COMPUTATION IN DYNAMIC DIFFUSE ENVIRONMENTS

CUBE-MAP DATA STRUCTURE FOR INTERACTIVE GLOBAL ILLUMINATION COMPUTATION IN DYNAMIC DIFFUSE ENVIRONMENTS ICCVG 2002 Zakopane, 25-29 Sept. 2002 Rafal Mantiuk (1,2), Sumanta Pattanaik (1), Karol Myszkowski (3) (1) University of Central Florida, USA, (2) Technical University of Szczecin, Poland, (3) Max- Planck-Institut

More information

3D Analysis and Surface Modeling

3D Analysis and Surface Modeling 3D Analysis and Surface Modeling Dr. Fang Qiu Surface Analysis and 3D Visualization Surface Model Data Set Grid vs. TIN 2D vs. 3D shape Creating Surface Model Creating TIN Creating 3D features Surface

More information

Vision based Vehicle Tracking using a high angle camera

Vision based Vehicle Tracking using a high angle camera Vision based Vehicle Tracking using a high angle camera Raúl Ignacio Ramos García Dule Shu gramos@clemson.edu dshu@clemson.edu Abstract A vehicle tracking and grouping algorithm is presented in this work

More information

A Prototype For Eye-Gaze Corrected

A Prototype For Eye-Gaze Corrected A Prototype For Eye-Gaze Corrected Video Chat on Graphics Hardware Maarten Dumont, Steven Maesen, Sammy Rogmans and Philippe Bekaert Introduction Traditional webcam video chat: No eye contact. No extensive

More information

Automatic Labeling of Lane Markings for Autonomous Vehicles

Automatic Labeling of Lane Markings for Autonomous Vehicles Automatic Labeling of Lane Markings for Autonomous Vehicles Jeffrey Kiske Stanford University 450 Serra Mall, Stanford, CA 94305 jkiske@stanford.edu 1. Introduction As autonomous vehicles become more popular,

More information

Everyday Mathematics. Grade 4 Grade-Level Goals CCSS EDITION. Content Strand: Number and Numeration. Program Goal Content Thread Grade-Level Goal

Everyday Mathematics. Grade 4 Grade-Level Goals CCSS EDITION. Content Strand: Number and Numeration. Program Goal Content Thread Grade-Level Goal Content Strand: Number and Numeration Understand the Meanings, Uses, and Representations of Numbers Understand Equivalent Names for Numbers Understand Common Numerical Relations Place value and notation

More information

BUILDING TELEPRESENCE SYSTEMS: Translating Science Fiction Ideas into Reality

BUILDING TELEPRESENCE SYSTEMS: Translating Science Fiction Ideas into Reality BUILDING TELEPRESENCE SYSTEMS: Translating Science Fiction Ideas into Reality Henry Fuchs University of North Carolina at Chapel Hill (USA) and NSF Science and Technology Center for Computer Graphics and

More information

Computer Graphics CS 543 Lecture 12 (Part 1) Curves. Prof Emmanuel Agu. Computer Science Dept. Worcester Polytechnic Institute (WPI)

Computer Graphics CS 543 Lecture 12 (Part 1) Curves. Prof Emmanuel Agu. Computer Science Dept. Worcester Polytechnic Institute (WPI) Computer Graphics CS 54 Lecture 1 (Part 1) Curves Prof Emmanuel Agu Computer Science Dept. Worcester Polytechnic Institute (WPI) So Far Dealt with straight lines and flat surfaces Real world objects include

More information

LIST OF CONTENTS CHAPTER CONTENT PAGE DECLARATION DEDICATION ACKNOWLEDGEMENTS ABSTRACT ABSTRAK

LIST OF CONTENTS CHAPTER CONTENT PAGE DECLARATION DEDICATION ACKNOWLEDGEMENTS ABSTRACT ABSTRAK vii LIST OF CONTENTS CHAPTER CONTENT PAGE DECLARATION DEDICATION ACKNOWLEDGEMENTS ABSTRACT ABSTRAK LIST OF CONTENTS LIST OF TABLES LIST OF FIGURES LIST OF NOTATIONS LIST OF ABBREVIATIONS LIST OF APPENDICES

More information

Part-Based Recognition

Part-Based Recognition Part-Based Recognition Benedict Brown CS597D, Fall 2003 Princeton University CS 597D, Part-Based Recognition p. 1/32 Introduction Many objects are made up of parts It s presumably easier to identify simple

More information

CLOUD DIGITISER 2014!

CLOUD DIGITISER 2014! CLOUD DIGITISER 2014 Interactive measurements of point cloud sequences July 2014 Cloud Digitiser Manual 1 CLOUD DIGITISER Interactive measurement of point clouds Bill Sellers July 2014 Introduction Photogrammetric

More information

COMP175: Computer Graphics. Lecture 1 Introduction and Display Technologies

COMP175: Computer Graphics. Lecture 1 Introduction and Display Technologies COMP175: Computer Graphics Lecture 1 Introduction and Display Technologies Course mechanics Number: COMP 175-01, Fall 2009 Meetings: TR 1:30-2:45pm Instructor: Sara Su (sarasu@cs.tufts.edu) TA: Matt Menke

More information

Lezione 4: Grafica 3D*(II)

Lezione 4: Grafica 3D*(II) Lezione 4: Grafica 3D*(II) Informatica Multimediale Docente: Umberto Castellani *I lucidi sono tratti da una lezione di Maura Melotti (m.melotti@cineca.it) RENDERING Rendering What is rendering? Rendering

More information

Last lecture... Computer Graphics:

Last lecture... Computer Graphics: Last lecture... Computer Graphics: Visualisation can be greatly enhanced through the Introduction to the Visualisation use of 3D computer graphics Toolkit Visualisation Lecture 2 toby.breckon@ed.ac.uk

More information

Instructions for Creating a Poster for Arts and Humanities Research Day Using PowerPoint
