Multiple-view 3-D Reconstruction Using a Mirror


Bo Hu, Christopher Brown, Randal Nelson
The University of Rochester, Computer Science Department, Rochester, New York
Technical Report 863, May 2005

Abstract

We propose a 3-D object reconstruction method using a stationary camera and a planar mirror. No calibration is required. The mirror provides the extra views needed for a multiple-view reconstruction. We examine the imaging geometry of the camera-mirror setup and prove a theorem that gives us the point correspondences needed to compute the orientation of the mirror. The correspondences are derived from the convex hull of the silhouettes of the images of the object and its mirror reflection. The distance between the mirror and the camera can then be obtained from a single object point or from a pair of points on the mirror surface. After the pose of the mirror is determined, we have multiple calibrated views of the object. We show two reconstruction methods that utilize the special imaging geometry. The system setup is simple, and the algorithm is fast and easy to implement.

Keywords: Mirror, 3-D reconstruction, silhouette, visual hull, pose, camera calibration, catadioptric sensor

This work is supported by the NSF Research Infrastructure grant No. EY EIA and the NIH grant No.

1 Introduction

Computer vision technologies have advanced significantly since the field's inception, but few of them, even those well understood in the research community, have entered the consumer market. The reason used to be that average consumers had only limited computational power and limited access to imaging devices. Now desktop PCs and digital cameras are as good as those in research labs, yet we still face obstacles to bringing the technologies into homes.

One such technology is 3-D object reconstruction. Given a set of images or a clip of video of an object, 3-D reconstruction methods generate a 3-D model of the object. In general, except for very simple objects, the images are taken from different viewpoints. Calibration is then required to establish the relationships between the viewpoints. Usually employing a calibration rig, multiple-view calibration is a non-trivial feat, and the process can hardly be automated. Furthermore, unless multiple cameras are connected rigidly, the calibration can easily be broken and needs to be re-done.

We here propose a 3-D reconstruction method that uses a stationary camera and a planar mirror. The camera is only minimally calibrated: one only needs to know the focal length of the lens. There are no multiple coordinate systems in our method, so no external calibration is necessary. Knowing the focal length is in fact the minimum requirement for any Euclidean reconstruction, and in practice one can find the focal length from the reading on the lens. The setup is simple. We place the object before a stationary camera and take a picture of the object. We then move a planar mirror into the scene and take pictures of the object and its mirror image (Fig. 1). We show a method that computes the orientations and the distances of the mirror automatically. What is new in the pose determination is that no feature correspondence is needed when computing the orientation.
Once we know where the mirror is placed, we have multiple calibrated views of the object and can apply any 3-D reconstruction method. We show two silhouette-based methods that take advantage of the special imaging geometry. The first is direct volume intersection. The second builds a depth map for each object point seen by the stationary camera. This representation is very similar to that of the image-based visual hull [16], but in our case the intersection of epipolar lines takes a very simple form. Silhouette methods generate the visual hull [10] of the object, which is usually a good approximation of the original object.

The system described in this paper can be considered a type of so-called catadioptric sensor, which combines mirrors and lenses. The majority of catadioptric sensors use curved, e.g., quadric, mirrors to increase the angle of view. Finding the pose of these sensors can be quite complex. For example, Paulino and Araujo [12] prove that four corners of a rectangle determine the pose of a central catadioptric system. As a corollary, a rectangle (in fact, any four coplanar points [5]) determines the pose of a planar mirror too. Decomposing the pose into orientation and distance, our method requires only one point on the object or two points on the mirror surface. Doubek and Svoboda [3] used a hyperbolic mirror to triangulate scene points. Their method recovers 3-D information at discrete points, while ours recovers the whole shape. Gluckman and Nayar [8] investigated how to construct a system of planar mirrors and a camera so that the stereo images captured by the camera are rectified. Like most catadioptric sensors, the relative position of the camera and the mirror in their system is fixed and pre-calibrated. In our method, the mirror can freely change position, which makes the imaging system much more flexible. Mitsumoto et al. [11] used a planar mirror to reconstruct simple polyhedral objects.
They introduced the VP (vanishing point) constraint between an object point and its mirror image to find point correspondences. It is the same constraint that we illustrate in Section 2. Zhang et al. [17] used a planar mirror to simulate mirror-symmetric objects, which allow simple 3-D reconstruction algorithms [7]. That work relies on hand-selected point correspondences. What is new in our method is that we establish the correspondences without identifying feature points.

The shape-from-silhouette problem has a history as long as the discipline of computer vision. Silhouette-based algorithms are robust and able to create complete surfaces.

Figure 1: Using a mirror to generate a new view.

One representative piece of work, by Szeliski [14], uses octrees to recover 3-D shape from a turn-table sequence. Similar algorithms are widely used; however, calibrating a turn-table sequence is not much easier than calibrating general multiple-camera setups [6]. Apple's QuickTime VR [1], which enables users to create and view an object from multiple viewpoints, is perhaps the singular example of a mature computer vision technique entering the consumer market. It is, however, purely image-based and no 3-D models are created.

The remainder of this paper is organized as follows. In Section 2, we examine the imaging geometry of the mirror-camera setup. We prove the theorem that gives us the orientation of the mirror in Section 3. After the orientation of the mirror is known, there are different methods to determine the distance of the mirror from the camera; we discuss two of them in Section 4. We show two 3-D reconstruction methods in Section 5. Lastly, we present some experimental results and point out opportunities for future work.

Notation. We use capital letters, e.g., A, for 3-D points and primed letters for their mirror images, e.g., A' is the mirror image of A. We use lower-case letters for 2-D image points (a is the image of A and a' is the image of A'). Vectors are in boldface.

2 The geometry of mirror imaging

The imaging geometry of a camera and a mirror is depicted in Fig. 2. The stationary camera establishes a coordinate system, with its optical axis being the z-axis. The camera is at the origin O and the imaging plane is at z = f, where f is the focal length. The mirror surface can be expressed by a plane equation in this coordinate system: n·x + d = 0, where n is the normal vector of the mirror plane. The normal vector always points toward the camera, which makes d positive; d is thus the distance from the camera to the plane. The point a is the image of an object point A. The point a' is the image of A', the mirror image of A.
It is apparent that AA' is parallel to n. In fact, the line connecting any object point and its mirror image is parallel to the normal of the mirror. If we have another pair B and B', with images b and b' respectively, we can see that aa' and bb' must intersect at a single point e, which is the vanishing point of the normal direction of the mirror.
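As a concrete sketch of this construction (the function names and the synthetic test geometry below are our own, not from the report): in normalized image coordinates, the line through a and a' is the cross product of their homogeneous coordinates, two such lines meet at the vanishing point e, and the mirror normal is parallel to (e_x, e_y, 1).

```python
import numpy as np

def mirror_normal_from_pairs(a, a_m, b, b_m):
    """Estimate the unit mirror normal from two point correspondences.

    a, a_m, b, b_m: 2-D normalized image coordinates of two object
    points and their mirror images.  The lines a-a_m and b-b_m meet at
    the vanishing point e of the mirror normal; the normal direction is
    then proportional to (e_x, e_y, 1)."""
    h = lambda p: np.array([p[0], p[1], 1.0])
    # Homogeneous line through two image points: cross product.
    line_a = np.cross(h(a), h(a_m))
    line_b = np.cross(h(b), h(b_m))
    # Intersection of the two lines: cross product again.
    e = np.cross(line_a, line_b)
    n = e / np.linalg.norm(e)
    # Orient the normal toward the camera (negative z component).
    if n[2] > 0:
        n = -n
    return n

def reflect(P, n, d):
    """Mirror image of a 3-D point P across the plane n.x + d = 0
    (used here only to synthesize test data)."""
    return P - 2.0 * (np.dot(n, P) + d) * n

def project(P):
    """Normalized perspective projection (focal length 1)."""
    return P[:2] / P[2]
```

Feeding the projections of two synthetic points and their reflections into `mirror_normal_from_pairs` recovers the plane normal without any feature matching inside the regions, which is the point of the supporting-line construction developed in Section 3.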

Figure 2: The imaging geometry of a camera and a mirror. A and B are two object points; A' and B' are their mirror images. The camera is at the origin O. The images of A and A' are a and a', respectively. The mirror image of the camera is a virtual camera at O'. The point e is the epipole of the virtual camera. It is also the vanishing point of the normal direction of the mirror plane.

It is interesting to point out that an object and its cast shadow share a similar geometry to the camera-mirror case. In fact, the lines aa' and bb' are what Shafer called illumination vectors [13]. Shafer observed that vectors from a shadow point to its corresponding occluder point intersect at a single point, which is the vanishing point of a directional light source, or the image of a point light source. The theorems we prove in the next section therefore apply to the cast-shadow case too.

We can also imagine a virtual camera, the mirror image of the real camera, looking at the object (its location is shown as O' in Fig. 2). The vanishing point of n turns out to be the epipole of the virtual camera. This thinking leads us to the more familiar multiple-camera setting. Notice that the handedness of the virtual camera is opposite to that of the real camera.

If the normal direction of the mirror is parallel to the imaging surface of the camera, the camera cannot see the reflected object. Evidently, the vanishing point is not defined in this case. In general, consider the view sphere centered at the object. Let the tails of the normal vectors lie on the sphere and the heads point into the inside of the sphere. The sphere is cut in half by a plane parallel to the imaging plane. The admissible mirror normal directions are on the hemisphere on the opposite side of that plane from the camera.
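The identification of the vanishing point with the epipole of the virtual camera can be checked directly: the virtual camera center is the reflection of O, and its image is the vanishing point of n (a small sketch with our own helper names):

```python
import numpy as np

def virtual_camera_center(n, d):
    """Mirror image of the real camera center O = 0 across the plane
    n.x + d = 0; this is the virtual camera center O'."""
    return -2.0 * d * n

def normal_vanishing_point(n):
    """Vanishing point of the mirror normal direction n in normalized
    image coordinates."""
    return np.array([n[0] / n[2], n[1] / n[2]])
```

Projecting O' = -2dn into the real camera gives (n_x/n_z, n_y/n_z) for any mirror distance d, so the epipole coincides with the vanishing point of n, as the text states.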
3 Finding the orientation of the mirror

In Section 2, we saw that given two pairs of corresponding points (aa' and bb') we can find the orientation of the mirror, whose normal's vanishing point is the intersection of the two lines. The problem is how to find the correspondences. The difficulties arise for mainly two reasons. One is that the two views (from the real camera and from the virtual camera) are in general far apart.

Figure 3: The concept of a mirror hull. Figure 4: A case in which the visibility assumption is not satisfied.

Finding correspondences from wide-baseline images is a known difficult task. The other reason is that for smooth objects with little texture, e.g., the ceramic tea cup in Fig. 1, it is nearly impossible to find the correspondences even by hand, let alone automatically by computers. Here we prove that under a very simple assumption, we can find a pair of correspondences from just the silhouettes of the images of the object and its mirror reflection.

Let's call the convex hull of the object and its mirror image the 3-D mirror hull. The 3-D mirror hull consists of the convex hulls of the object and its mirror image and a cylindrical surface of mirror lines, each connecting an object point to its corresponding mirror-image point (Fig. 3). Our assumption is

Assumption (Visibility). The whole 3-D mirror hull is always visible to the stationary camera.

The assumption is a rephrasing of the requirement that both the object and its image must be seen by the camera. This is easily satisfied and in fact a prerequisite for a full reconstruction. A case that violates the assumption is shown in Fig. 4, where part of the reflected object is not visible from the camera. If we imagine that the mirror-reflected object emits light, some light rays are blocked by the mirror in this case.

Returning to the 2-D image (e.g., Fig. 5), we call the region composed of the images of the object points the object region and that of its mirrored points the mirror region.

Figure 5: Finding the supporting lines from the object region and the mirror region. The regions are segmented from the rightmost image in Fig. 7(a).

The convex hull of both regions is the projection of the 3-D mirror hull. The visibility assumption ensures that we can see the entirety of the convex hull. If the two regions do not overlap, the convex hull includes two straight lines that connect the two regions (Fig. 5). We call these two straight lines supporting lines. A more rigorous definition of supporting lines is in [9]. They can be understood intuitively: given the object region and the mirror region, let a straight line approach the regions from infinity until it touches both regions; this line is a supporting line. The following theorem tells us that a supporting line provides a correspondence between an object point and its mirrored point.

Theorem 1. A supporting line provides a correspondence between an object point and its mirror image.

Proof. The visibility assumption guarantees that the convex hull of the object region and the mirror region is the projection of the 3-D mirror hull. So the supporting line is the projection of a 3-D mirror line, which means the supporting line provides the correspondence.

The theorem appears to be trivial, but it imposes a surprising constraint on the object region and its mirror region. In Fig. 5, the handle of the tea cup is not seen by the camera but is visible in the mirror image (or by the virtual camera). By the theorem, the image of the handle must be bounded by the supporting line aa'. So the handle cannot be arbitrarily large even if we do not see it in the camera view. Algorithm 1 is used to find the supporting lines. Step 1 of the algorithm speeds up the second step and is not necessary for the correctness of the algorithm.
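The supporting-line search of Algorithm 1 can be sketched as follows. This is our own illustrative implementation: Andrew's monotone chain stands in for the Graham scan named in the algorithm (both return the convex hull), and the function names are hypothetical.

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull.  points: iterable of
    (x, y) tuples; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    cross = lambda o, a, b: ((a[0] - o[0]) * (b[1] - o[1])
                             - (a[1] - o[1]) * (b[0] - o[0]))
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def supporting_lines(object_pts, mirror_pts):
    """Return the hull edges that join an object-region point to a
    mirror-region point (at most two, by Theorem 2)."""
    label = {p: 'object' for p in object_pts}
    label.update({p: 'mirror' for p in mirror_pts})
    hull = convex_hull(list(label))
    lines = []
    for i, p in enumerate(hull):
        q = hull[(i + 1) % len(hull)]
        if label[p] != label[q]:
            lines.append((p, q))
    return lines
```

For two non-overlapping regions, exactly two hull edges change label, matching Theorem 2; their intersection is the vanishing point used in Section 3.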
The intersection of the two supporting lines gives us the vanishing point of the parallel mirror lines, or equivalently the vanishing point of the normal vector of the mirror plane. The natural question to ask is how many supporting lines can be found from one pair of object and mirror regions. The following theorem tells us that the best we can do is two.

Theorem 2. Each object-mirror pair provides at most two supporting lines.

Proof. By contradiction. Assume there are three or more supporting lines. They are edges of the convex hull, a convex polygon; being projections of parallel mirror lines, they must all intersect at a single point (the vanishing point). However, three or more edges of a convex polygon cannot intersect at a single point.

Input: Labeled object region and mirror region, with N the total number of points in the two regions.
Output: Supporting lines of the object region and the mirror region.
1. Find boundary points by inspecting the 4-neighbors of each point. If all four neighbors are region points, the point is an interior point; remove it. Otherwise, it is a boundary point. This is an O(N) operation. The resulting number of boundary points is O(√N).
2. Use Graham's algorithm [2] to find the convex hull. Graham's algorithm works by maintaining a stack of vertices of the convex hull. The time complexity is O(√N log N).
3. For every pair of adjacent points in the stack, if the labels (object or mirror point) differ, the line defined by the two points is a supporting line. This is an O(h) operation, where h is the number of vertices of the convex hull.

Algorithm 1: Finding supporting lines.

4 Determining the distance of the mirror

We have shown how to compute the orientation of the mirror, which is given by the vanishing point of its normal vector. If we adopt the virtual-camera view of the mirror, we know the epipole of the virtual camera. In general two-camera stereo, knowing the epipole alone is not enough to fully determine the relationship between the two cameras. But here the virtual camera is the mirror image of the real camera. By reflecting the axes of the real camera across the mirror, we obtain the pose of the virtual camera with respect to the real camera, except for the length of the baseline, the line connecting the two camera centers. For two-camera stereo, we can set the baseline to unit length and achieve a Euclidean reconstruction up to a scale factor. For multiple-view stereo, however, we have to establish a common scale factor among all the views. In other words, we need to find the distance between the mirror and the camera (up to a common scale factor). There are two ways to determine this distance.
The first is to locate an object point in the camera view and all mirror views. In Fig. 2, let the coordinates of the point A be (x_0, y_0, z_0)^T. The coordinates of its mirror image A' are then

x' = x_0 - 2d n_x,  y' = y_0 - 2d n_y,  z' = z_0 - 2d n_z,

where d is the distance from the mirror to the camera and n = (n_x, n_y, n_z)^T is the normal vector of the mirror plane. The image of A is a = (u, v)^T:

u = x_0 / z_0,  v = y_0 / z_0,

and the image of A' is a' = (u', v')^T:

u' = (x_0 - 2d n_x) / (z_0 - 2d n_z),  v' = (y_0 - 2d n_y) / (z_0 - 2d n_z).

There is no f in the above equations because we use normalized image coordinates [4]. Solving the equations for d, we have

d = Δu z_0 / (2(u' n_z - n_x)),   (1)

Figure 6: Computing the depth map from the mirror view. The left pane is a 2-D image of the object and its mirror reflection. The epipole e is the intersection of the two supporting lines. Given a point a in the object region, the epipolar line ea intersects the mirror region at c_1 and c_2. The right pane shows the epipolar plane defined by ea. The 3-D point A whose image is a lies on the half-line Oa, between C_1 and C_2, which are computed from c_1 and c_2.

where Δu = u' - u. The only unknown quantity is z_0. The equation says that if we can locate a point A in all the views, we can determine the distances of the mirror by letting z_0 be the unit length.

The other way is to mark the mirror. If we place two marker points F_1 and F_2 on the mirror surface, we have F_1 = t_1 f_1 and F_2 = t_2 f_2, where t_1, t_2 are the distances of the two markers from the camera center and f_1, f_2 are the unit vectors defined by the image points of the two markers. Because the two markers are on the mirror, we have

t_1 n·f_1 + d = 0,  t_2 n·f_2 + d = 0,

or

t_1 = -d / (n·f_1),  t_2 = -d / (n·f_2).

Letting the distance between the two markers be L = |F_1 - F_2|, we have

L = |d| · |f_2/(n·f_2) - f_1/(n·f_1)| = d · |f_2/(n·f_2) - f_1/(n·f_1)|.   (2)

We can drop the absolute sign because d ≥ 0. So if we track the two markers in all the views, we can obtain the distances of the mirror in terms of L. Note that we do not need to know the exact positions of the markers, nor the actual value of L, say in inches.
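Both distance formulas are one-liners; a minimal sketch with our own function names (Eq. (1) is coded exactly as the reflection parameterization printed above, and Eq. (2) as derived from the marker constraints):

```python
import numpy as np

def distance_from_point(u, u_m, n, z0=1.0):
    """Eq. (1): mirror distance from one object point seen in the real
    view (u-coordinate u) and in a mirror view (u_m), with the point's
    depth z0 taken as the unit length.  n is the unit mirror normal."""
    du = u_m - u
    return du * z0 / (2.0 * (u_m * n[2] - n[0]))

def distance_from_markers(f1, f2, n, L=1.0):
    """Eq. (2): mirror distance from the unit viewing vectors f1, f2 of
    two markers on the mirror surface, with the (unknown) marker
    separation L taken as the unit length."""
    diff = f2 / np.dot(n, f2) - f1 / np.dot(n, f1)
    return L / np.linalg.norm(diff)
```

Either routine fixes the common scale factor across all views: the first uses one tracked object point, the second the two tracked markers.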

5 Multiple-view 3-D reconstruction

Having determined the mirror positions, or equivalently the poses of all the virtual cameras, we are left with a conventional multiple-view 3-D reconstruction problem. The countless 3-D reconstruction methods can be roughly divided into point-based and silhouette-based. The former compute the 3-D positions of individual points by triangulation. The latter are usually some variant of volume intersection: the silhouettes and their associated camera centers extend to form view cones, whose intersection gives the 3-D shape. In theory, we can plug in any reconstruction method. In the following, we show two methods that have been customized to use the fact that each virtual camera is symmetric to the real camera.

5.1 Volume intersection

Volume intersection is the most straightforward method if one has a robust CSG (constructive solid geometry) package. We can easily construct a cone whose apex is at the camera center and whose bottom face is defined by the silhouette. We then perform a set intersection operation on all the cones using the CSG package. The result is usually a polygonal mesh, which can be projected back to each view to be textured. Since in our case the virtual cameras are mirror images of the real camera, the cone can actually be constructed in the real camera's view and then reflected using the plane equation of the mirror. Given a point P = (x, y, z)^T and the mirror equation n·x + d = 0, its mirror image P' is

P' = P - 2(P·n + d)n.   (3)

So after constructing the cone in the real camera's view, we use Eq. 3 to transform the cone into the virtual camera's view and then perform the intersection.

5.2 Depth intersection

Implementing a robust CSG package can be very tricky and demanding, and the quality of the polygonal mesh degrades rapidly as new views are added. We therefore propose another method that finds the depth of each point of the object region. Let's look at the object region and one mirror region (Fig. 6(a)).
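The reflection of Eq. 3 is applied vertex by vertex to map a cone into a virtual camera's frame; a minimal sketch (helper name ours):

```python
import numpy as np

def reflect_across_mirror(P, n, d):
    """Eq. (3): mirror image of a 3-D point P across the plane
    n.x + d = 0, with n a unit normal."""
    P = np.asarray(P, dtype=float)
    return P - 2.0 * (np.dot(P, n) + d) * n
```

The map is an involution (reflecting twice returns the original point), fixes points on the mirror plane, and sends the camera center O = 0 to the virtual camera center -2dn.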
Given an object point a, its corresponding mirror point a' is on the epipolar line ea and inside the mirror region. It is generally difficult to identify which point is the real correspondence, yet we know that it lies between the intersections of the epipolar line and the mirror region. Each such intersection gives us a depth of a along Oa (Fig. 6(b)). The mirror plane is n·x + d = 0 and the distance OO' is 2d, O' being the position of the virtual camera. A is a 3-D point and a is its image in the real camera's view. The points c_1 and c_2 are the intersections of the epipolar line ea with the mirror region (Fig. 6(a)). The corresponding point a' of a lies between c_1 and c_2, so the 3-D point A that we are looking for lies between C_1 and C_2. In triangle OO'C_1, let β = ∠C_1OO' and α = ∠C_1O'O. We have

cos β = -n·â,  cos α = -n·ĉ_1,

where â is the unit vector from O through a and ĉ_1 that from O through c_1. The second equation holds because of symmetry (∠C_1O'O = ∠C_1'OO', where C_1' is the mirror image of C_1 and c_1 is its image). Now the depth z_1 = |OC_1| can be computed by the law of sines in triangle OO'C_1:

z_1 / sin α = 2d / sin(π - α - β).   (4)

The other depth, z_2 = |OC_2|, is computed similarly. Each pair of intersections gives us a depth range [z_1, z_2] for a. If more views are added, the depth ranges computed from individual views are intersected.
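Equation (4) can be sketched as follows (function names and the synthetic test geometry are ours; the sign convention assumes, as above, a unit normal pointing toward the camera, so -n·â and -n·ĉ are positive):

```python
import numpy as np

def _unit(p):
    """Unit vector from the camera center O through image point p."""
    v = np.array([p[0], p[1], 1.0])
    return v / np.linalg.norm(v)

def depth_from_epipolar(a, c, n, d):
    """Eq. (4): depth |OC| along the ray through image point a, where c
    is one intersection of the epipolar line ea with the mirror region.
    n, d define the mirror plane n.x + d = 0 (n a unit normal)."""
    beta = np.arccos(np.clip(-np.dot(n, _unit(a)), -1.0, 1.0))
    alpha = np.arccos(np.clip(-np.dot(n, _unit(c)), -1.0, 1.0))
    # Law of sines in triangle O O' C, with baseline |OO'| = 2d.
    return 2.0 * d * np.sin(alpha) / np.sin(np.pi - alpha - beta)
```

Calling this with c_1 and then c_2 yields the depth range [z_1, z_2] for the point a; intersecting such ranges across views gives the volumetric representation described next.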

The result is a volumetric representation in the real camera's view. That is, for each point in the object region, there is a depth range. This volumetric representation can easily be converted to a surface representation by the marching-cubes algorithm.

6 Summary of the algorithm

Putting all the pieces together, we have the following algorithms.

1. Place the mirror in a position such that the visibility assumption is met. Take an image.
2. Segment the object region and the mirror region in the image and find the silhouettes.
3. Compute the supporting lines using Algorithm 1. Compute the normal vector n of the mirror from the intersection of the supporting lines.
4. Extract the marker positions. From f_1, f_2, and n, compute the distance d between the mirror and the camera.
5. Extract the contours of the object region and the mirror region. Construct the view cones. Transform the cone of the mirror region using Eq. 3.
6. Do the above for all the views. Note that the view cone extended by the object region is constructed only once.
7. Intersect all the cones to obtain the 3-D shape.
8. Back-project the shape to each view and texture the shape.

Algorithm 2: Multiple-view 3-D reconstruction using mirrors (1).

If we use the depth-map method, steps 5 to 7 become:

5. For each point in the object region, compute the intersections of its epipolar line with the mirror region. Compute the depth range using Eq. 4.
6. Do the above for all the views and intersect the depth ranges.
7. Use the marching-cubes algorithm to extract the surface determined by the depth map.

Algorithm 3: Multiple-view 3-D reconstruction using mirrors (2).

7 Experiments

The setup consists of a cardboard box covered with a piece of black fabric. Because of the uniform background, simple background subtraction gives us very good segmentation results. Two markers are stuck on the mirror; Post-it notes make good material for the markers.
The markers can be put anywhere on the mirror, as long as they can be seen by the camera. We used circular markers and their centroids as the fiducial points. Although centroids are not projectively invariant, in practice we found no need to correct the error caused by projection. Note that a closed-form correction is possible because the orientation of the mirror is computed without using the markers.

Figure 7: Reconstruction using volume intersection. (a) The five images used in the reconstruction. (b) The view cones and their intersection. (c) The reconstructed cup.

We track the markers and the mirror image of the object when the mirror is moved. For the volume intersection approach, the regions are saved, and the view cones are formed and intersected offline. If the depth intersection algorithm is used, the reconstruction is done in near real time on a 2 GHz Xeon workstation. The result is a depth map, and the marching-cubes algorithm is performed to obtain a surface representation.

Fig. 7 shows the reconstruction of a tea cup using volume intersection. Five images, or equivalently six views, are used. Six view cones are generated (Fig. 7(b)), and their intersection is shown in Fig. 7(c). The reconstructed surface is not smooth because of the small number of views. Fig. 8 shows the result of using the depth-map method. Nine images, five of which are shown in Fig. 8(a), are used. The ten contours used to compute the depth map are drawn together in Fig. 8(b). The center contour is from the object directly and the surrounding contours are from its mirror images. Two views of the recovered toy dog are shown in Fig. 8(c).

8 Discussion and Future work

We have demonstrated a method of 3-D reconstruction using a stationary camera and a planar mirror. No other devices and no calibration are needed. It thus has the potential to reach average consumers. There are two issues with the proposed method. The first is that the orientation of the

Figure 8: Reconstruction using depth intersection. (a) Five of the nine images used in the reconstruction. (b) The contours drawn in one picture. (c) Two views of the reconstructed toy dog.

mirror is determined by the intersection of two lines, and the computation of the distance relies on the orientation. This requires the estimation of the two supporting lines to be very accurate, which in turn calls for accurate segmentation. Because our setup is a highly controlled environment, we can achieve near-perfect segmentation. How to make the orientation estimation more robust is our main future research topic. The second issue is that we need to accommodate the object and its mirror image in one image frame, which lowers the effective resolution of each view. But since 5-megapixel cameras are not uncommon these days, this is a lesser issue.

It is also worth pointing out that Theorem 1 applies not only to planar mirrors but also to curved mirrors. Of course, a curved mirror has more external parameters than a normal vector and a distance. For example, using the theorem, we can recover the pose of a quadric mirror with two cameras. Details can be found in [9].

We recognize that determining the pose of the mirror can be done in other ways. A minimum of three points is sufficient to compute the pose of the mirror, the so-called P3P (perspective three points [5]) problem, though the solution is not unique and is therefore not suitable for an automatic system like ours. If there are four points (P4P) on the mirror, the solution is unique. Besides requiring the solution of non-linear equations, both P3P and P4P methods need to know the exact positions of the markers. This is not a tremendous effort even for laymen, but it is nonetheless a hassle. Moreover, these methods are not inherently more accurate than the supporting-line method. If more markers are available, we can compute the pose more robustly, e.g., using the venerable Tsai's method [15]. But we then have to worry about occlusion between the markers and the mirror image.

The collection of the images of the object and its mirror reflections is itself an interesting 3-D representation, which we call the mirror-based hologram.
New views can be generated directly from the images without explicit 3-D reconstruction. How to store the images efficiently and how to generate new views fast are promising research directions.

References

[1] S. Chen and L. Williams. View interpolation for image synthesis. In SIGGRAPH 93.
[2] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms. The MIT Press, Cambridge, MA, 2nd edition.
[3] Petr Doubek and Tomas Svoboda. Reliable 3D reconstruction from a few catadioptric images. In Third Workshop on Omnidirectional Vision, pages 71-78.
[4] O. Faugeras. Three-Dimensional Computer Vision: A Geometric Viewpoint. Artificial Intelligence. MIT Press, Cambridge, MA.
[5] M. A. Fischler and R. C. Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6).
[6] A. W. Fitzgibbon, G. Cross, and A. Zisserman. Automatic 3D model construction for turn-table sequences. In R. Koch and L. Van Gool, editors, 3D Structure from Multiple Images of Large-Scale Environments, LNCS 1506, June.
[7] Alexandre R. J. Francois, Gerard G. Medioni, and Roman Waupotitsch. Reconstructing mirror symmetric scenes from a single view using 2-view stereo geometry. In ICPR 2002.
[8] J. Gluckman and S. K. Nayar. Rectified catadioptric stereo sensors. In CVPR 2000, volume 2, June.
[9] Bo Hu, Christopher Brown, and Randal Nelson. The geometry of point light source from shadows. Technical Report 810, Computer Science Department, University of Rochester, June.
[10] Aldo Laurentini. The visual hull concept for silhouette-based image understanding. IEEE Trans. on PAMI, 16(2), February.
[11] Hiroshi Mitsumoto and Shinichi Tamura. 3-D reconstruction using mirror images based on a plane symmetry recovering method. IEEE Trans. on PAMI, 14(9), September.
[12] A. Paulino and H. Araujo. Pose estimation for central catadioptric systems: an analytical approach. In ICPR 2002, volume 3.
[13] Steven A. Shafer. Shadows and Silhouettes in Computer Vision. Kluwer Academic Publishers.
[14] R. Szeliski. Rapid octree construction from image sequences. CVGIP: Image Understanding, 58(1):23-32, July.
[15] Roger Y. Tsai. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE Journal of Robotics and Automation, RA-3(4).
[16] Wojciech Matusik, Christopher Buehler, Ramesh Raskar, Leonard McMillan, and Steven J. Gortler. Image-based visual hulls. In SIGGRAPH. ACM SIGGRAPH.
[17] Z. Y. Zhang and H. T. Tsui. 3D reconstruction from a single view of an object and its image in a plane mirror. In ICPR 1998, volume 2.


More information

Geometry Notes PERIMETER AND AREA

Geometry Notes PERIMETER AND AREA Perimeter and Area Page 1 of 57 PERIMETER AND AREA Objectives: After completing this section, you should be able to do the following: Calculate the area of given geometric figures. Calculate the perimeter

More information

Relating Vanishing Points to Catadioptric Camera Calibration

Relating Vanishing Points to Catadioptric Camera Calibration Relating Vanishing Points to Catadioptric Camera Calibration Wenting Duan* a, Hui Zhang b, Nigel M. Allinson a a Laboratory of Vision Engineering, University of Lincoln, Brayford Pool, Lincoln, U.K. LN6

More information

PHOTOGRAMMETRIC TECHNIQUES FOR MEASUREMENTS IN WOODWORKING INDUSTRY

PHOTOGRAMMETRIC TECHNIQUES FOR MEASUREMENTS IN WOODWORKING INDUSTRY PHOTOGRAMMETRIC TECHNIQUES FOR MEASUREMENTS IN WOODWORKING INDUSTRY V. Knyaz a, *, Yu. Visilter, S. Zheltov a State Research Institute for Aviation System (GosNIIAS), 7, Victorenko str., Moscow, Russia

More information

INTRODUCTION TO RENDERING TECHNIQUES

INTRODUCTION TO RENDERING TECHNIQUES INTRODUCTION TO RENDERING TECHNIQUES 22 Mar. 212 Yanir Kleiman What is 3D Graphics? Why 3D? Draw one frame at a time Model only once X 24 frames per second Color / texture only once 15, frames for a feature

More information

Solution Guide III-C. 3D Vision. Building Vision for Business. MVTec Software GmbH

Solution Guide III-C. 3D Vision. Building Vision for Business. MVTec Software GmbH Solution Guide III-C 3D Vision MVTec Software GmbH Building Vision for Business Machine vision in 3D world coordinates, Version 10.0.4 All rights reserved. No part of this publication may be reproduced,

More information

MetropoGIS: A City Modeling System DI Dr. Konrad KARNER, DI Andreas KLAUS, DI Joachim BAUER, DI Christopher ZACH

MetropoGIS: A City Modeling System DI Dr. Konrad KARNER, DI Andreas KLAUS, DI Joachim BAUER, DI Christopher ZACH MetropoGIS: A City Modeling System DI Dr. Konrad KARNER, DI Andreas KLAUS, DI Joachim BAUER, DI Christopher ZACH VRVis Research Center for Virtual Reality and Visualization, Virtual Habitat, Inffeldgasse

More information

A PHOTOGRAMMETRIC APPRAOCH FOR AUTOMATIC TRAFFIC ASSESSMENT USING CONVENTIONAL CCTV CAMERA

A PHOTOGRAMMETRIC APPRAOCH FOR AUTOMATIC TRAFFIC ASSESSMENT USING CONVENTIONAL CCTV CAMERA A PHOTOGRAMMETRIC APPRAOCH FOR AUTOMATIC TRAFFIC ASSESSMENT USING CONVENTIONAL CCTV CAMERA N. Zarrinpanjeh a, F. Dadrassjavan b, H. Fattahi c * a Islamic Azad University of Qazvin - nzarrin@qiau.ac.ir

More information

Optical Tracking Using Projective Invariant Marker Pattern Properties

Optical Tracking Using Projective Invariant Marker Pattern Properties Optical Tracking Using Projective Invariant Marker Pattern Properties Robert van Liere, Jurriaan D. Mulder Department of Information Systems Center for Mathematics and Computer Science Amsterdam, the Netherlands

More information

A Short Introduction to Computer Graphics

A Short Introduction to Computer Graphics A Short Introduction to Computer Graphics Frédo Durand MIT Laboratory for Computer Science 1 Introduction Chapter I: Basics Although computer graphics is a vast field that encompasses almost any graphical

More information

3D Scanner using Line Laser. 1. Introduction. 2. Theory

3D Scanner using Line Laser. 1. Introduction. 2. Theory . Introduction 3D Scanner using Line Laser Di Lu Electrical, Computer, and Systems Engineering Rensselaer Polytechnic Institute The goal of 3D reconstruction is to recover the 3D properties of a geometric

More information

11.1. Objectives. Component Form of a Vector. Component Form of a Vector. Component Form of a Vector. Vectors and the Geometry of Space

11.1. Objectives. Component Form of a Vector. Component Form of a Vector. Component Form of a Vector. Vectors and the Geometry of Space 11 Vectors and the Geometry of Space 11.1 Vectors in the Plane Copyright Cengage Learning. All rights reserved. Copyright Cengage Learning. All rights reserved. 2 Objectives! Write the component form of

More information

Epipolar Geometry and Visual Servoing

Epipolar Geometry and Visual Servoing Epipolar Geometry and Visual Servoing Domenico Prattichizzo joint with with Gian Luca Mariottini and Jacopo Piazzi www.dii.unisi.it/prattichizzo Robotics & Systems Lab University of Siena, Italy Scuoladi

More information

Computer Graphics. Geometric Modeling. Page 1. Copyright Gotsman, Elber, Barequet, Karni, Sheffer Computer Science - Technion. An Example.

Computer Graphics. Geometric Modeling. Page 1. Copyright Gotsman, Elber, Barequet, Karni, Sheffer Computer Science - Technion. An Example. An Example 2 3 4 Outline Objective: Develop methods and algorithms to mathematically model shape of real world objects Categories: Wire-Frame Representation Object is represented as as a set of points

More information

Section 1.1. Introduction to R n

Section 1.1. Introduction to R n The Calculus of Functions of Several Variables Section. Introduction to R n Calculus is the study of functional relationships and how related quantities change with each other. In your first exposure to

More information

6.4 Normal Distribution

6.4 Normal Distribution Contents 6.4 Normal Distribution....................... 381 6.4.1 Characteristics of the Normal Distribution....... 381 6.4.2 The Standardized Normal Distribution......... 385 6.4.3 Meaning of Areas under

More information

Understanding astigmatism Spring 2003

Understanding astigmatism Spring 2003 MAS450/854 Understanding astigmatism Spring 2003 March 9th 2003 Introduction Spherical lens with no astigmatism Crossed cylindrical lenses with astigmatism Horizontal focus Vertical focus Plane of sharpest

More information

Curves and Surfaces. Goals. How do we draw surfaces? How do we specify a surface? How do we approximate a surface?

Curves and Surfaces. Goals. How do we draw surfaces? How do we specify a surface? How do we approximate a surface? Curves and Surfaces Parametric Representations Cubic Polynomial Forms Hermite Curves Bezier Curves and Surfaces [Angel 10.1-10.6] Goals How do we draw surfaces? Approximate with polygons Draw polygons

More information

Static Environment Recognition Using Omni-camera from a Moving Vehicle

Static Environment Recognition Using Omni-camera from a Moving Vehicle Static Environment Recognition Using Omni-camera from a Moving Vehicle Teruko Yata, Chuck Thorpe Frank Dellaert The Robotics Institute Carnegie Mellon University Pittsburgh, PA 15213 USA College of Computing

More information

Lecture 2: Homogeneous Coordinates, Lines and Conics

Lecture 2: Homogeneous Coordinates, Lines and Conics Lecture 2: Homogeneous Coordinates, Lines and Conics 1 Homogeneous Coordinates In Lecture 1 we derived the camera equations λx = P X, (1) where x = (x 1, x 2, 1), X = (X 1, X 2, X 3, 1) and P is a 3 4

More information

Shape Measurement of a Sewer Pipe. Using a Mobile Robot with Computer Vision

Shape Measurement of a Sewer Pipe. Using a Mobile Robot with Computer Vision International Journal of Advanced Robotic Systems ARTICLE Shape Measurement of a Sewer Pipe Using a Mobile Robot with Computer Vision Regular Paper Kikuhito Kawasue 1,* and Takayuki Komatsu 1 1 Department

More information

Wii Remote Calibration Using the Sensor Bar

Wii Remote Calibration Using the Sensor Bar Wii Remote Calibration Using the Sensor Bar Alparslan Yildiz Abdullah Akay Yusuf Sinan Akgul GIT Vision Lab - http://vision.gyte.edu.tr Gebze Institute of Technology Kocaeli, Turkey {yildiz, akay, akgul}@bilmuh.gyte.edu.tr

More information

Solving Geometric Problems with the Rotating Calipers *

Solving Geometric Problems with the Rotating Calipers * Solving Geometric Problems with the Rotating Calipers * Godfried Toussaint School of Computer Science McGill University Montreal, Quebec, Canada ABSTRACT Shamos [1] recently showed that the diameter of

More information

Edge tracking for motion segmentation and depth ordering

Edge tracking for motion segmentation and depth ordering Edge tracking for motion segmentation and depth ordering P. Smith, T. Drummond and R. Cipolla Department of Engineering University of Cambridge Cambridge CB2 1PZ,UK {pas1001 twd20 cipolla}@eng.cam.ac.uk

More information

REPRESENTATION, CODING AND INTERACTIVE RENDERING OF HIGH- RESOLUTION PANORAMIC IMAGES AND VIDEO USING MPEG-4

REPRESENTATION, CODING AND INTERACTIVE RENDERING OF HIGH- RESOLUTION PANORAMIC IMAGES AND VIDEO USING MPEG-4 REPRESENTATION, CODING AND INTERACTIVE RENDERING OF HIGH- RESOLUTION PANORAMIC IMAGES AND VIDEO USING MPEG-4 S. Heymann, A. Smolic, K. Mueller, Y. Guo, J. Rurainsky, P. Eisert, T. Wiegand Fraunhofer Institute

More information

Geometry: Unit 1 Vocabulary TERM DEFINITION GEOMETRIC FIGURE. Cannot be defined by using other figures.

Geometry: Unit 1 Vocabulary TERM DEFINITION GEOMETRIC FIGURE. Cannot be defined by using other figures. Geometry: Unit 1 Vocabulary 1.1 Undefined terms Cannot be defined by using other figures. Point A specific location. It has no dimension and is represented by a dot. Line Plane A connected straight path.

More information

Intuitive Navigation in an Enormous Virtual Environment

Intuitive Navigation in an Enormous Virtual Environment / International Conference on Artificial Reality and Tele-Existence 98 Intuitive Navigation in an Enormous Virtual Environment Yoshifumi Kitamura Shinji Fukatsu Toshihiro Masaki Fumio Kishino Graduate

More information

Angle - a figure formed by two rays or two line segments with a common endpoint called the vertex of the angle; angles are measured in degrees

Angle - a figure formed by two rays or two line segments with a common endpoint called the vertex of the angle; angles are measured in degrees Angle - a figure formed by two rays or two line segments with a common endpoint called the vertex of the angle; angles are measured in degrees Apex in a pyramid or cone, the vertex opposite the base; in

More information

Architectural Photogrammetry Lab., College of Architecture, University of Valladolid - jgarciaf@mtu.edu b

Architectural Photogrammetry Lab., College of Architecture, University of Valladolid - jgarciaf@mtu.edu b AN APPROACH TO 3D DIGITAL MODELING OF SURFACES WITH POOR TEXTURE BY RANGE IMAGING TECHNIQUES. SHAPE FROM STEREO VS. SHAPE FROM SILHOUETTE IN DIGITIZING JORGE OTEIZA S SCULPTURES J. García Fernández a,

More information

3D Model based Object Class Detection in An Arbitrary View

3D Model based Object Class Detection in An Arbitrary View 3D Model based Object Class Detection in An Arbitrary View Pingkun Yan, Saad M. Khan, Mubarak Shah School of Electrical Engineering and Computer Science University of Central Florida http://www.eecs.ucf.edu/

More information

B4 Computational Geometry

B4 Computational Geometry 3CG 2006 / B4 Computational Geometry David Murray david.murray@eng.o.ac.uk www.robots.o.ac.uk/ dwm/courses/3cg Michaelmas 2006 3CG 2006 2 / Overview Computational geometry is concerned with the derivation

More information

Selected practice exam solutions (part 5, item 2) (MAT 360)

Selected practice exam solutions (part 5, item 2) (MAT 360) Selected practice exam solutions (part 5, item ) (MAT 360) Harder 8,91,9,94(smaller should be replaced by greater )95,103,109,140,160,(178,179,180,181 this is really one problem),188,193,194,195 8. On

More information

Algebra 1 2008. Academic Content Standards Grade Eight and Grade Nine Ohio. Grade Eight. Number, Number Sense and Operations Standard

Algebra 1 2008. Academic Content Standards Grade Eight and Grade Nine Ohio. Grade Eight. Number, Number Sense and Operations Standard Academic Content Standards Grade Eight and Grade Nine Ohio Algebra 1 2008 Grade Eight STANDARDS Number, Number Sense and Operations Standard Number and Number Systems 1. Use scientific notation to express

More information

Computational Geometry. Lecture 1: Introduction and Convex Hulls

Computational Geometry. Lecture 1: Introduction and Convex Hulls Lecture 1: Introduction and convex hulls 1 Geometry: points, lines,... Plane (two-dimensional), R 2 Space (three-dimensional), R 3 Space (higher-dimensional), R d A point in the plane, 3-dimensional space,

More information

Angles that are between parallel lines, but on opposite sides of a transversal.

Angles that are between parallel lines, but on opposite sides of a transversal. GLOSSARY Appendix A Appendix A: Glossary Acute Angle An angle that measures less than 90. Acute Triangle Alternate Angles A triangle that has three acute angles. Angles that are between parallel lines,

More information

Intersection of a Line and a Convex. Hull of Points Cloud

Intersection of a Line and a Convex. Hull of Points Cloud Applied Mathematical Sciences, Vol. 7, 213, no. 13, 5139-5149 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/1.12988/ams.213.37372 Intersection of a Line and a Convex Hull of Points Cloud R. P. Koptelov

More information

MA 323 Geometric Modelling Course Notes: Day 02 Model Construction Problem

MA 323 Geometric Modelling Course Notes: Day 02 Model Construction Problem MA 323 Geometric Modelling Course Notes: Day 02 Model Construction Problem David L. Finn November 30th, 2004 In the next few days, we will introduce some of the basic problems in geometric modelling, and

More information

Solving Simultaneous Equations and Matrices

Solving Simultaneous Equations and Matrices Solving Simultaneous Equations and Matrices The following represents a systematic investigation for the steps used to solve two simultaneous linear equations in two unknowns. The motivation for considering

More information

Robust NURBS Surface Fitting from Unorganized 3D Point Clouds for Infrastructure As-Built Modeling

Robust NURBS Surface Fitting from Unorganized 3D Point Clouds for Infrastructure As-Built Modeling 81 Robust NURBS Surface Fitting from Unorganized 3D Point Clouds for Infrastructure As-Built Modeling Andrey Dimitrov 1 and Mani Golparvar-Fard 2 1 Graduate Student, Depts of Civil Eng and Engineering

More information

DESIGN & DEVELOPMENT OF AUTONOMOUS SYSTEM TO BUILD 3D MODEL FOR UNDERWATER OBJECTS USING STEREO VISION TECHNIQUE

DESIGN & DEVELOPMENT OF AUTONOMOUS SYSTEM TO BUILD 3D MODEL FOR UNDERWATER OBJECTS USING STEREO VISION TECHNIQUE DESIGN & DEVELOPMENT OF AUTONOMOUS SYSTEM TO BUILD 3D MODEL FOR UNDERWATER OBJECTS USING STEREO VISION TECHNIQUE N. Satish Kumar 1, B L Mukundappa 2, Ramakanth Kumar P 1 1 Dept. of Information Science,

More information

Part-Based Recognition

Part-Based Recognition Part-Based Recognition Benedict Brown CS597D, Fall 2003 Princeton University CS 597D, Part-Based Recognition p. 1/32 Introduction Many objects are made up of parts It s presumably easier to identify simple

More information

ENGN 2502 3D Photography / Winter 2012 / SYLLABUS http://mesh.brown.edu/3dp/

ENGN 2502 3D Photography / Winter 2012 / SYLLABUS http://mesh.brown.edu/3dp/ ENGN 2502 3D Photography / Winter 2012 / SYLLABUS http://mesh.brown.edu/3dp/ Description of the proposed course Over the last decade digital photography has entered the mainstream with inexpensive, miniaturized

More information

Common Core Unit Summary Grades 6 to 8

Common Core Unit Summary Grades 6 to 8 Common Core Unit Summary Grades 6 to 8 Grade 8: Unit 1: Congruence and Similarity- 8G1-8G5 rotations reflections and translations,( RRT=congruence) understand congruence of 2 d figures after RRT Dilations

More information

Geometry and Measurement

Geometry and Measurement The student will be able to: Geometry and Measurement 1. Demonstrate an understanding of the principles of geometry and measurement and operations using measurements Use the US system of measurement for

More information

Linear Programming. Solving LP Models Using MS Excel, 18

Linear Programming. Solving LP Models Using MS Excel, 18 SUPPLEMENT TO CHAPTER SIX Linear Programming SUPPLEMENT OUTLINE Introduction, 2 Linear Programming Models, 2 Model Formulation, 4 Graphical Linear Programming, 5 Outline of Graphical Procedure, 5 Plotting

More information

Classifying Manipulation Primitives from Visual Data

Classifying Manipulation Primitives from Visual Data Classifying Manipulation Primitives from Visual Data Sandy Huang and Dylan Hadfield-Menell Abstract One approach to learning from demonstrations in robotics is to make use of a classifier to predict if

More information

GEOMETRY CONCEPT MAP. Suggested Sequence:

GEOMETRY CONCEPT MAP. Suggested Sequence: CONCEPT MAP GEOMETRY August 2011 Suggested Sequence: 1. Tools of Geometry 2. Reasoning and Proof 3. Parallel and Perpendicular Lines 4. Congruent Triangles 5. Relationships Within Triangles 6. Polygons

More information

In mathematics, there are four attainment targets: using and applying mathematics; number and algebra; shape, space and measures, and handling data.

In mathematics, there are four attainment targets: using and applying mathematics; number and algebra; shape, space and measures, and handling data. MATHEMATICS: THE LEVEL DESCRIPTIONS In mathematics, there are four attainment targets: using and applying mathematics; number and algebra; shape, space and measures, and handling data. Attainment target

More information

Triangulation by Ear Clipping

Triangulation by Ear Clipping Triangulation by Ear Clipping David Eberly Geometric Tools, LLC http://www.geometrictools.com/ Copyright c 1998-2016. All Rights Reserved. Created: November 18, 2002 Last Modified: August 16, 2015 Contents

More information

Review of Fundamental Mathematics

Review of Fundamental Mathematics Review of Fundamental Mathematics As explained in the Preface and in Chapter 1 of your textbook, managerial economics applies microeconomic theory to business decision making. The decision-making tools

More information

THREE DIMENSIONAL GEOMETRY

THREE DIMENSIONAL GEOMETRY Chapter 8 THREE DIMENSIONAL GEOMETRY 8.1 Introduction In this chapter we present a vector algebra approach to three dimensional geometry. The aim is to present standard properties of lines and planes,

More information

Experiment 5: Magnetic Fields of a Bar Magnet and of the Earth

Experiment 5: Magnetic Fields of a Bar Magnet and of the Earth MASSACHUSETTS INSTITUTE OF TECHNOLOGY Department of Physics 8.02 Spring 2005 Experiment 5: Magnetic Fields of a Bar Magnet and of the Earth OBJECTIVES 1. To examine the magnetic field associated with a

More information

ENGINEERING METROLOGY

ENGINEERING METROLOGY ENGINEERING METROLOGY ACADEMIC YEAR 92-93, SEMESTER ONE COORDINATE MEASURING MACHINES OPTICAL MEASUREMENT SYSTEMS; DEPARTMENT OF MECHANICAL ENGINEERING ISFAHAN UNIVERSITY OF TECHNOLOGY Coordinate Measuring

More information

Limitations of Human Vision. What is computer vision? What is computer vision (cont d)?

Limitations of Human Vision. What is computer vision? What is computer vision (cont d)? What is computer vision? Limitations of Human Vision Slide 1 Computer vision (image understanding) is a discipline that studies how to reconstruct, interpret and understand a 3D scene from its 2D images

More information

John F. Cotton College of Architecture & Environmental Design California Polytechnic State University San Luis Obispo, California JOHN F.

John F. Cotton College of Architecture & Environmental Design California Polytechnic State University San Luis Obispo, California JOHN F. SO L I DMO D E L I N GAS A TO O LFO RCO N S T RU C T I N SO G LA REN V E LO PE S by John F. Cotton College of Architecture & Environmental Design California Polytechnic State University San Luis Obispo,

More information

H.Calculating Normal Vectors

H.Calculating Normal Vectors Appendix H H.Calculating Normal Vectors This appendix describes how to calculate normal vectors for surfaces. You need to define normals to use the OpenGL lighting facility, which is described in Chapter

More information

An Iterative Image Registration Technique with an Application to Stereo Vision

An Iterative Image Registration Technique with an Application to Stereo Vision An Iterative Image Registration Technique with an Application to Stereo Vision Bruce D. Lucas Takeo Kanade Computer Science Department Carnegie-Mellon University Pittsburgh, Pennsylvania 15213 Abstract

More information

ELECTRIC FIELD LINES AND EQUIPOTENTIAL SURFACES

ELECTRIC FIELD LINES AND EQUIPOTENTIAL SURFACES ELECTRIC FIELD LINES AND EQUIPOTENTIAL SURFACES The purpose of this lab session is to experimentally investigate the relation between electric field lines of force and equipotential surfaces in two dimensions.

More information

Thin Lenses Drawing Ray Diagrams

Thin Lenses Drawing Ray Diagrams Drawing Ray Diagrams Fig. 1a Fig. 1b In this activity we explore how light refracts as it passes through a thin lens. Eyeglasses have been in use since the 13 th century. In 1610 Galileo used two lenses

More information

How To Analyze Ball Blur On A Ball Image

How To Analyze Ball Blur On A Ball Image Single Image 3D Reconstruction of Ball Motion and Spin From Motion Blur An Experiment in Motion from Blur Giacomo Boracchi, Vincenzo Caglioti, Alessandro Giusti Objective From a single image, reconstruct:

More information

SYNTHESIZING FREE-VIEWPOINT IMAGES FROM MULTIPLE VIEW VIDEOS IN SOCCER STADIUM

SYNTHESIZING FREE-VIEWPOINT IMAGES FROM MULTIPLE VIEW VIDEOS IN SOCCER STADIUM SYNTHESIZING FREE-VIEWPOINT IMAGES FROM MULTIPLE VIEW VIDEOS IN SOCCER STADIUM Kunihiko Hayashi, Hideo Saito Department of Information and Computer Science, Keio University {hayashi,saito}@ozawa.ics.keio.ac.jp

More information

Metric Measurements on a Plane from a Single Image

Metric Measurements on a Plane from a Single Image TR26-579, Dartmouth College, Computer Science Metric Measurements on a Plane from a Single Image Micah K. Johnson and Hany Farid Department of Computer Science Dartmouth College Hanover NH 3755 Abstract

More information

Geometry of Vectors. 1 Cartesian Coordinates. Carlo Tomasi

Geometry of Vectors. 1 Cartesian Coordinates. Carlo Tomasi Geometry of Vectors Carlo Tomasi This note explores the geometric meaning of norm, inner product, orthogonality, and projection for vectors. For vectors in three-dimensional space, we also examine the

More information

Activity Set 4. Trainer Guide

Activity Set 4. Trainer Guide Geometry and Measurement of Solid Figures Activity Set 4 Trainer Guide Mid_SGe_04_TG Copyright by the McGraw-Hill Companies McGraw-Hill Professional Development GEOMETRY AND MEASUREMENT OF SOLID FIGURES

More information

How To Create A Surface From Points On A Computer With A Marching Cube

How To Create A Surface From Points On A Computer With A Marching Cube Surface Reconstruction from a Point Cloud with Normals Landon Boyd and Massih Khorvash Department of Computer Science University of British Columbia,2366 Main Mall Vancouver, BC, V6T1Z4, Canada {blandon,khorvash}@cs.ubc.ca

More information

Introduction Epipolar Geometry Calibration Methods Further Readings. Stereo Camera Calibration

Introduction Epipolar Geometry Calibration Methods Further Readings. Stereo Camera Calibration Stereo Camera Calibration Stereo Camera Calibration Stereo Camera Calibration Stereo Camera Calibration 12.10.2004 Overview Introduction Summary / Motivation Depth Perception Ambiguity of Correspondence

More information

Constrained Tetrahedral Mesh Generation of Human Organs on Segmented Volume *

Constrained Tetrahedral Mesh Generation of Human Organs on Segmented Volume * Constrained Tetrahedral Mesh Generation of Human Organs on Segmented Volume * Xiaosong Yang 1, Pheng Ann Heng 2, Zesheng Tang 3 1 Department of Computer Science and Technology, Tsinghua University, Beijing

More information

Geometric Camera Parameters

Geometric Camera Parameters Geometric Camera Parameters What assumptions have we made so far? -All equations we have derived for far are written in the camera reference frames. -These equations are valid only when: () all distances

More information

A unified representation for interactive 3D modeling

A unified representation for interactive 3D modeling A unified representation for interactive 3D modeling Dragan Tubić, Patrick Hébert, Jean-Daniel Deschênes and Denis Laurendeau Computer Vision and Systems Laboratory, University Laval, Québec, Canada [tdragan,hebert,laurendeau]@gel.ulaval.ca

More information

Can we calibrate a camera using an image of a flat, textureless Lambertian surface?

Can we calibrate a camera using an image of a flat, textureless Lambertian surface? Can we calibrate a camera using an image of a flat, textureless Lambertian surface? Sing Bing Kang 1 and Richard Weiss 2 1 Cambridge Research Laboratory, Compaq Computer Corporation, One Kendall Sqr.,

More information

Lesson 26: Reflection & Mirror Diagrams

Lesson 26: Reflection & Mirror Diagrams Lesson 26: Reflection & Mirror Diagrams The Law of Reflection There is nothing really mysterious about reflection, but some people try to make it more difficult than it really is. All EMR will reflect

More information

Line Segments, Rays, and Lines

Line Segments, Rays, and Lines HOME LINK Line Segments, Rays, and Lines Family Note Help your child match each name below with the correct drawing of a line, ray, or line segment. Then observe as your child uses a straightedge to draw

More information

Freehand Sketching. Sections

Freehand Sketching. Sections 3 Freehand Sketching Sections 3.1 Why Freehand Sketches? 3.2 Freehand Sketching Fundamentals 3.3 Basic Freehand Sketching 3.4 Advanced Freehand Sketching Key Terms Objectives Explain why freehand sketching

More information

An Energy-Based Vehicle Tracking System using Principal Component Analysis and Unsupervised ART Network

An Energy-Based Vehicle Tracking System using Principal Component Analysis and Unsupervised ART Network Proceedings of the 8th WSEAS Int. Conf. on ARTIFICIAL INTELLIGENCE, KNOWLEDGE ENGINEERING & DATA BASES (AIKED '9) ISSN: 179-519 435 ISBN: 978-96-474-51-2 An Energy-Based Vehicle Tracking System using Principal

More information

Interactive 3D Scanning Without Tracking

Interactive 3D Scanning Without Tracking Interactive 3D Scanning Without Tracking Matthew J. Leotta, Austin Vandergon, Gabriel Taubin Brown University Division of Engineering Providence, RI 02912, USA {matthew leotta, aev, taubin}@brown.edu Abstract

More information

Projective Geometry: A Short Introduction. Lecture Notes Edmond Boyer

Projective Geometry: A Short Introduction. Lecture Notes Edmond Boyer Projective Geometry: A Short Introduction Lecture Notes Edmond Boyer Contents 1 Introduction 2 11 Objective 2 12 Historical Background 3 13 Bibliography 4 2 Projective Spaces 5 21 Definitions 5 22 Properties

More information

VRSPATIAL: DESIGNING SPATIAL MECHANISMS USING VIRTUAL REALITY

VRSPATIAL: DESIGNING SPATIAL MECHANISMS USING VIRTUAL REALITY Proceedings of DETC 02 ASME 2002 Design Technical Conferences and Computers and Information in Conference Montreal, Canada, September 29-October 2, 2002 DETC2002/ MECH-34377 VRSPATIAL: DESIGNING SPATIAL

More information

ARC 3D Webservice How to transform your images into 3D models. Maarten Vergauwen info@arc3d.be

ARC 3D Webservice How to transform your images into 3D models. Maarten Vergauwen info@arc3d.be ARC 3D Webservice How to transform your images into 3D models Maarten Vergauwen info@arc3d.be Overview What is it? How does it work? How do you use it? How to record images? Problems, tips and tricks Overview

More information

Projective Geometry. Projective Geometry

Projective Geometry. Projective Geometry Euclidean versus Euclidean geometry describes sapes as tey are Properties of objects tat are uncanged by rigid motions» Lengts» Angles» Parallelism Projective geometry describes objects as tey appear Lengts,

More information

Using Many Cameras as One
