Calibrating Pan-Tilt Cameras with Telephoto Lenses
Xinyu Huang, Jizhou Gao, and Ruigang Yang
Graphics and Vision Technology Lab (GRAVITY)
Center for Visualization and Virtual Environments
University of Kentucky, USA

Abstract. Pan-tilt cameras are widely used in surveillance networks. These cameras are often equipped with telephoto lenses to capture objects at a distance. Such a camera makes full-metric calibration more difficult, since the projection with a telephoto lens is close to orthographic. This paper discusses the problems caused by pan-tilt cameras with long focal lengths and presents a method to improve the calibration accuracy. Experiments show that our method reduces the re-projection errors by an order of magnitude compared to popular homography-based approaches.

1 Introduction

A surveillance system usually consists of several inexpensive wide-field-of-view (WFOV) fixed cameras and pan-tilt-zoom (PTZ) cameras. The WFOV cameras are often used to provide an overall view of the scene, while a few zoom cameras are controlled by a pan-tilt unit (PTU) to capture close-up views of the subject of interest. The control of a PTZ camera is typically done manually using a joystick. However, in order to automate this process, calibration of the entire camera network is necessary.

One of our driving applications is to capture and identify subjects using biometric features such as iris and face over a long range. A high-resolution camera with a narrow field of view (NFOV) and a telephoto lens is used to capture the rich details of biometric patterns. For example, a typical iris image should have 100 to 140 pixels in iris radius to obtain good iris recognition performance [1]. This means that, in order to capture the iris image at over three meters using a video camera, we have to use a 450mm lens (assuming a typical sensor size).
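The 450mm figure can be sanity-checked with a thin-lens magnification argument. The numbers below are our own assumptions where this copy of the text elides them (iris size and sensor pixel pitch), not values from the paper:

```python
# Required focal length so an iris spans ~120 px in radius at 3 m.
# Assumed values: a human iris is roughly 12 mm across, and the sensor
# pixel pitch is taken as 7.8 um; the paper's exact sensor size is elided here.
iris_radius_mm = 6.0
pixel_pitch_mm = 0.0078
distance_mm = 3000.0
pixels_required = 120.0              # middle of the 100-140 px recommendation [1]

# For distance >> f, (image size) / (object size) ~ f / distance.
image_radius_mm = pixels_required * pixel_pitch_mm
focal_mm = distance_mm * image_radius_mm / iris_radius_mm
print(round(focal_mm))               # ~468 mm, i.e. a 450 mm class lens
```

Under these assumed values the result lands in the 450mm class, consistent with the lens choice above.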
If we want to capture both eyes (e.g., the entire face) at once, the required face image resolution is well beyond the typical resolution of a video camera. In order to provide adequate coverage over a practical working volume, PTZ cameras have to be used. The simplest way to localize the region of interest (ROI) is to pan and tilt the PTZ camera iteratively until the region is approximately in the center of the field of view [2]. This is time-consuming and only suitable for still objects. However, if the PTZ cameras are fully calibrated, including the axes of rotation, the ROI can be localized rapidly with a single pan and tilt operation.

In this paper we discuss the degeneracy caused by cameras with telephoto lenses and develop a method to calibrate such a system with significantly improved accuracy. The remainder of this paper is organized as follows. We first briefly overview related work in Section 2. In Section 3, we describe our system and a calibration
method for long focal length cameras. Section 4 contains experimental results. Finally, a summary is given in Section 5. We also present in the appendix a simple method to calculate the pan and tilt angles when the camera coordinate system is not aligned with the pan-tilt coordinate system.

2 Related Work

It is generally considered that camera calibration reached its maturity in the late 1990s, and a large body of work exists in this area. In the photogrammetry community, a calibration object with known and accurate geometry is required. With markers of known 3D positions, camera calibration can be done efficiently and accurately (e.g., [3], [4]). In computer vision, a planar pattern such as a checkerboard is often used to avoid the requirement of a precisely built 3D calibration object (e.g., [5], [6]). These methods estimate intrinsic and extrinsic parameters, including radial distortion, from homographies between the planar pattern at different positions and the image plane. Self-calibration estimates fixed or varying intrinsic parameters without the knowledge of special calibration objects and with unknown camera motions (e.g., [7], [8]). Furthermore, self-calibration can compute a metric reconstruction from an image sequence. Besides the projective camera model, the affine camera model, in which the camera center lies on the plane at infinity, is proposed in [9], [10]. Quan presents a self-calibration method for an affine camera in [11]. However, the affine camera model should not be used when many feature points lie at different depths [12].

For the calibration of PTZ cameras, Hartley proposed a self-calibration method for stationary cameras undergoing pure rotations in [13]. Agapito extended this method in [14] to deal with varying intrinsic parameters. Sinha and Pollefeys proposed a method for calibrating pan-tilt-zoom cameras in outdoor environments in [15].
Their method determines intrinsic parameters over the full range of zoom settings. The methods above approximate PTZ cameras as rotating cameras without translations, since the translations are very small compared to the distance of the scene points. Furthermore, these methods are based on computing the absolute conic from a set of inter-image homographies. In [16], Wang and Kang present an error analysis of intrinsic parameters caused by translation. They suggest self-calibrating using distant scenes, larger rotation angles, and more varied homographies in order to reduce the effects of camera translation.

The work most similar to ours is [17, 18]. In those papers, Davis proposed a general pan-tilt model in which the pan and tilt axes are arbitrary axes in 3D space. They used a bright LED to create a virtual 3D calibration object and a Kalman filter tracking system to solve the synchronization between different cameras. However, they did not discuss the calibration problems caused by telephoto lenses. Furthermore, their method cannot be easily applied to digital still cameras, with which it would be tedious to capture hundreds or even thousands of frames.

3 Method

In this section, we first briefly describe the purpose of our system. Then, we discuss the calibration of long focal length cameras in detail.
System description

The goal of our system is to capture face or iris images over a long range with a resolution high enough for biometric recognition. As shown in Fig. 1, a prototype of our system consists of two stereo cameras and a NFOV high-resolution (6M pixels) still camera. The typical focal length for the pan-tilt camera is 300mm, while previous papers dealing with pan-tilt cameras have reported the use of lenses between 1mm and 70mm. When a person walks into the working area of the stereo cameras, facial features are detected in each frame and their 3D positions can easily be determined by triangulation. The pan-tilt camera is steered so that the face is in the center of the observed image. A high-resolution image with enough biometric details can then be captured. Since the field of view of the pan-tilt camera is only about 8.2 degrees, the ROI (e.g., the eye or entire face) is likely to be out of the field of view if the calibration is not accurate enough.

Fig. 1. System setup with two WFOV cameras and a pan-tilt camera: 1) pan-tilt camera (Nikon 300mm), 2) laser pointer, 3) stereo camera (4mm), 4) pan axis, 5) tilt axis, 6) flash.

Calibration

In [5], given one homography H = [h_1, h_2, h_3] between a planar pattern at one position and the image plane, two constraints on the absolute conic ω can be formulated as in Eq. (1):

h_1^T ω h_2 = 0
h_1^T ω h_1 = h_2^T ω h_2      (1)

By imaging the planar pattern n times at different orientations, a linear system Ac = 0 is formed, where A is a 2n × 6 matrix built from the observed homographies and c represents ω as a 6 × 1 vector. Once c is solved, the intrinsic matrix K can be recovered by Cholesky factorization, since ω = (KK^T)^{-1}.

Equivalently, one could rotate the camera instead of moving a planar pattern. This is the key idea in self-calibration of pan-tilt cameras ([15], [13]). First, inter-image homographies are computed robustly. Second, the absolute conic ω is estimated by a
linear system ω = (H_i)^{-T} ω (H_i)^{-1}, where H_i is the homography between each view i and a reference view. Then, Cholesky decomposition of ω is applied to compute the intrinsic matrix K. Furthermore, a Maximum Likelihood Estimation (MLE) refinement can be applied using the above closed-form solution as the initial guess. However, the difference between the closed-form solution and the MLE refinement is small [12].

As mentioned in [5], a second homography provides no new constraints if its plane is parallel to that of the first. In order to avoid this degeneracy and generate an over-determined system, the planar pattern has to be imaged many times at different orientations. This is also true for the self-calibration of rotating cameras. If conditions are near singular, the matrix A formed from the observed homographies will be ill-conditioned, making the solution inaccurate. Generally, the degeneracy is easy to avoid when the focal length is short; for example, we only need to change the orientation at each position of the planar pattern. However, this is not true for long focal length cameras. As the focal length increases and the field of view decreases, the camera's projection becomes less projective and more orthographic. The observed homographies then contain so many depth ambiguities that the matrix A becomes ill-conditioned and the solution very sensitive to small perturbations. If the projection is purely orthographic, the observed homographies cannot provide any depth information, no matter where we put the planar pattern or how we rotate the camera. In summary, traditional calibration methods based on observed homographies are in theory not accurate for long focal length cameras. We also demonstrate this point with real data in the experiments section.

Fig. 2. Pan-tilt camera model: a 3D point X observed by the stereo cameras at x_1 and x_2 is related to the pan-tilt camera (pan = 0, tilt = 0) through the world-to-camera transformation (R, t), the PTU, and the camera-to-PTU rotation R*.
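Both the homography-based linear step and its failure mode can be made concrete. The sketch below (our own numpy code with synthetic homographies H = K[r1 r2 t] and assumed parameter values, not the authors' implementation) stacks the constraints of Eq. (1) into Ac = 0 and shows how the conditioning of A degrades as the focal length grows:

```python
import numpy as np

def constraint_rows(H):
    # The two constraints of Eq. (1) for one homography H = [h1 h2 h3],
    # written as rows over c = (w11, w12, w22, w13, w23, w33).
    def v(i, j):
        a, b = H[:, i], H[:, j]
        return np.array([a[0]*b[0], a[0]*b[1] + a[1]*b[0], a[1]*b[1],
                         a[2]*b[0] + a[0]*b[2], a[2]*b[1] + a[1]*b[2], a[2]*b[2]])
    return [v(0, 1), v(0, 0) - v(1, 1)]   # h1'wh2 = 0 and h1'wh1 = h2'wh2

def intrinsics_from_homographies(Hs):
    # Solve Ac = 0 for the absolute conic, then recover K from w = (KK^T)^-1
    # by an upper-triangular Cholesky factorization.
    A = np.array([row for H in Hs for row in constraint_rows(H)])
    c = np.linalg.svd(A)[2][-1]           # null vector (smallest singular value)
    w = np.array([[c[0], c[1], c[3]], [c[1], c[2], c[4]], [c[3], c[4], c[5]]])
    if w[0, 0] < 0:
        w = -w                            # w is recovered only up to scale and sign
    J = np.eye(3)[::-1]                   # exchange matrix: upper-triangular Cholesky
    K = J @ np.linalg.cholesky(J @ np.linalg.inv(w) @ J) @ J
    return K / K[2, 2]

def rot(ax, ay):
    cx, sx, cy, sy = np.cos(ax), np.sin(ax), np.cos(ay), np.sin(ay)
    return (np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]]) @
            np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]))

def synthetic_A_condition(f, n_views=20, seed=0):
    # Condition number of A for a camera with focal length f (in pixels)
    # observing a plane under random modest rotations.
    rng = np.random.default_rng(seed)
    K = np.array([[f, 0, 320.0], [0, f, 240.0], [0, 0, 1.0]])
    rows = []
    for _ in range(n_views):
        R = rot(*rng.uniform(-0.5, 0.5, 2))
        t = np.array([*rng.uniform(-0.5, 0.5, 2), rng.uniform(4.0, 6.0)])
        H = K @ np.column_stack([R[:, 0], R[:, 1], t])
        rows += constraint_rows(H / H[2, 2])  # homographies are known up to scale
    return np.linalg.cond(np.array(rows))
```

With this synthetic geometry, `synthetic_A_condition(5000.0)` comes out several orders of magnitude larger than `synthetic_A_condition(50.0)`, which is the numerical face of the near-orthographic degeneracy, while `intrinsics_from_homographies` recovers K well from noise-free homographies at short focal lengths.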
The best way to calibrate a long focal length camera is to create 2D-3D correspondences directly. One could use a 3D calibration object, but this approach is not only costly but also impractical given the large working volume we would like to cover. In our system, 3D feature points are triangulated by the stereo cameras, which does not introduce the ambiguities of the homography-based methods. With a set of known 2D and 3D features, we can estimate the intrinsic parameters and the relative transformation between the camera and the pan-tilt unit. The pan-tilt model is shown in
Fig. 2 and is written as

x = K R*^{-1} R_tilt R_pan R* [R t] X      (2)

where K is the intrinsic matrix, and R and t are the extrinsic parameters of the pan-tilt camera at the reference view, which is pan = 0 and tilt = 0 in our setting. X and x are 3D and 2D feature points. R_pan and R_tilt are rotation matrices around the pan and tilt axes. R* is the rotation matrix between the coordinates of the camera and the pan-tilt unit. We do not treat the pan and tilt axes as two arbitrary axes in 3D space as in [17], since the translation between the two coordinate systems is very small (usually only a few millimeters in our setting) and a full-scale simulation shows that adding the translational offset yields little accuracy improvement.

Based on the pan-tilt model in Eq. (2), we can estimate the complete set of parameters using MLE to minimize the re-projected geometric distances. This is given by the following functional:

argmin_{R*, K, R, t} Σ_{i=1}^{n} Σ_{j=1}^{m} || x_ij − x̂_ij(K, R*, R_pan, R_tilt, R, t, X_ij) ||^2      (3)

The method of acquiring calibration data in [17] is not applicable in our system, because our pan-tilt camera is not a video camera that can capture a video sequence of LED points; a commodity video camera typically does not support both a long focal length and high-resolution images. Here we propose another practical method to acquire calibration data from a still camera. We attach a laser pointer close to the pan-tilt camera, as shown in Fig. 1. The laser's reflection on scene surfaces generates a 3D point that can be easily tracked. The laser pointer rotates with the pan-tilt camera so that its laser dot can be observed by the pan-tilt camera at most pan and tilt settings. In our setup, we mount the laser pointer on the tilt axis. A white board is placed at several positions between the near plane and the far plane within the working area of the two wide-view fixed cameras.
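Eq. (2) can be read as a chain of coordinate changes. The sketch below (our own helper names, assuming R_pan rotates about the PTU's y-axis and R_tilt about its x-axis, as in Appendix A) implements it directly; the MLE of Eq. (3) would then minimize the squared distances between observed points and these predictions:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def project(X, K, R, t, R_star, pan, tilt):
    # Eq. (2): x ~ K R*^-1 R_tilt R_pan R* [R t] X.
    X_cam = R @ X + t                         # world -> reference camera frame
    X_ptu = R_star @ X_cam                    # camera frame -> pan-tilt frame
    X_rot = rot_x(tilt) @ rot_y(pan) @ X_ptu  # apply pan, then tilt
    x = K @ (R_star.T @ X_rot)                # R* is a rotation, so R*^-1 = R*^T
    return x[:2] / x[2]                       # perspective division
```

With identity R*, zero pan/tilt, and a point on the optical axis, the projection lands at the principal point; a small pan shifts it along the image x-axis, which is the behavior Eq. (3) penalizes against the observed laser dots.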
For each pan and tilt step, three images are captured: one by the pan-tilt camera and one by each of the two fixed cameras. A 3D point is created by triangulation from the two fixed cameras. The white board does not need to be very large, since we can always move it around during the data acquisition process so that the 3D points cover the entire working area. The calibration method is summarized as Algorithm 1.

In order to compare our method with methods based on observed homographies, we formulate a calibration framework similar to the methods in [15] and [12], summarized as Algorithm 2. An alternative to step 4 in Algorithm 2 is to build a linear system ω = (H_i)^{-T} ω (H_i)^{-1} and solve for ω; the intrinsic matrix K is then obtained from the Cholesky decomposition ω^{-1} = K K^T. However, this closed-form solution often fails, since the infinite homography is hard to estimate with narrow fields of view.

After the calibration step, R* and the intrinsic and extrinsic parameters of the three cameras are known. Hence, we can solve the pan and tilt angles easily (see Appendix A for details) for almost arbitrary 3D points triangulated by the stereo cameras.
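The triangulation step above (and step 4 of Algorithm 1) turns the two laser-dot observations into a 3D point. A standard linear (DLT) triangulation, sketched below in numpy (not the authors' implementation), suffices for this step:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    # Linear DLT triangulation: each observed pixel (u, v) with 3x4 camera
    # matrix P contributes two rows, u*P[2] - P[0] and v*P[2] - P[1]; the
    # homogeneous 3D point is the right null vector of the stacked system.
    A = np.stack([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]       # de-homogenize
```

In the pipeline of Algorithm 1, blob detection and the epipolar constraint supply the matched pixels x1 and x2, and the RANSAC plane fit downstream prunes triangulated outliers.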
Algorithm 1: our calibration method for a pan-tilt camera with a long focal length.

Input: observed laser point images from the three cameras.
Output: intrinsic matrix K, extrinsic parameters R, t, and rotation matrix R* between the coordinates of the camera and the PTU.

1. Calibrate the stereo cameras and the reference view of the pan-tilt camera using [19].
2. Rectify the stereo images such that epipolar lines are parallel with the y-axis (optional).
3. Capture laser points on a 3D plane with the three cameras at each pan and tilt setting in the working area.
4. Based on blob detection and the epipolar constraint, find the two laser points in the stereo cameras. Generate 3D points by triangulating the two laser points.
5. Fit a plane to each board position using RANSAC.
6. Remove outlier 3D points based on the fitted 3D plane.
7. Estimate R*, K, R, t by minimizing Eq. (3).

Algorithm 2: calibration method for a pan-tilt camera with a long focal length based on homographies.

Input: images captured at each pan and tilt setting.
Output: intrinsic matrix K and rotation matrix R* between the coordinates of the camera and the PTU.

1. Detect features using the Scale-Invariant Feature Transform (SIFT) [20] and find correspondences between neighboring images.
2. Estimate homographies robustly using RANSAC.
3. Compute the homography between each image and the reference view (pan = 0, tilt = 0).
4. Estimate K using the Calibration Toolbox [19].
5. Estimate R* and refine the intrinsic matrix K by minimizing

argmin_{R*, K} Σ_{i=1}^{n} Σ_{j=1}^{m} || x_j^i − K R*^{-1} R_tilt R_pan R* K^{-1} x_ref^i ||^2      (4)

where x_j^i and x_ref^i are the i-th feature point in the j-th view and in the reference view, respectively.

4 Experiments

Here we present experimental results from two fixed cameras (Dragonfly2 DR2-HICOL) and a pan-tilt still camera (Nikon D70). First, we compare the calibration accuracy with short and long focal length lenses using the traditional homography-based method.
Then, we demonstrate that our calibration method significantly improves accuracy for telephoto lenses. In order to validate the calibration accuracy, we generate about 500 3D testing points randomly distributed over the whole working area, following steps 2 to 4 of Algorithm 1, i.e., tracking and triangulating the laser dot. The testing points are different from the points used for calibration.

First we present the calibration results of the still camera with a short (18mm) and a long (300mm) focal length lens. For simplicity, we assume the coordinate systems of the pan-tilt camera and the pan-tilt unit are aligned perfectly, i.e., R* is the identity matrix. We use the Calibration Toolbox [19] to calibrate the reference view
of the pan-tilt camera and the stereo cameras. In order to reduce the ambiguities caused by the long focal length, we capture over 40 checkerboard patterns at different orientations for the pan-tilt camera. Table 1 shows the estimated intrinsic matrix K and the RMS error on the calibration data. The uncertainty of the focal length with the 300mm lens is about 10 times larger than with the 18mm lens, although the RMS errors on the calibration data are similar in both cases.

Table 1. The comparison between the short and long focal length cameras. α and β are the focal lengths; µ_0 and ν_0 is the principal point; RMS is given in pixels. The uncertainties of the principal point for the 300mm camera cannot be estimated by the Calibration Toolbox [19].

Fig. 3 shows the distributions of re-projection errors for the 500 testing points with the 18mm and 300mm cameras. From this figure, we find that calibration is quite accurate for the short focal length camera even though we assume R* is the identity matrix. Given the high-resolution image, the relative errors for the 18mm and 300mm cameras are about 1.3% and 30% respectively, computed as the ratio of the mean pixel error to the image width. Furthermore, many of the test points fall outside the field of view of the 300mm camera.

Fig. 3. Distributions of re-projection error (in pixels) over the 500 testing points: mean 37.9 for the 18mm camera and mean 898.6 for the 300mm camera.

We then recalibrate the 300mm case with the methods outlined in Algorithms 1 and 2, both of which include the estimation of R*. About 600 3D points are sampled for calibration over the working area in Algorithm 1. We pre-calibrate the reference view of the pan-tilt camera as the initial guess. After calibration, we validate the accuracy with 500 3D points. Fig. 4 shows the distributions of re-projection errors from the two different methods.
Our method is about 25 times more accurate than the homography-based one. The relative errors from Algorithms 1 and 2 are about 1.2% and 29% respectively. It should be noted that R* cannot be estimated accurately from observed homographies; hence, the percentage error from Algorithm 2 remains very large. In fact, the improvement
over assuming an identity R* is small. Table 2 shows the results for the intrinsic matrix K, θ_x, and θ_y after MLE refinement. Here we decompose R* into two rotation matrices: a rotation around the x-axis by θ_x degrees and a rotation around the y-axis by θ_y degrees.

Fig. 4. Distributions of re-projection error (in pixels) over the 500 testing points for Algorithms 1 and 2; the mean error for Algorithm 1 is 34.6 pixels.

Table 2. The comparison between Algorithms 1 and 2. α and β are the focal lengths; µ_0 and ν_0 is the principal point; θ_x and θ_y are the rotation angles between the pan-tilt camera and the pan-tilt unit.

5 Conclusion

This paper shows that calibration methods based on observed homographies are not suitable for cameras with telephoto (long focal length) lenses, because of the ambiguities induced by the near-orthographic projection. We develop a method to calibrate a pan-tilt camera with a long focal length in a surveillance network. Instead of using a large, precisely manufactured calibration object, our key idea is to use fixed stereo cameras to create a large collection of 3D calibration points. Using these 3D points allows full metric calibration over a large area. Experimental results show that the relative re-projection error is reduced from 30% to 1.2% with our method. In future work, we plan to extend our calibration method to auto-zoom cameras and build a complete surveillance system that can adjust zoom settings automatically by estimating the object's size.

References

1. Daugman, J.: How Iris Recognition Works. In: ICIP (2002)
2. Guo, G., Jones, M., Beardsley, P.: A System for Automatic Iris Capturing. MERL TR (2005)
3. Tsai, R.Y.: A Versatile Camera Calibration Technique for High-Accuracy 3D Machine Vision Metrology Using Off-the-Shelf TV Cameras and Lenses. IEEE Journal of Robotics and Automation 3(4) (1987)
4. Faugeras, O.: Three-Dimensional Computer Vision: A Geometric Viewpoint. MIT Press (1993)
5. Zhang, Z.: A Flexible New Technique for Camera Calibration. PAMI 22 (2000)
6. Heikkila, J., Silven, O.: A Four-Step Camera Calibration Procedure with Implicit Image Correction. In: CVPR (1997)
7. Pollefeys, M., Koch, R., Van Gool, L.: Self-Calibration and Metric Reconstruction in Spite of Varying and Unknown Internal Camera Parameters. In: ICCV (1997)
8. Pollefeys, M.: Self-Calibration and Metric 3D Reconstruction from Uncalibrated Image Sequences. PhD thesis, K.U.Leuven (1999)
9. Mundy, J., Zisserman, A.: Geometric Invariance in Computer Vision. MIT Press (1992)
10. Aloimonos, J.Y.: Perspective Approximations. Image and Vision Computing 8 (August 1990)
11. Quan, L.: Self-Calibration of an Affine Camera from Multiple Views. International Journal of Computer Vision 19(1) (1996)
12. Hartley, R.I., Zisserman, A.: Multiple View Geometry in Computer Vision. Cambridge University Press (2000)
13. Hartley, R.I.: Self-Calibration of Stationary Cameras. International Journal of Computer Vision 22(1) (1997)
14. de Agapito, L., Hayman, E., Reid, I.: Self-Calibration of a Rotating Camera with Varying Intrinsic Parameters. In: BMVC (1998)
15. Sinha, S.N., Pollefeys, M.: Towards Calibrating a Pan-Tilt-Zoom Camera Network. In: OMNIVIS, the Fifth Workshop on Omnidirectional Vision (in conjunction with ECCV 2004) (2004)
16. Wang, L., Kang, S.B.: Error Analysis of Pure Rotation-Based Self-Calibration. PAMI 26(2) (2004)
17. Davis, J., Chen, X.: Calibrating Pan-Tilt Cameras in Wide-Area Surveillance Networks. In: ICCV, Volume 1 (2003)
18. Chen, X., Davis, J.: Wide Area Camera Calibration Using Virtual Calibration Objects. In: CVPR (2000)
19. Bouguet, J.Y.: Camera Calibration Toolbox for Matlab. caltech.edu/bouguetj/calib_doc/
20. Lowe, D.G.: Distinctive Image Features from Scale-Invariant Keypoints. International Journal of Computer Vision 60(2) (2004)

Appendix A. Solving Pan and Tilt Angles

Here we discuss how to solve for the pan and tilt angles so that the projection of an arbitrary point X in 3D space lies at the center of the image plane. We assume there is a rotation between the pan-tilt coordinate system and the camera's. Because of the dependency within the pan-tilt unit, namely that the tilt axis depends on the pan axis, the solution is not as simple as it appears. To address this problem, we back-project the image center to a line L_2. The center of projection and the point X form another line L_1. After the calibration steps described in Section 3, L_1 and L_2 are transformed into L'_1 and L'_2 in the coordinate system of the pan-tilt unit. Hence, the problem reduces to panning around the y-axis and tilting around the x-axis to make L'_1 and L'_2 coincident, or as close as possible to each other. If L'_1 and L'_2 are represented by their Plücker matrices, one
method to compute the transformation of an arbitrary 3D line to another line by performing only rotations around the x and y axes is to minimize the following functional:

argmin_{R_x, R_y, λ} || L''_1 − λ L'_2 ||^2      (5)

where λ is a scalar, L'_2 is the 6 × 1 vector of Plücker coordinates of L'_2, and L''_1 is the 6 × 1 vector of Plücker coordinates of the line with Plücker matrix (R_y R_x) L'_1 (R_y R_x)^T, with R_x and R_y rotation matrices around the x and y axes.

Fig. 5. Solving the pan and tilt angles that bring L'_1 onto L'_2: tilting by ϕ_1 or ϕ_2 rotates (x_1, y_1, z_1) to (a_1, b_1, c_1) or (a_2, b_2, c_2), after which the pan angles ϑ_1 and ϑ_2 are computed.

However, the problem can be further simplified, because L'_1 and L'_2 intersect at the origin of the pan-tilt unit in our model. As shown in Fig. 5, we want to pan and tilt line L'_1 to coincide with line L'_2. Assuming both lines have unit direction vectors (x_1, y_1, z_1) and (x_2, y_2, z_2), the tilt angles are first computed by Eq. (6):

ϕ_1 = arctan(y_1 / z_1) − arctan(y_2 / r)
ϕ_2 = arctan(y_1 / z_1) + arctan(y_2 / r)
r = sqrt(y_1^2 + z_1^2 − y_2^2)      (6)

If (y_1^2 + z_1^2 − y_2^2) is less than 0, the two conics C_1 and C_2 do not intersect, which means no exact solution exists. However, this almost never happens in practice, since the rotation between the pan-tilt unit and the camera is small. After tilting, (x_1, y_1, z_1) is rotated to (a_1, b_1, c_1) or (a_2, b_2, c_2). The pan angles are then computed by Eq. (7):

ϑ_1 = arctan(z_2 / x_2) − arctan(c_1 / a_1)
ϑ_2 = arctan(z_2 / x_2) − arctan(c_2 / a_2)      (7)

Hence, two solutions, (ϕ_1, ϑ_1) and (ϕ_2, ϑ_2), are obtained. We choose the one with the minimum rotation angles as the final solution.
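The two-step solution above can be sketched in code. This is our own reading of Eqs. (6)-(7): the sign conventions of the rotations and the handling of the two tilt branches are assumptions that the printed equations leave implicit, and atan2 is used for quadrant safety:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def pan_tilt_to_align(d1, d2):
    # Tilt about x, then pan about y, to bring line direction d1 onto d2
    # (both lines pass through the PTU origin), following Eqs. (6)-(7).
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    x2, y2, z2 = d2
    r2 = d1[1]**2 + d1[2]**2 - y2**2
    if r2 < 0:
        raise ValueError("no exact solution: C1 and C2 do not intersect")
    r = np.sqrt(r2)
    candidates = []
    for tilt in (np.arctan2(d1[1], d1[2]) - np.arctan2(y2, r),
                 np.arctan2(d1[1], d1[2]) - np.arctan2(y2, -r)):  # two branches
        a, b, c = rot_x(tilt) @ d1          # after tilting, b equals y2
        pan = np.arctan2(c, a) - np.arctan2(z2, x2)
        candidates.append((pan, tilt))
    # choose the minimum rotation angles as the final solution
    return min(candidates, key=lambda s: abs(s[0]) + abs(s[1]))
```

Applying `rot_y(pan) @ rot_x(tilt)` with the returned angles rotates d1 exactly onto d2, which mirrors the tilt-then-pan order used in the derivation above.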
More informationProjective Reconstruction from Line Correspondences
Projective Reconstruction from Line Correspondences Richard I. Hartley G.E. CRD, Schenectady, NY, 12301. Abstract The paper gives a practical rapid algorithm for doing projective reconstruction of a scene
More informationEFFICIENT VEHICLE TRACKING AND CLASSIFICATION FOR AN AUTOMATED TRAFFIC SURVEILLANCE SYSTEM
EFFICIENT VEHICLE TRACKING AND CLASSIFICATION FOR AN AUTOMATED TRAFFIC SURVEILLANCE SYSTEM Amol Ambardekar, Mircea Nicolescu, and George Bebis Department of Computer Science and Engineering University
More informationAcquiring High-resolution Face Images in Outdoor Environments: A Master-slave Calibration Algorithm
Acquiring High-resolution Face Images in Outdoor Environments: A Master-slave Calibration Algorithm João C. Neves, Juan C. Moreno, Silvio Barra 2 and Hugo Proença IT - Instituto de Telecomunicações, University
More informationAutomatic Labeling of Lane Markings for Autonomous Vehicles
Automatic Labeling of Lane Markings for Autonomous Vehicles Jeffrey Kiske Stanford University 450 Serra Mall, Stanford, CA 94305 jkiske@stanford.edu 1. Introduction As autonomous vehicles become more popular,
More informationEpipolar Geometry and Visual Servoing
Epipolar Geometry and Visual Servoing Domenico Prattichizzo joint with with Gian Luca Mariottini and Jacopo Piazzi www.dii.unisi.it/prattichizzo Robotics & Systems Lab University of Siena, Italy Scuoladi
More informationUsing Many Cameras as One
Using Many Cameras as One Robert Pless Department of Computer Science and Engineering Washington University in St. Louis, Box 1045, One Brookings Ave, St. Louis, MO, 63130 pless@cs.wustl.edu Abstract We
More informationSelf-Calibrated Structured Light 3D Scanner Using Color Edge Pattern
Self-Calibrated Structured Light 3D Scanner Using Color Edge Pattern Samuel Kosolapov Department of Electrical Engineering Braude Academic College of Engineering Karmiel 21982, Israel e-mail: ksamuel@braude.ac.il
More informationStatic Environment Recognition Using Omni-camera from a Moving Vehicle
Static Environment Recognition Using Omni-camera from a Moving Vehicle Teruko Yata, Chuck Thorpe Frank Dellaert The Robotics Institute Carnegie Mellon University Pittsburgh, PA 15213 USA College of Computing
More informationAffine Object Representations for Calibration-Free Augmented Reality
Affine Object Representations for Calibration-Free Augmented Reality Kiriakos N. Kutulakos kyros@cs.rochester.edu James Vallino vallino@cs.rochester.edu Computer Science Department University of Rochester
More informationARTICLE. Which has better zoom: 18x or 36x?
ARTICLE Which has better zoom: 18x or 36x? Which has better zoom: 18x or 36x? In the world of security cameras, 18x zoom can be equal to 36x. More specifically, a high-resolution security camera with 18x
More informationReal-Time Automated Simulation Generation Based on CAD Modeling and Motion Capture
103 Real-Time Automated Simulation Generation Based on CAD Modeling and Motion Capture Wenjuan Zhu, Abhinav Chadda, Ming C. Leu and Xiaoqing F. Liu Missouri University of Science and Technology, zhuwe@mst.edu,
More informationTracking in flussi video 3D. Ing. Samuele Salti
Seminari XXIII ciclo Tracking in flussi video 3D Ing. Tutors: Prof. Tullio Salmon Cinotti Prof. Luigi Di Stefano The Tracking problem Detection Object model, Track initiation, Track termination, Tracking
More informationClassifying Manipulation Primitives from Visual Data
Classifying Manipulation Primitives from Visual Data Sandy Huang and Dylan Hadfield-Menell Abstract One approach to learning from demonstrations in robotics is to make use of a classifier to predict if
More informationOptical Tracking Using Projective Invariant Marker Pattern Properties
Optical Tracking Using Projective Invariant Marker Pattern Properties Robert van Liere, Jurriaan D. Mulder Department of Information Systems Center for Mathematics and Computer Science Amsterdam, the Netherlands
More informationHigh Accuracy Articulated Robots with CNC Control Systems
Copyright 2012 SAE International 2013-01-2292 High Accuracy Articulated Robots with CNC Control Systems Bradley Saund, Russell DeVlieg Electroimpact Inc. ABSTRACT A robotic arm manipulator is often an
More informationA Prototype For Eye-Gaze Corrected
A Prototype For Eye-Gaze Corrected Video Chat on Graphics Hardware Maarten Dumont, Steven Maesen, Sammy Rogmans and Philippe Bekaert Introduction Traditional webcam video chat: No eye contact. No extensive
More informationDESIGNING A HIGH RESOLUTION 3D IMAGING DEVICE FOR FOOTPRINT AND TIRE TRACK IMPRESSIONS AT CRIME SCENES 1 TR-CIS-0416-12. Ruwan Egoda Gamage
DESIGNING A HIGH RESOLUTION 3D IMAGING DEVICE FOR FOOTPRINT AND TIRE TRACK IMPRESSIONS AT CRIME SCENES 1 TR-CIS-0416-12 Ruwan Egoda Gamage Abhishek Joshi Jiang Yu Zheng Mihran Tuceryan A Technical Report
More informationGeometric Optics Converging Lenses and Mirrors Physics Lab IV
Objective Geometric Optics Converging Lenses and Mirrors Physics Lab IV In this set of lab exercises, the basic properties geometric optics concerning converging lenses and mirrors will be explored. The
More informationDifferentiation of 3D scanners and their positioning method when applied to pipeline integrity
11th European Conference on Non-Destructive Testing (ECNDT 2014), October 6-10, 2014, Prague, Czech Republic More Info at Open Access Database www.ndt.net/?id=16317 Differentiation of 3D scanners and their
More informationSolving Simultaneous Equations and Matrices
Solving Simultaneous Equations and Matrices The following represents a systematic investigation for the steps used to solve two simultaneous linear equations in two unknowns. The motivation for considering
More informationCS 534: Computer Vision 3D Model-based recognition
CS 534: Computer Vision 3D Model-based recognition Ahmed Elgammal Dept of Computer Science CS 534 3D Model-based Vision - 1 High Level Vision Object Recognition: What it means? Two main recognition tasks:!
More informationTHE problem of visual servoing guiding a robot using
582 IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, VOL. 13, NO. 4, AUGUST 1997 A Modular System for Robust Positioning Using Feedback from Stereo Vision Gregory D. Hager, Member, IEEE Abstract This paper
More informationWhitepaper. Image stabilization improving camera usability
Whitepaper Image stabilization improving camera usability Table of contents 1. Introduction 3 2. Vibration Impact on Video Output 3 3. Image Stabilization Techniques 3 3.1 Optical Image Stabilization 3
More informationProjective Geometry. Projective Geometry
Euclidean versus Euclidean geometry describes sapes as tey are Properties of objects tat are uncanged by rigid motions» Lengts» Angles» Parallelism Projective geometry describes objects as tey appear Lengts,
More informationDistance measuring based on stereoscopic pictures
9th International Ph Workshop on Systems and Control: Young Generation Viewpoint 1. - 3. October 8, Izola, Slovenia istance measuring d on stereoscopic pictures Jernej Mrovlje 1 and amir Vrančić Abstract
More informationHow does the Kinect work? John MacCormick
How does the Kinect work? John MacCormick Xbox demo Laptop demo The Kinect uses structured light and machine learning Inferring body position is a two-stage process: first compute a depth map (using structured
More informationA non-contact optical technique for vehicle tracking along bounded trajectories
Home Search Collections Journals About Contact us My IOPscience A non-contact optical technique for vehicle tracking along bounded trajectories This content has been downloaded from IOPscience. Please
More informationSTUDIES IN ROBOTIC VISION, OPTICAL ILLUSIONS AND NONLINEAR DIFFUSION FILTERING
STUDIES IN ROBOTIC VISION, OPTICAL ILLUSIONS AND NONLINEAR DIFFUSION FILTERING HENRIK MALM Centre for Mathematical Sciences Mathematics Mathematics Centre for Mathematical Sciences Lund University Box
More informationThe Scientific Data Mining Process
Chapter 4 The Scientific Data Mining Process When I use a word, Humpty Dumpty said, in rather a scornful tone, it means just what I choose it to mean neither more nor less. Lewis Carroll [87, p. 214] In
More informationInteractive Segmentation, Tracking, and Kinematic Modeling of Unknown 3D Articulated Objects
Interactive Segmentation, Tracking, and Kinematic Modeling of Unknown 3D Articulated Objects Dov Katz, Moslem Kazemi, J. Andrew Bagnell and Anthony Stentz 1 Abstract We present an interactive perceptual
More informationV-PITS : VIDEO BASED PHONOMICROSURGERY INSTRUMENT TRACKING SYSTEM. Ketan Surender
V-PITS : VIDEO BASED PHONOMICROSURGERY INSTRUMENT TRACKING SYSTEM by Ketan Surender A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science (Electrical Engineering)
More informationPan-Tilt-Zoom Camera Calibration and High-Resolution Mosaic Generation
Pan-Tilt-Zoom Camera Calibration and High-Resolution Mosaic Generation Sudipta N. Sinha and Marc Pollefeys Department of Computer Science, University of North Carolina at Chapel Hill, CB # 3175, Sitterson
More informationACCURACY ASSESSMENT OF BUILDING POINT CLOUDS AUTOMATICALLY GENERATED FROM IPHONE IMAGES
ACCURACY ASSESSMENT OF BUILDING POINT CLOUDS AUTOMATICALLY GENERATED FROM IPHONE IMAGES B. Sirmacek, R. Lindenbergh Delft University of Technology, Department of Geoscience and Remote Sensing, Stevinweg
More informationHigh-accuracy ultrasound target localization for hand-eye calibration between optical tracking systems and three-dimensional ultrasound
High-accuracy ultrasound target localization for hand-eye calibration between optical tracking systems and three-dimensional ultrasound Ralf Bruder 1, Florian Griese 2, Floris Ernst 1, Achim Schweikard
More informationDevelopment of The Next Generation Document Reader -Eye Scanner-
Development of The Next Generation Document Reader -Eye Scanner- Toshiyuki Amano *a, Tsutomu Abe b, Tetsuo Iyoda b, Osamu Nishikawa b and Yukio Sato a a Dept. of Electrical and Computer Engineering, Nagoya
More informationAn Efficient Solution to the Five-Point Relative Pose Problem
An Efficient Solution to the Five-Point Relative Pose Problem David Nistér Sarnoff Corporation CN5300, Princeton, NJ 08530 dnister@sarnoff.com Abstract An efficient algorithmic solution to the classical
More information3D Vehicle Extraction and Tracking from Multiple Viewpoints for Traffic Monitoring by using Probability Fusion Map
Electronic Letters on Computer Vision and Image Analysis 7(2):110-119, 2008 3D Vehicle Extraction and Tracking from Multiple Viewpoints for Traffic Monitoring by using Probability Fusion Map Zhencheng
More informationCALIBRATION OF A PTZ SURVEILLANCE CAMERA USING 3D INDOOR MODEL
CALIBRATION OF A PTZ SURVEILLANCE CAMERA USING D INDOOR MODEL R.A. Persad*, C. Armenakis, G. Sohn Geomatics Engineering, GeoICT Lab Earth and Space Science and Engineering York University 7 Keele St.,
More information521493S Computer Graphics. Exercise 2 & course schedule change
521493S Computer Graphics Exercise 2 & course schedule change Course Schedule Change Lecture from Wednesday 31th of March is moved to Tuesday 30th of March at 16-18 in TS128 Question 2.1 Given two nonparallel,
More informationWHITE PAPER. P-Iris. New iris control improves image quality in megapixel and HDTV network cameras.
WHITE PAPER P-Iris. New iris control improves image quality in megapixel and HDTV network cameras. Table of contents 1. Introduction 3 2. The role of an iris 3 3. Existing lens options 4 4. How P-Iris
More informationFace Model Fitting on Low Resolution Images
Face Model Fitting on Low Resolution Images Xiaoming Liu Peter H. Tu Frederick W. Wheeler Visualization and Computer Vision Lab General Electric Global Research Center Niskayuna, NY, 1239, USA {liux,tu,wheeler}@research.ge.com
More informationT-REDSPEED White paper
T-REDSPEED White paper Index Index...2 Introduction...3 Specifications...4 Innovation...6 Technology added values...7 Introduction T-REDSPEED is an international patent pending technology for traffic violation
More informationAn Iterative Image Registration Technique with an Application to Stereo Vision
An Iterative Image Registration Technique with an Application to Stereo Vision Bruce D. Lucas Takeo Kanade Computer Science Department Carnegie-Mellon University Pittsburgh, Pennsylvania 15213 Abstract
More informationProjection Center Calibration for a Co-located Projector Camera System
Projection Center Calibration for a Co-located Camera System Toshiyuki Amano Department of Computer and Communication Science Faculty of Systems Engineering, Wakayama University Sakaedani 930, Wakayama,
More informationUniversidad de Cantabria Departamento de Tecnología Electrónica, Ingeniería de Sistemas y Automática. Tesis Doctoral
Universidad de Cantabria Departamento de Tecnología Electrónica, Ingeniería de Sistemas y Automática Tesis Doctoral CONTRIBUCIONES AL ALINEAMIENTO DE NUBES DE PUNTOS 3D PARA SU USO EN APLICACIONES DE CAPTURA
More informationStatic Multi-Camera Factorization Using Rigid Motion
Static Multi-Camera Factorization Using Rigid Motion Roland Angst and Marc Pollefeys Computer Vision and Geometry Lab, Department of Computer Science ETH Zürich, Switzerland {rangst, marc.pollefeys}@inf.ethz.ch
More informationA new Optical Tracking System for Virtual and Augmented Reality Applications
IEEE Instrumentation and Measurement Technology Conference Budapest, Hungary, May 21 23, 2001 A new Optical Tracking System for Virtual and Augmented Reality Applications Miguel Ribo VRVis Competence Center
More informationManufacturing Process and Cost Estimation through Process Detection by Applying Image Processing Technique
Manufacturing Process and Cost Estimation through Process Detection by Applying Image Processing Technique Chalakorn Chitsaart, Suchada Rianmora, Noppawat Vongpiyasatit Abstract In order to reduce the
More informationTWO-DIMENSIONAL TRANSFORMATION
CHAPTER 2 TWO-DIMENSIONAL TRANSFORMATION 2.1 Introduction As stated earlier, Computer Aided Design consists of three components, namely, Design (Geometric Modeling), Analysis (FEA, etc), and Visualization
More informationVOLUMNECT - Measuring Volumes with Kinect T M
VOLUMNECT - Measuring Volumes with Kinect T M Beatriz Quintino Ferreira a, Miguel Griné a, Duarte Gameiro a, João Paulo Costeira a,b and Beatriz Sousa Santos c,d a DEEC, Instituto Superior Técnico, Lisboa,
More informationFast Matching of Binary Features
Fast Matching of Binary Features Marius Muja and David G. Lowe Laboratory for Computational Intelligence University of British Columbia, Vancouver, Canada {mariusm,lowe}@cs.ubc.ca Abstract There has been
More informationCOLOR and depth information provide complementary
1 Joint depth and color camera calibration with distortion correction Daniel Herrera C., Juho Kannala, and Janne Heikkilä Abstract We present an algorithm that simultaneously calibrates two color cameras,
More informationTracking of Small Unmanned Aerial Vehicles
Tracking of Small Unmanned Aerial Vehicles Steven Krukowski Adrien Perkins Aeronautics and Astronautics Stanford University Stanford, CA 94305 Email: spk170@stanford.edu Aeronautics and Astronautics Stanford
More informationCamera Technology Guide. Factors to consider when selecting your video surveillance cameras
Camera Technology Guide Factors to consider when selecting your video surveillance cameras Introduction Investing in a video surveillance system is a smart move. You have many assets to protect so you
More informationCS231M Project Report - Automated Real-Time Face Tracking and Blending
CS231M Project Report - Automated Real-Time Face Tracking and Blending Steven Lee, slee2010@stanford.edu June 6, 2015 1 Introduction Summary statement: The goal of this project is to create an Android
More informationReliable automatic calibration of a marker-based position tracking system
Reliable automatic calibration of a marker-based position tracking system David Claus and Andrew W. Fitzgibbon Department of Engineering Science, University of Oxford, Oxford OX1 3BN {dclaus,awf}@robots.ox.ac.uk
More informationAuto Head-Up Displays: View-Through for Drivers
Auto Head-Up Displays: View-Through for Drivers Head-up displays (HUD) are featuring more and more frequently in both current and future generation automobiles. One primary reason for this upward trend
More informationReflection and Refraction
Equipment Reflection and Refraction Acrylic block set, plane-concave-convex universal mirror, cork board, cork board stand, pins, flashlight, protractor, ruler, mirror worksheet, rectangular block worksheet,
More informationSubspace Analysis and Optimization for AAM Based Face Alignment
Subspace Analysis and Optimization for AAM Based Face Alignment Ming Zhao Chun Chen College of Computer Science Zhejiang University Hangzhou, 310027, P.R.China zhaoming1999@zju.edu.cn Stan Z. Li Microsoft
More informationMore Local Structure Information for Make-Model Recognition
More Local Structure Information for Make-Model Recognition David Anthony Torres Dept. of Computer Science The University of California at San Diego La Jolla, CA 9093 Abstract An object classification
More informationFlow Separation for Fast and Robust Stereo Odometry
Flow Separation for Fast and Robust Stereo Odometry Michael Kaess, Kai Ni and Frank Dellaert Abstract Separating sparse flow provides fast and robust stereo visual odometry that deals with nearly degenerate
More informationPHYSIOLOGICALLY-BASED DETECTION OF COMPUTER GENERATED FACES IN VIDEO
PHYSIOLOGICALLY-BASED DETECTION OF COMPUTER GENERATED FACES IN VIDEO V. Conotter, E. Bodnari, G. Boato H. Farid Department of Information Engineering and Computer Science University of Trento, Trento (ITALY)
More informationLimitations of Human Vision. What is computer vision? What is computer vision (cont d)?
What is computer vision? Limitations of Human Vision Slide 1 Computer vision (image understanding) is a discipline that studies how to reconstruct, interpret and understand a 3D scene from its 2D images
More informationHow To Fuse A Point Cloud With A Laser And Image Data From A Pointcloud
REAL TIME 3D FUSION OF IMAGERY AND MOBILE LIDAR Paul Mrstik, Vice President Technology Kresimir Kusevic, R&D Engineer Terrapoint Inc. 140-1 Antares Dr. Ottawa, Ontario K2E 8C4 Canada paul.mrstik@terrapoint.com
More informationHands-free navigation in VR environments by tracking the head. Sing Bing Kang. CRL 97/1 March 1997
TM Hands-free navigation in VR environments by tracking the head Sing Bing Kang CRL 97/ March 997 Cambridge Research Laboratory The Cambridge Research Laboratory was founded in 987 to advance the state
More informationThe Trade-off between Image Resolution and Field of View: the Influence of Lens Selection
The Trade-off between Image Resolution and Field of View: the Influence of Lens Selection I want a lens that can cover the whole parking lot and I want to be able to read a license plate. Sound familiar?
More informationBuilding an Advanced Invariant Real-Time Human Tracking System
UDC 004.41 Building an Advanced Invariant Real-Time Human Tracking System Fayez Idris 1, Mazen Abu_Zaher 2, Rashad J. Rasras 3, and Ibrahiem M. M. El Emary 4 1 School of Informatics and Computing, German-Jordanian
More informationMoveInspect HF HR. 3D measurement of dynamic processes MEASURE THE ADVANTAGE. MoveInspect TECHNOLOGY
MoveInspect HF HR 3D measurement of dynamic processes MEASURE THE ADVANTAGE MoveInspect TECHNOLOGY MoveInspect HF HR 3D measurement of dynamic processes Areas of application In order to sustain its own
More informationTHE EFFECTS OF TEMPERATURE VARIATION ON SINGLE-LENS-REFLEX DIGITAL CAMERA CALIBRATION PARAMETERS
Coission V Symposium, Newcastle upon Tyne, UK. 2010 THE EFFECTS OF TEMPERATURE VARIATION ON SINGLE-LENS-REFLEX DIGITAL CAMERA CALIBRATION PARAMETERS M. J. Smith *, E. Cope Faculty of Engineering, The University
More informationComparison of Film and Video Techniques for Estimating Three-Dimensional Coordinates within a Large Field
TECHNICAL NOTES INTERNATIONAL JOURNAL OF SPORT BIOMECHANICS, 1992,8, 145-151 Comparison of Film and Video Techniques for Estimating Three-Dimensional Coordinates within a Large Field Rosa M. Angulo and
More information