A Flexible New Technique for Camera Calibration


A Flexible New Technique for Camera Calibration

Zhengyou Zhang

December 2, 1998 (updated on December 14, 1998) (updated on March 25, 1999) (updated on Aug. 10, 2002; a typo in Appendix B) (updated on Aug. 13, 2008; a typo in Section 3.3) (last updated on Dec. 5, 2009; a typo in Section 2.4)

Technical Report MSR-TR-98-71

Citation: Z. Zhang, A flexible new technique for camera calibration, IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11):1330-1334, 2000.

Microsoft Research
Microsoft Corporation
One Microsoft Way
Redmond, WA 98052
zhang@microsoft.com
Contents
1 Motivations
2 Basic Equations
  2.1 Notation
  2.2 Homography between the model plane and its image
  2.3 Constraints on the intrinsic parameters
  2.4 Geometric Interpretation
3 Solving Camera Calibration
  3.1 Closed-form solution
  3.2 Maximum likelihood estimation
  3.3 Dealing with radial distortion
  3.4 Summary
4 Degenerate Configurations
5 Experimental Results
  5.1 Computer Simulations
  5.2 Real Data
  5.3 Sensitivity with Respect to Model Imprecision
    5.3.1 Random noise in the model points
    5.3.2 Systematic non-planarity of the model pattern
6 Conclusion
A Estimation of the Homography Between the Model Plane and its Image
B Extraction of the Intrinsic Parameters from Matrix B
C Approximating a 3x3 matrix by a Rotation Matrix
D Camera Calibration Under Known Pure Translation

(Section 2.4 and Appendix D were added on December 14, 1998; Appendix D was corrected, based on comments from Andrew Zisserman, on January 7, 1999. Section 5.3 was added on December 28, 1998, with the results on systematic non-planarity added on March 25, 1999.)
Abstract

We propose a flexible new technique to easily calibrate a camera. It is well suited for use without specialized knowledge of 3D geometry or computer vision. The technique only requires the camera to observe a planar pattern shown at a few (at least two) different orientations. Either the camera or the planar pattern can be freely moved. The motion need not be known. Radial lens distortion is modeled. The proposed procedure consists of a closed-form solution, followed by a nonlinear refinement based on the maximum likelihood criterion. Both computer simulation and real data have been used to test the proposed technique, and very good results have been obtained. Compared with classical techniques which use expensive equipment such as two or three orthogonal planes, the proposed technique is easy to use and flexible. It advances 3D computer vision one step from laboratory environments to real-world use.

Index Terms: Camera calibration, calibration from planes, 2D pattern, absolute conic, projective mapping, lens distortion, closed-form solution, maximum likelihood estimation, flexible setup.

1 Motivations

Camera calibration is a necessary step in 3D computer vision in order to extract metric information from 2D images. Much work has been done, starting in the photogrammetry community (see [2, 4] to cite a few), and more recently in computer vision ([9, 8, 23, 7, 26, 24, 17, 6] to cite a few). We can classify those techniques roughly into two categories: photogrammetric calibration and self-calibration.

Photogrammetric calibration. Camera calibration is performed by observing a calibration object whose geometry in 3D space is known with very good precision. Calibration can be done very efficiently [5]. The calibration object usually consists of two or three planes orthogonal to each other. Sometimes, a plane undergoing a precisely known translation is also used [23].
These approaches require an expensive calibration apparatus and an elaborate setup.

Self-calibration. Techniques in this category do not use any calibration object. Just by moving a camera in a static scene, the rigidity of the scene provides in general two constraints [17, 15] on the camera's internal parameters from one camera displacement by using image information alone. Therefore, if images are taken by the same camera with fixed internal parameters, correspondences between three images are sufficient to recover both the internal and external parameters, which allow us to reconstruct 3D structure up to a similarity [16, 13]. While this approach is very flexible, it is not yet mature [1]. Because there are many parameters to estimate, we cannot always obtain reliable results.

Other techniques exist: vanishing points for orthogonal directions [3, 14], and calibration from pure rotation [11, 21].

Our current research is focused on a desktop vision system (DVS), since the potential for using DVSs is large. Cameras are becoming cheap and ubiquitous. A DVS aims at the general public, who are not experts in computer vision. A typical computer user will perform vision tasks only from time to time, so will not be willing to invest money in expensive equipment. Therefore, flexibility, robustness and low cost are important. The camera calibration technique described in this paper was developed with these considerations in mind.
The proposed technique only requires the camera to observe a planar pattern shown at a few (at least two) different orientations. The pattern can be printed on a laser printer and attached to a reasonably planar surface (e.g., a hard book cover). Either the camera or the planar pattern can be moved by hand. The motion need not be known. The proposed approach lies between photogrammetric calibration and self-calibration, because we use 2D metric information rather than 3D or purely implicit information. Both computer simulation and real data have been used to test the proposed technique, and very good results have been obtained. Compared with classical techniques, the proposed technique is considerably more flexible. Compared with self-calibration, it gains a considerable degree of robustness. We believe the new technique advances 3D computer vision one step from laboratory environments to the real world.

Note that Bill Triggs [22] recently developed a self-calibration technique from at least 5 views of a planar scene. His technique is more flexible than ours, but is difficult to initialize. Liebowitz and Zisserman [14] described a technique of metric rectification for perspective images of planes using metric information such as a known angle, two equal though unknown angles, and a known length ratio. They also mentioned that calibration of the internal camera parameters is possible provided there are at least three such rectified planes, although no experimental results were shown.

The paper is organized as follows. Section 2 describes the basic constraints from observing a single plane. Section 3 describes the calibration procedure. We start with a closed-form solution, followed by nonlinear optimization. Radial lens distortion is also modeled. Section 4 studies configurations in which the proposed calibration technique fails. It is very easy to avoid such situations in practice. Section 5 provides the experimental results.
Both computer simulation and real data are used to validate the proposed technique. In the Appendix, we provide a number of details, including the techniques for estimating the homography between the model plane and its image.

2 Basic Equations

We examine the constraints on the camera's intrinsic parameters provided by observing a single plane. We start with the notation used in this paper.

2.1 Notation

A 2D point is denoted by m = [u, v]^T. A 3D point is denoted by M = [X, Y, Z]^T. We use x~ to denote the augmented vector obtained by adding 1 as the last element: m~ = [u, v, 1]^T and M~ = [X, Y, Z, 1]^T. A camera is modeled by the usual pinhole: the relationship between a 3D point M and its image projection m is given by

    s m~ = A [R  t] M~,    (1)

where s is an arbitrary scale factor, (R, t), called the extrinsic parameters, is the rotation and translation which relates the world coordinate system to the camera coordinate system, and A, called the camera intrinsic matrix, is given by

    A = [alpha  gamma  u0]
        [  0     beta  v0]
        [  0      0     1]

with (u0, v0) the coordinates of the principal point, alpha and beta the scale factors in the image u and v axes, and gamma the parameter describing the skewness of the two image axes. We use the abbreviation A^-T for (A^-1)^T or (A^T)^-1.
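As a minimal numerical sketch of the pinhole equation (1) above (an illustration, not the paper's code; the intrinsic values below are arbitrary):

```python
import numpy as np

def project(A, R, t, M):
    """Project a 3D point M through the pinhole model s*m~ = A [R|t] M~."""
    M_h = np.append(M, 1.0)                          # augmented vector M~
    x = A @ np.hstack([R, t.reshape(3, 1)]) @ M_h
    return x[:2] / x[2]                              # divide out the scale s

# Arbitrary example intrinsics: alpha, beta, skew gamma = 0, principal point (u0, v0)
A = np.array([[1250.0,   0.0, 255.0],
              [   0.0, 900.0, 255.0],
              [   0.0,   0.0,   1.0]])
R = np.eye(3)                                        # camera aligned with the world
t = np.array([0.0, 0.0, 10.0])                       # scene 10 units in front of the camera
print(project(A, R, t, np.array([0.0, 0.0, 0.0])))   # point on the optical axis -> principal point
```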
2.2 Homography between the model plane and its image

Without loss of generality, we assume the model plane is on Z = 0 of the world coordinate system. Let's denote the i-th column of the rotation matrix R by r_i. From (1), we have

    s [u]   = A [r1 r2 r3 t] [X]   = A [r1 r2 t] [X]
      [v]                    [Y]                 [Y]
      [1]                    [0]                 [1]
                             [1]

By abuse of notation, we still use M to denote a point on the model plane, but M = [X, Y]^T since Z is always equal to 0. In turn, M~ = [X, Y, 1]^T. Therefore, a model point M and its image m are related by a homography H:

    s m~ = H M~   with   H = A [r1 r2 t].    (2)

As is clear, the 3x3 matrix H is defined up to a scale factor.

2.3 Constraints on the intrinsic parameters

Given an image of the model plane, a homography can be estimated (see Appendix A). Let's denote it by H = [h1 h2 h3]. From (2), we have

    [h1 h2 h3] = lambda A [r1 r2 t],

where lambda is an arbitrary scalar. Using the knowledge that r1 and r2 are orthonormal, we have

    h1^T A^-T A^-1 h2 = 0                      (3)
    h1^T A^-T A^-1 h1 = h2^T A^-T A^-1 h2.     (4)

These are the two basic constraints on the intrinsic parameters, given one homography. Because a homography has 8 degrees of freedom and there are 6 extrinsic parameters (3 for rotation and 3 for translation), we can only obtain 2 constraints on the intrinsic parameters. Note that A^-T A^-1 actually describes the image of the absolute conic [16]. In the next subsection, we give a geometric interpretation.

2.4 Geometric Interpretation

We now relate (3) and (4) to the absolute conic. It is not difficult to verify that the model plane, under our convention, is described in the camera coordinate system by the following equation:

    [  r3   ]^T [x]
    [r3^T t]    [y] = 0,
                [z]
                [w]

where w = 0 for points at infinity and w = 1 otherwise. This plane intersects the plane at infinity at a line, and we can easily see that [r1; 0] and [r2; 0] are two particular points on that line. Any point on it
is a linear combination of these two points, i.e.,

    x_inf = a [r1; 0] + b [r2; 0] = [a r1 + b r2; 0].

Now, let's compute the intersection of the above line with the absolute conic. By definition, the point x_inf, known as the circular point, satisfies x_inf^T x_inf = 0, i.e., (a r1 + b r2)^T (a r1 + b r2) = 0, or a^2 + b^2 = 0. The solution is b = +/- a i, where i^2 = -1. That is, the two intersection points are

    x_inf = a [r1 +/- i r2; 0].

Their projection in the image plane is then given, up to a scale factor, by

    m~_inf = A (r1 +/- i r2) = h1 +/- i h2.

Point m~_inf is on the image of the absolute conic, described by A^-T A^-1 [16]. This gives

    (h1 +/- i h2)^T A^-T A^-1 (h1 +/- i h2) = 0.

Requiring that both the real and imaginary parts be zero yields (3) and (4).

3 Solving Camera Calibration

This section provides the details of how to effectively solve the camera calibration problem. We start with an analytical solution, followed by a nonlinear optimization technique based on the maximum likelihood criterion. Finally, we take into account lens distortion, giving both analytical and nonlinear solutions.

3.1 Closed-form solution

Let

    B = A^-T A^-1 = [B11 B12 B13]
                    [B12 B22 B23]
                    [B13 B23 B33]

      = [ 1/alpha^2                        -gamma/(alpha^2 beta)                                   (v0 gamma - u0 beta)/(alpha^2 beta)                              ]
        [-gamma/(alpha^2 beta)             gamma^2/(alpha^2 beta^2) + 1/beta^2                    -gamma(v0 gamma - u0 beta)/(alpha^2 beta^2) - v0/beta^2          ]
        [(v0 gamma - u0 beta)/(alpha^2 beta)  -gamma(v0 gamma - u0 beta)/(alpha^2 beta^2) - v0/beta^2  (v0 gamma - u0 beta)^2/(alpha^2 beta^2) + v0^2/beta^2 + 1  ]    (5)

Note that B is symmetric, defined by the 6D vector

    b = [B11, B12, B22, B13, B23, B33]^T.    (6)

Let the i-th column vector of H be h_i = [h_i1, h_i2, h_i3]^T. Then, we have

    h_i^T B h_j = v_ij^T b    (7)
with

    v_ij = [h_i1 h_j1, h_i1 h_j2 + h_i2 h_j1, h_i2 h_j2, h_i3 h_j1 + h_i1 h_j3, h_i3 h_j2 + h_i2 h_j3, h_i3 h_j3]^T.

Therefore, the two fundamental constraints (3) and (4), from a given homography, can be rewritten as 2 homogeneous equations in b:

    [     v_12^T      ] b = 0.    (8)
    [(v_11 - v_22)^T]

If n images of the model plane are observed, by stacking n such equations as (8) we have

    V b = 0,    (9)

where V is a 2n x 6 matrix. If n >= 3, we will in general have a unique solution b defined up to a scale factor. If n = 2, we can impose the skewless constraint gamma = 0, i.e., [0, 1, 0, 0, 0, 0] b = 0, which is added as an additional equation to (9). (If n = 1, we can only solve two camera intrinsic parameters, e.g., alpha and beta, assuming u0 and v0 are known (e.g., at the image center) and gamma = 0, and that is indeed what we did in [19] for head pose determination based on the fact that eyes and mouth are reasonably coplanar.) The solution to (9) is well known as the eigenvector of V^T V associated with the smallest eigenvalue (equivalently, the right singular vector of V associated with the smallest singular value).

Once b is estimated, we can compute the camera intrinsic matrix A. See Appendix B for the details. Once A is known, the extrinsic parameters for each image are readily computed. From (2), we have

    r1 = lambda A^-1 h1
    r2 = lambda A^-1 h2
    r3 = r1 x r2
    t  = lambda A^-1 h3

with lambda = 1/||A^-1 h1|| = 1/||A^-1 h2||. Of course, because of noise in the data, the so-computed matrix R = [r1, r2, r3] does not in general satisfy the properties of a rotation matrix. Appendix C describes a method to estimate the best rotation matrix from a general 3x3 matrix.
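The closed-form pipeline above, equations (7)-(9) plus the extraction formulas of Appendix B, can be sketched as follows (a minimal illustration, not the paper's implementation; function names are our own):

```python
import numpy as np

def v_ij(H, i, j):
    """Vector v_ij of Eq. (7), so that h_i^T B h_j = v_ij^T b, with h_i the columns of H."""
    h = H.T  # h[i] is the i-th column of H
    return np.array([h[i][0]*h[j][0],
                     h[i][0]*h[j][1] + h[i][1]*h[j][0],
                     h[i][1]*h[j][1],
                     h[i][2]*h[j][0] + h[i][0]*h[j][2],
                     h[i][2]*h[j][1] + h[i][1]*h[j][2],
                     h[i][2]*h[j][2]])

def intrinsics_from_homographies(Hs):
    """Closed-form solution: stack Eq. (8) for each homography and solve Vb = 0."""
    V = np.vstack([row for H in Hs for row in (v_ij(H, 0, 1),
                                               v_ij(H, 0, 0) - v_ij(H, 1, 1))])
    b = np.linalg.svd(V)[2][-1]       # right singular vector of the smallest singular value
    B11, B12, B22, B13, B23, B33 = b
    # Extraction formulas of Appendix B (scale-sign invariant, see the appendix)
    v0 = (B12*B13 - B11*B23) / (B11*B22 - B12**2)
    lam = B33 - (B13**2 + v0*(B12*B13 - B11*B23)) / B11
    alpha = np.sqrt(lam / B11)
    beta = np.sqrt(lam*B11 / (B11*B22 - B12**2))
    gamma = -B12 * alpha**2 * beta / lam
    u0 = gamma*v0/beta - B13*alpha**2/lam
    return np.array([[alpha, gamma, u0], [0.0, beta, v0], [0.0, 0.0, 1.0]])
```

With noise-free homographies built as H = A [r1 r2 t], the routine recovers A exactly up to numerical precision; with real data it provides the initial guess for the nonlinear refinement.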
The maximum likelihood estimate can be obtained by minimizing the following functional:

    sum_{i=1}^{n} sum_{j=1}^{m} ||m_ij - m^(A, R_i, t_i, M_j)||^2,    (10)

where m^(A, R_i, t_i, M_j) is the projection of point M_j in image i, according to equation (2). A rotation R is parameterized by a vector of 3 parameters, denoted by r, which is parallel to the rotation axis and whose magnitude is equal to the rotation angle. R and r are related by the Rodrigues formula [5]. Minimizing (10) is a nonlinear minimization problem, which is solved with the Levenberg-Marquardt algorithm as implemented in Minpack [18]. It requires an initial guess of A, {R_i, t_i | i = 1..n}, which can be obtained using the technique described in the previous subsection.
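The axis-angle parameterization used in the refinement can be sketched as follows (a minimal illustration of the Rodrigues formula; the minimization itself uses the Minpack Levenberg-Marquardt implementation, which is not reproduced here):

```python
import numpy as np

def rodrigues(r):
    """Rotation matrix from an axis-angle 3-vector r (Rodrigues formula)."""
    r = np.asarray(r, dtype=float)
    theta = np.linalg.norm(r)          # rotation angle = magnitude of r
    if theta < 1e-12:
        return np.eye(3)
    k = r / theta                      # unit rotation axis
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])   # cross-product (skew-symmetric) matrix
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
```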
3.3 Dealing with radial distortion

Up to now, we have not considered the lens distortion of a camera. However, a desktop camera usually exhibits significant lens distortion, especially radial distortion. In this section, we only consider the first two terms of radial distortion. The reader is referred to [2, 20, 4, 26] for more elaborate models. Based on the reports in the literature [2, 23, 25], the distortion function is likely to be totally dominated by the radial components, and especially by the first term. It has also been found that any more elaborate modeling not only would not help (it is negligible when compared with sensor quantization), but would also cause numerical instability [23, 25].

Let (u, v) be the ideal (nonobservable, distortion-free) pixel image coordinates, and (u', v') the corresponding real observed image coordinates. The ideal points are the projection of the model points according to the pinhole model. Similarly, (x, y) and (x', y') are the ideal (distortion-free) and real (distorted) normalized image coordinates. We have [2, 25]

    x' = x + x [k1 (x^2 + y^2) + k2 (x^2 + y^2)^2]
    y' = y + y [k1 (x^2 + y^2) + k2 (x^2 + y^2)^2],

where k1 and k2 are the coefficients of the radial distortion. The center of the radial distortion is the same as the principal point. From u' = u0 + alpha x' + gamma y' and v' = v0 + beta y', and assuming gamma = 0, we have

    u' = u + (u - u0) [k1 (x^2 + y^2) + k2 (x^2 + y^2)^2]    (11)
    v' = v + (v - v0) [k1 (x^2 + y^2) + k2 (x^2 + y^2)^2].   (12)

Estimating Radial Distortion by Alternation. As the radial distortion is expected to be small, one would expect to estimate the other five intrinsic parameters, using the technique described in Sect. 3.2, reasonably well by simply ignoring distortion. One strategy is then to estimate k1 and k2 after having estimated the other parameters, which gives us the ideal pixel coordinates (u, v).
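Equations (11)-(12) in code form (a minimal sketch; `(u, v)` are the ideal pixel coordinates, `(x, y)` the corresponding normalized coordinates):

```python
def distort(u, v, x, y, u0, v0, k1, k2):
    """Apply the two-term radial distortion model of Eqs. (11)-(12)
    to an ideal pixel point (u, v) with normalized coordinates (x, y)."""
    r2 = x * x + y * y
    f = k1 * r2 + k2 * r2 * r2           # radial distortion factor
    return u + (u - u0) * f, v + (v - v0) * f
```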
Then, from (11) and (12), we have two equations for each point in each image:

    [(u - u0)(x^2 + y^2)   (u - u0)(x^2 + y^2)^2] [k1]   [u' - u]
    [(v - v0)(x^2 + y^2)   (v - v0)(x^2 + y^2)^2] [k2] = [v' - v].

Given m points in n images, we can stack all the equations together to obtain in total 2mn equations, or in matrix form Dk = d, where k = [k1, k2]^T. The linear least-squares solution is given by

    k = (D^T D)^-1 D^T d.    (13)

Once k1 and k2 are estimated, one can refine the estimate of the other parameters by solving (10) with m^(A, R_i, t_i, M_j) replaced by (11) and (12). We can alternate these two procedures until convergence.

Complete Maximum Likelihood Estimation. Experimentally, we found the convergence of the above alternation technique to be slow. A natural extension of (10) is then to estimate the complete set of parameters by minimizing the following functional:

    sum_{i=1}^{n} sum_{j=1}^{m} ||m_ij - m'(A, k1, k2, R_i, t_i, M_j)||^2,    (14)

(A typo was reported by Johannes Koester via email on Aug. 13, 2008.)
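The linear least-squares step (13) can be sketched as follows (a minimal illustration with our own argument names: `ideal` are the distortion-free projections, `observed` the detected points, both in pixel coordinates, and `xy_norm` the corresponding normalized coordinates):

```python
import numpy as np

def estimate_k(ideal, observed, xy_norm, u0, v0):
    """Stack Dk = d from Eqs. (11)-(12) and solve Eq. (13) for k = [k1, k2]."""
    D, d = [], []
    for (u, v), (ub, vb), (x, y) in zip(ideal, observed, xy_norm):
        r2 = x * x + y * y
        D.append([(u - u0) * r2, (u - u0) * r2**2]); d.append(ub - u)
        D.append([(v - v0) * r2, (v - v0) * r2**2]); d.append(vb - v)
    D, d = np.asarray(D), np.asarray(d)
    return np.linalg.lstsq(D, d, rcond=None)[0]   # k = (D^T D)^-1 D^T d
```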
9 where m(a, k1, k2, R i, t i, M j ) is the projection of point M j in image i according to equation (2), followed by distortion according to (11) and (12). This is a nonlinear minimization problem, which is solved with the LevenbergMarquardt Algorithm as implemented in Minpack [18]. A rotation is again parameterized by a 3vector r, as in Sect An initial guess of A and {R i, t i i = 1..n} can be obtained using the technique described in Sect. 3.1 or in Sect An initial guess of k 1 and k 2 can be obtained with the technique described in the last paragraph, or simply by setting them to. 3.4 Summary The recommended calibration procedure is as follows: 1. Print a pattern and attach it to a planar surface; 2. Take a few images of the model plane under different orientations by moving either the plane or the camera; 3. Detect the feature points in the images; 4. Estimate the five intrinsic parameters and all the extrinsic parameters using the closedform solution as described in Sect. 3.1; 5. Estimate the coefficients of the radial distortion by solving the linear leastsquares (13); 6. Refine all parameters by minimizing (14). 4 Degenerate Configurations We study in this section configurations in which additional images do not provide more constraints on the camera intrinsic parameters. Because (3) and (4) are derived from the properties of the rotation matrix, if R 2 is not independent of R 1, then image 2 does not provide additional constraints. In particular, if a plane undergoes a pure translation, then R 2 = R 1 and image 2 is not helpful for camera calibration. In the following, we consider a more complex configuration. Proposition 1. If the model plane at the second position is parallel to its first position, then the second homography does not provide additional constraints. Proof. Under our convention, R 2 and R 1 are related by a rotation around zaxis. That is, cos θ sin θ R 1 sin θ cos θ = R 2, 1 where θ is the angle of the relative rotation. 
We will use superscripts (1) and (2) to denote vectors related to image 1 and image 2, respectively. It is clear that we have

    h1^(2) = lambda^(2) (A r1^(1) cos theta + A r2^(1) sin theta) = (lambda^(2)/lambda^(1)) (h1^(1) cos theta + h2^(1) sin theta)
    h2^(2) = lambda^(2) (-A r1^(1) sin theta + A r2^(1) cos theta) = (lambda^(2)/lambda^(1)) (-h1^(1) sin theta + h2^(1) cos theta).

Then, the first constraint (3) from image 2 becomes:

    h1^(2)T A^-T A^-1 h2^(2)
      = (lambda^(2)/lambda^(1))^2 [cos theta sin theta (h2^(1)T A^-T A^-1 h2^(1) - h1^(1)T A^-T A^-1 h1^(1))
                                   + (cos^2 theta - sin^2 theta) (h1^(1)T A^-T A^-1 h2^(1))],
which is a linear combination of the two constraints provided by H1. Similarly, we can show that the second constraint from image 2 is also a linear combination of the two constraints provided by H1. Therefore, we do not gain any constraints from H2.

The result is self-evident because parallel planes intersect the plane at infinity at the same circular points, and thus according to Sect. 2.4 they provide the same constraints.

In practice, it is very easy to avoid the degenerate configuration: we only need to change the orientation of the model plane from one snapshot to another. Although the proposed technique will not work if the model plane undergoes pure translation, camera calibration is still possible if the translation is known. Please refer to Appendix D.

5 Experimental Results

The proposed algorithm has been tested on both computer simulated data and real data. The closed-form solution involves finding a singular value decomposition of a small 2n x 6 matrix, where n is the number of images. The nonlinear refinement with the Levenberg-Marquardt algorithm takes 3 to 5 iterations to converge.

5.1 Computer Simulations

The simulated camera has the following properties: alpha = 1250, beta = 900, gamma = 1.09083 (equivalent to a skew angle of 89.95 degrees), u0 = 255, v0 = 255. The image resolution is 512 x 512. The model plane is a checker pattern containing 10 x 14 = 140 corner points (so we usually have more data in the v direction than in the u direction). The size of the pattern is 18cm x 25cm. The orientation of the plane is represented by a 3D vector r, which is parallel to the rotation axis and whose magnitude is equal to the rotation angle. Its position is represented by a 3D vector t (unit in centimeters).

Performance w.r.t. the noise level. In this experiment, we use three planes with r1 = [20 deg, 0, 0]^T, t1 = [-9, -12.5, 500]^T, r2 = [0, 20 deg, 0]^T, t2 = [-9, -12.5, 510]^T, r3 = (1/sqrt(5)) [-30 deg, -30 deg, -15 deg]^T, t3 = [-10.5, -12.5, 525]^T. Gaussian noise with 0 mean and sigma standard deviation is added to the projected image points.
The estimated camera parameters are then compared with the ground truth. We measure the relative error for alpha and beta, and the absolute error for u0 and v0. We vary the noise level from 0.1 pixels to 1.5 pixels. For each noise level, we perform 100 independent trials, and the results shown are the average. As we can see from Fig. 1, errors increase linearly with the noise level. (The error for gamma is not shown, but has the same property.) For sigma = 0.5 (which is larger than the normal noise in practical calibration), the errors in alpha and beta are less than 0.3%, and the errors in u0 and v0 are around 1 pixel. The error in u0 is larger than that in v0. The main reason is that there are fewer data in the u direction than in the v direction, as we said before.

Performance w.r.t. the number of planes. This experiment investigates the performance with respect to the number of planes (more precisely, the number of images of the model plane). The orientation and position of the model plane for the first three images are the same as in the last subsection. From the fourth image on, we first randomly choose a rotation axis in a uniform sphere, then apply a rotation angle of 30 degrees. We vary the number of images from 2 to 16. For each number, 100 trials of independent plane orientations (except for the first three) and independent noise with mean 0 and
[Figure 1: Errors vs. the noise level of the image points (relative error for alpha and beta; absolute error for u0 and v0).]

[Figure 2: Errors vs. the number of images of the model plane (relative error for alpha and beta; absolute error for u0 and v0).]

standard deviation 0.5 pixels are conducted. The average result is shown in Fig. 2. The errors decrease when more images are used. From 2 to 3, the errors decrease significantly.

Performance w.r.t. the orientation of the model plane. This experiment examines the influence of the orientation of the model plane with respect to the image plane. Three images are used. The orientation of the plane is chosen as follows: the plane is initially parallel to the image plane; a rotation axis is randomly chosen from a uniform sphere; the plane is then rotated around that axis by angle theta. Gaussian noise with mean 0 and standard deviation 0.5 pixels is added to the projected image points. We repeat this process 100 times and compute the average errors. The angle theta varies from 5 to 75 degrees, and the result is shown in Fig. 3. When theta = 5 degrees, 40% of the trials failed because the planes are almost parallel to each other (degenerate configuration), and the result shown excludes those trials. Best performance seems to be achieved with an angle around 45 degrees. Note that in practice, when the angle increases, foreshortening makes the corner detection less precise, but this is not considered in this experiment.

5.2 Real Data

The proposed technique is now routinely used in our vision group and also in the graphics group at Microsoft Research. Here, we provide the result with one example. The camera to be calibrated is an off-the-shelf PULNiX CCD camera with a 6 mm lens. The image resolution is 640 x 480. The model plane contains a pattern of 8 x 8 squares, so there are 256 corners. The size of the pattern is 17cm x 17cm. It was printed with a high-quality printer and put on a glass surface.
[Figure 3: Errors vs. the angle of the model plane w.r.t. the image plane (relative error for alpha and beta; absolute error for u0 and v0).]

[Table 1: Results with real data of 2 through 5 images. For each configuration (2, 3, 4 or 5 images), the initial (closed-form) estimate, the final (MLE) estimate, and its standard deviation sigma are given for alpha, beta, gamma, u0, v0, k1, k2, and the RMS reprojection error.]

Five images of the plane under different orientations were taken, as shown in Fig. 4. We can observe significant lens distortion in the images. The corners were detected as the intersection of straight lines fitted to each square. We applied our calibration algorithm to the first 2, 3, 4 and all 5 images. The results are shown in Table 1. For each configuration, three columns are given. The first column (initial) is the estimation from the closed-form solution. The second column (final) is the maximum likelihood estimate (MLE), and the third column (sigma) is the estimated standard deviation, representing the uncertainty of the final result. As is clear, the closed-form solution is reasonable, and the final estimates are very consistent with each other whether we use 2, 3, 4 or 5 images. We also note that the uncertainty of the final estimate decreases with the number of images. The last row of Table 1, indicated by RMS, displays the root of the mean squared distances, in pixels, between detected image points and projected ones. The MLE improves this measure considerably.

The careful reader may remark on the inconsistency for k1 and k2 between the closed-form solution and the MLE. The reason is that for the closed-form solution, the camera intrinsic parameters are estimated assuming no distortion, and the predicted outer points lie closer to the image center than the detected ones.
The subsequent distortion estimation tries to spread the outer points and increase the scale in order to reduce the distances, although the distortion shape (with positive k1, called pincushion distortion) does not correspond to the real distortion (with negative k1, called barrel distortion). The nonlinear refinement (MLE) finally recovers the correct distortion shape. The estimated distortion parameters allow us to correct the distortion in the original images. Figure 5 displays the first two such distortion-corrected images, which should be compared with the first two images shown in Figure 4. We see clearly that the curved pattern in the original images is straightened.
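Correcting an observed point requires inverting the distortion model (11)-(12). A simple fixed-point iteration, a common approach that the paper does not spell out, works well for small k1 and k2:

```python
def undistort_normalized(xb, yb, k1, k2, iters=20):
    """Invert the radial model x' = x(1 + k1 r^2 + k2 r^4) on normalized
    coordinates by fixed-point iteration (assumes small distortion)."""
    x, y = xb, yb                       # initial guess: the distorted point itself
    for _ in range(iters):
        r2 = x * x + y * y
        f = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xb / f, yb / f           # refine using the current radius estimate
    return x, y
```

Mapping every pixel of an output image through the forward model (and sampling the input image there) avoids this inversion, which is why image warping is usually done in that direction.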
[Figure 8: Sensitivity of camera calibration with respect to Gaussian noise in the model points (relative errors for alpha, beta and k1; absolute errors for u0 and v0; horizontal axis: noise level in the model points (%)).]

[Figure 9: Sensitivity of camera calibration with respect to systematic spherical non-planarity (relative errors for alpha, beta and the aspect ratio; absolute errors for u0 and v0; horizontal axis: systematic error in the model (%)).]

shown in Table 1) were calculated, and are depicted in Fig. 8. Obviously, all errors increase with the level of noise added to the model points. The pixel scale factors (alpha and beta) remain very stable: the error is less than 0.2%. The coordinates of the principal point are quite stable: the errors are about 2 pixels for a noise level of 15%. The estimated radial distortion coefficient k1 becomes less useful, and the second term k2 (not shown) is even less useful than k1. In our current formulation, we assume that the exact position of the points in the model plane is known. If the model points are only known to a certain precision, we can reformulate the problem, and we could expect smaller errors than reported here.

5.3.2 Systematic non-planarity of the model pattern

In this section, we consider systematic non-planarity of the model pattern, e.g., when a printed pattern is attached to a soft book cover. We used the same configuration as in Sect. 5.3.1. The model plane
[Figure 10: Sensitivity of camera calibration with respect to systematic cylindrical non-planarity (relative errors for alpha, beta and the aspect ratio; absolute errors for u0 and v0; horizontal axis: systematic error in the model (%)).]

was distorted in two systematic ways to simulate the non-planarity: spherical and cylindrical. With spherical distortion, points away from the center of the pattern are displaced in z according to z = p sqrt(x^2 + y^2), where p indicates the non-planarity (the model points are coplanar when p = 0). The displacement is symmetric around the center. With cylindrical distortion, points are displaced in z according to z = px. Again, p indicates the non-planarity. This simulates bending of the model pattern around the vertical axis. Four images of the model pattern were used: the first is parallel to the image plane; the second is rotated from the first around the horizontal axis by 30 degrees; the third is rotated from the first around the vertical axis by 30 degrees; the fourth is rotated from the first around the diagonal axis by 30 degrees. Although the model points are not coplanar, they were treated as coplanar, and the proposed calibration technique was applied. Gaussian noise with standard deviation 0.5 pixels was added to the image points, and 100 independent trials were conducted. The average calibration errors of the 100 trials are shown in Fig. 9 for spherical non-planarity and in Fig. 10 for cylindrical non-planarity. The horizontal axis indicates the increase in non-planarity, which is measured as the ratio of the maximum z displacement to the size of the pattern. Therefore, 10% of non-planarity is equivalent to a maximum of 2.5cm of displacement in z, which is not likely to happen in practice.
Several observations can be made:

- Systematic non-planarity of the model has more effect on the calibration precision than random errors in the positions, as described in the last subsection;
- The aspect ratio is very stable (0.4% of error for 10% of non-planarity);
- Systematic cylindrical non-planarity is worse than systematic spherical non-planarity, especially for the coordinates of the principal point (u0, v0). The reason is that cylindrical non-planarity is only symmetric in one axis. That is also why the error in u0 is much larger than in v0 in our simulation;
- The result still seems usable in practice if there is only a few percent (say, less than 3%) of systematic non-planarity.

The error in (u0, v0) has been found by many researchers to have little effect in 3D reconstruction. As pointed out by Triggs in [22], the absolute error in (u0, v0) is not geometrically meaningful. He proposes to measure the relative error with respect to the focal length, i.e., delta u0 / alpha and delta v0 / alpha. This is equivalent to measuring the angle between the true optical axis and the estimated one. Then, for 10% of cylindrical non-planarity (see Fig. 10), the relative error for u0 is 7.6%, comparable with those of alpha and beta.
6 Conclusion

In this paper, we have developed a flexible new technique to easily calibrate a camera. The technique only requires the camera to observe a planar pattern from a few (at least two) different orientations. We can move either the camera or the planar pattern. The motion does not need to be known. Radial lens distortion is modeled. The proposed procedure consists of a closed-form solution, followed by a nonlinear refinement based on the maximum likelihood criterion. Both computer simulation and real data have been used to test the proposed technique, and very good results have been obtained. Compared with classical techniques which use expensive equipment such as two or three orthogonal planes, the proposed technique gains considerable flexibility.

Acknowledgment

Thanks go to Brian Guenter for his software for corner extraction and for many discussions, and to Bill Triggs for insightful comments. Thanks go to Andrew Zisserman for bringing his CVPR98 work [14] to my attention, which uses the same constraint but in a different form, and for pointing out an error in my discussion on the case of pure translation. Thanks go to Bill Triggs and Gideon Stein for suggesting the experiments described in Sect. 5.3. Thanks also go to the members of the Vision Group at MSR for encouragement and discussions. Anandan and Charles Loop have checked the English.

A Estimation of the Homography Between the Model Plane and its Image

There are many ways to estimate the homography between the model plane and its image. Here, we present a technique based on the maximum likelihood criterion. Let M_i and m_i be the model and image points, respectively. Ideally, they should satisfy (2). In practice, they don't, because of noise in the extracted image points. Let's assume that m_i is corrupted by Gaussian noise with mean 0 and covariance matrix Lambda_mi.
Then, the maximum likelihood estimation of H is obtained by minimizing the following functional:

    ∑ᵢ (mᵢ − m̂ᵢ)ᵀ Λmᵢ⁻¹ (mᵢ − m̂ᵢ),

where

    m̂ᵢ = (1 / (h₃ᵀMᵢ)) [h₁ᵀMᵢ, h₂ᵀMᵢ]ᵀ

with hᵢ the i-th row of H. In practice, we simply assume Λmᵢ = σ²I for all i. This is reasonable if points are extracted independently with the same procedure. In this case, the above problem becomes a nonlinear least-squares one, i.e., min_H ∑ᵢ ‖mᵢ − m̂ᵢ‖². The nonlinear minimization is conducted with the Levenberg-Marquardt algorithm as implemented in Minpack [18]. This requires an initial guess, which can be obtained as follows. Let x = [h₁ᵀ, h₂ᵀ, h₃ᵀ]ᵀ. Then equation (2) can be rewritten as

    [ M̃ᵀ   0ᵀ   −u M̃ᵀ ]
    [ 0ᵀ   M̃ᵀ   −v M̃ᵀ ] x = 0.

When we are given n points, we have n of the above equations, which can be written as the matrix equation Lx = 0, where L is a 2n × 9 matrix. As x is defined up to a scale factor, the solution is well known
to be the right singular vector of L associated with the smallest singular value (or, equivalently, the eigenvector of LᵀL associated with the smallest eigenvalue). In L, some elements are constant 1, some are in pixels, some are in world coordinates, and some are products of both. This makes L poorly conditioned numerically. Much better results can be obtained by performing a simple data normalization, such as the one proposed in [12], prior to running the above procedure.

B Extraction of the Intrinsic Parameters from Matrix B

Matrix B, as described in Sect. 3.1, is estimated up to a scale factor, i.e., B = λA⁻ᵀA⁻¹ with λ an arbitrary scale. Without difficulty, we can uniquely extract the intrinsic parameters from matrix B:

    v₀ = (B₁₂B₁₃ − B₁₁B₂₃) / (B₁₁B₂₂ − B₁₂²)
    λ = B₃₃ − [B₁₃² + v₀(B₁₂B₁₃ − B₁₁B₂₃)] / B₁₁
    α = √(λ/B₁₁)
    β = √(λB₁₁ / (B₁₁B₂₂ − B₁₂²))
    γ = −B₁₂α²β / λ
    u₀ = γv₀/β − B₁₃α²/λ.

(A typo in the formula for u₀ was reported by Jiyong Ma via e-mail on April 18, 2002.)

C Approximating a 3×3 Matrix by a Rotation Matrix

The problem considered in this section is to solve for the best rotation matrix R to approximate a given 3×3 matrix Q. Here, "best" is in the sense of the smallest Frobenius norm of the difference R − Q. That is, we are solving the following problem:

    min_R ‖R − Q‖²_F subject to RᵀR = I.  (15)

Since

    ‖R − Q‖²_F = trace((R − Q)ᵀ(R − Q)) = 3 + trace(QᵀQ) − 2 trace(RᵀQ),

problem (15) is equivalent to maximizing trace(RᵀQ). Let the singular value decomposition of Q be USVᵀ, where S = diag(σ₁, σ₂, σ₃). If we define an orthogonal matrix Z by Z = VᵀRᵀU, then

    trace(RᵀQ) = trace(RᵀUSVᵀ) = trace(VᵀRᵀUS) = trace(ZS) = ∑ᵢ₌₁³ zᵢᵢσᵢ ≤ ∑ᵢ₌₁³ σᵢ.

It is clear that the maximum is achieved by setting R = UVᵀ, because then Z = I. This gives the solution to (15). An excellent reference on matrix computations is the one by Golub and van Loan [10].
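The closed-form building blocks in Appendices A, B, and C can be sketched in NumPy. This is an illustrative reimplementation, not the author's code; the function names are mine, and the Levenberg-Marquardt refinement of Appendix A is omitted.

```python
import numpy as np

def estimate_homography_dlt(model_pts, image_pts):
    """Initial homography estimate (Appendix A): solve L x = 0 by SVD,
    with the data normalization of Hartley [12] for conditioning."""
    def normalizing_transform(pts):
        # Move the centroid to the origin; scale mean distance to sqrt(2).
        c = pts.mean(axis=0)
        s = np.sqrt(2.0) / np.linalg.norm(pts - c, axis=1).mean()
        return np.array([[s, 0.0, -s * c[0]],
                         [0.0, s, -s * c[1]],
                         [0.0, 0.0, 1.0]])

    Tm = normalizing_transform(model_pts)
    Ti = normalizing_transform(image_pts)
    rows = []
    for (X, Y), (u, v) in zip(model_pts, image_pts):
        M = Tm @ np.array([X, Y, 1.0])           # normalized model point
        un, vn, _ = Ti @ np.array([u, v, 1.0])   # normalized image point
        rows.append(np.hstack([M, np.zeros(3), -un * M]))
        rows.append(np.hstack([np.zeros(3), M, -vn * M]))
    # Right singular vector of L for the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    Hn = Vt[-1].reshape(3, 3)
    H = np.linalg.inv(Ti) @ Hn @ Tm              # undo the normalization
    return H / H[2, 2]

def intrinsics_from_B(B):
    """Appendix B: extract A from B = lambda * A^{-T} A^{-1} (any scale)."""
    v0 = (B[0,1]*B[0,2] - B[0,0]*B[1,2]) / (B[0,0]*B[1,1] - B[0,1]**2)
    lam = B[2,2] - (B[0,2]**2 + v0*(B[0,1]*B[0,2] - B[0,0]*B[1,2])) / B[0,0]
    alpha = np.sqrt(lam / B[0,0])
    beta = np.sqrt(lam * B[0,0] / (B[0,0]*B[1,1] - B[0,1]**2))
    gamma = -B[0,1] * alpha**2 * beta / lam
    u0 = gamma * v0 / beta - B[0,2] * alpha**2 / lam
    return np.array([[alpha, gamma, u0], [0.0, beta, v0], [0.0, 0.0, 1.0]])

def nearest_rotation(Q):
    """Appendix C: the rotation minimizing ||R - Q||_F is U V^T, Q = U S V^T."""
    U, _, Vt = np.linalg.svd(Q)
    return U @ Vt
```

A useful sanity check is a round trip: build B from a known A and verify `intrinsics_from_B` recovers A; project noise-free correspondences through a known H and verify the DLT recovers it up to scale; perturb a rotation and verify `nearest_rotation` returns an orthogonal matrix close to it.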
D Camera Calibration Under Known Pure Translation

As said in Sect. 4, if the model plane undergoes a pure translation, the technique proposed in this paper will not work. However, camera calibration is possible if the translation is known, as in the setup of Tsai's technique [23]. From (2), we have t = αA⁻¹h₃, where α = 1/‖A⁻¹h₁‖. The translation between two positions i and j is then given by

    t⁽ⁱʲ⁾ = t⁽ⁱ⁾ − t⁽ʲ⁾ = A⁻¹(α⁽ⁱ⁾h₃⁽ⁱ⁾ − α⁽ʲ⁾h₃⁽ʲ⁾).

(Note that although both H⁽ⁱ⁾ and H⁽ʲ⁾ are estimated up to their own scale factors, they can be rescaled to a single common scale factor using the fact that the motion is a pure translation.) If only the translation direction is known, we get two constraints on A. If we additionally know the translation magnitude, then we have yet another constraint on A. Full calibration is then possible from two planes.
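The rescaling argument can be checked numerically: with a synthetic A, R, and two plane positions differing by a pure translation, the formula t = αA⁻¹h₃ with α = 1/‖A⁻¹h₁‖ recovers each translation despite the per-homography scale factors. All values below are made up for illustration, and positive scale factors are assumed (a negative scale flips the recovered sign).

```python
import numpy as np

# Synthetic camera and plane pose (illustrative values, not from the paper).
A = np.array([[1000.0, 0.0, 320.0],
              [0.0, 950.0, 240.0],
              [0.0, 0.0, 1.0]])
c, s = np.cos(0.3), np.sin(0.3)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t1 = np.array([0.1, -0.2, 3.0])
t2 = t1 + np.array([0.5, 0.0, 0.4])     # pure translation between the two views

def homography(t, unknown_scale):
    # H = s * A [r1 r2 t]; each H is known only up to its own scale s.
    return unknown_scale * A @ np.column_stack([R[:, 0], R[:, 1], t])

def translation_from_H(H):
    Ainv = np.linalg.inv(A)
    alpha = 1.0 / np.linalg.norm(Ainv @ H[:, 0])   # alpha = 1/||A^-1 h1||
    return alpha * Ainv @ H[:, 2]                  # t = alpha * A^-1 h3

t_ij = translation_from_H(homography(t1, 2.0)) - translation_from_H(homography(t2, 0.7))
print(t_ij)   # equals t1 - t2 = [-0.5, 0.0, -0.4]
```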
References

[1] S. Bougnoux. From projective to Euclidean space under any practical situation, a criticism of self-calibration. In Proceedings of the 6th International Conference on Computer Vision, pages 790–796, Jan. 1998.
[2] D. C. Brown. Close-range camera calibration. Photogrammetric Engineering, 37(8):855–866, 1971.
[3] B. Caprile and V. Torre. Using vanishing points for camera calibration. The International Journal of Computer Vision, 4(2):127–140, Mar. 1990.
[4] W. Faig. Calibration of close-range photogrammetry systems: Mathematical formulation. Photogrammetric Engineering and Remote Sensing, 41(12):1479–1486, 1975.
[5] O. Faugeras. Three-Dimensional Computer Vision: a Geometric Viewpoint. MIT Press, 1993.
[6] O. Faugeras, T. Luong, and S. Maybank. Camera self-calibration: theory and experiments. In G. Sandini, editor, Proc. 2nd ECCV, volume 588 of Lecture Notes in Computer Science, pages 321–334, Santa Margherita Ligure, Italy, May 1992. Springer-Verlag.
[7] O. Faugeras and G. Toscani. The calibration problem for stereo. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 15–20, Miami Beach, FL, June 1986. IEEE.
[8] S. Ganapathy. Decomposition of transformation matrices for robot vision. Pattern Recognition Letters, 2:401–412, Dec. 1984.
[9] D. Gennery. Stereo-camera calibration. In Proceedings of the 10th Image Understanding Workshop, pages 101–108, 1979.
[10] G. Golub and C. van Loan. Matrix Computations. The Johns Hopkins University Press, Baltimore, Maryland, 3rd edition, 1996.
[11] R. Hartley. Self-calibration from multiple views with a rotating camera. In J.-O. Eklundh, editor, Proceedings of the 3rd European Conference on Computer Vision, volume 800–801 of Lecture Notes in Computer Science, pages 471–478, Stockholm, Sweden, May 1994. Springer-Verlag.
[12] R. Hartley. In defence of the 8-point algorithm. In Proceedings of the 5th International Conference on Computer Vision, pages 1064–1070, Boston, MA, June 1995. IEEE Computer Society Press.
[13] R. I. Hartley. An algorithm for self calibration from several views. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 908–912, Seattle, WA, June 1994. IEEE.
[14] D. Liebowitz and A. Zisserman. Metric rectification for perspective images of planes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 482–488, Santa Barbara, California, June 1998. IEEE Computer Society.
[15] Q.-T. Luong. Matrice Fondamentale et Calibration Visuelle sur l'Environnement: Vers une plus grande autonomie des systèmes robotiques. PhD thesis, Université de Paris-Sud, Centre d'Orsay, Dec. 1992.
[16] Q.-T. Luong and O. Faugeras. Self-calibration of a moving camera from point correspondences and fundamental matrices. The International Journal of Computer Vision, 22(3):261–289, 1997.
[17] S. J. Maybank and O. D. Faugeras. A theory of self-calibration of a moving camera. The International Journal of Computer Vision, 8(2):123–151, Aug. 1992.
[18] J. More. The Levenberg-Marquardt algorithm, implementation and theory. In G. A. Watson, editor, Numerical Analysis, Lecture Notes in Mathematics 630. Springer-Verlag, 1977.
[19] I. Shimizu, Z. Zhang, S. Akamatsu, and K. Deguchi. Head pose determination from one image using a generic model. In Proceedings of the IEEE Third International Conference on Automatic Face and Gesture Recognition, pages 100–105, Nara, Japan, Apr. 1998.
[20] C. C. Slama, editor. Manual of Photogrammetry. American Society of Photogrammetry, fourth edition, 1980.
[21] G. Stein. Accurate internal camera calibration using rotation, with analysis of sources of error. In Proc. Fifth International Conference on Computer Vision, pages 230–236, Cambridge, Massachusetts, June 1995.
[22] B. Triggs. Autocalibration from planar scenes. In Proceedings of the 5th European Conference on Computer Vision, pages 89–105, Freiburg, Germany, June 1998.
[23] R. Y. Tsai. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE Journal of Robotics and Automation, 3(4):323–344, Aug. 1987.
[24] G. Wei and S. Ma. A complete two-plane camera calibration method and experimental comparisons. In Proc. Fourth International Conference on Computer Vision, pages 439–446, Berlin, May 1993.
[25] G. Wei and S. Ma. Implicit and explicit camera calibration: Theory and experiments. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(5):469–480, 1994.
[26] J. Weng, P. Cohen, and M. Herniou. Camera calibration with distortion models and accuracy evaluation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(10):965–980, Oct. 1992.
[27] Z. Zhang. Motion and structure from two perspective views: From essential parameters to Euclidean motion via fundamental matrix. Journal of the Optical Society of America A, 14(11):2938–2950, 1997.
More informationComputer Vision  part II
Computer Vision  part II Review of main parts of Section B of the course School of Computer Science & Statistics Trinity College Dublin Dublin 2 Ireland www.scss.tcd.ie Lecture Name Course Name 1 1 2
More informationDATA ANALYSIS II. Matrix Algorithms
DATA ANALYSIS II Matrix Algorithms Similarity Matrix Given a dataset D = {x i }, i=1,..,n consisting of n points in R d, let A denote the n n symmetric similarity matrix between the points, given as where
More informationA PHOTOGRAMMETRIC APPRAOCH FOR AUTOMATIC TRAFFIC ASSESSMENT USING CONVENTIONAL CCTV CAMERA
A PHOTOGRAMMETRIC APPRAOCH FOR AUTOMATIC TRAFFIC ASSESSMENT USING CONVENTIONAL CCTV CAMERA N. Zarrinpanjeh a, F. Dadrassjavan b, H. Fattahi c * a Islamic Azad University of Qazvin  nzarrin@qiau.ac.ir
More informationSegmentation of building models from dense 3D pointclouds
Segmentation of building models from dense 3D pointclouds Joachim Bauer, Konrad Karner, Konrad Schindler, Andreas Klaus, Christopher Zach VRVis Research Center for Virtual Reality and Visualization, Institute
More informationProjection Center Calibration for a Colocated Projector Camera System
Projection Center Calibration for a Colocated Camera System Toshiyuki Amano Department of Computer and Communication Science Faculty of Systems Engineering, Wakayama University Sakaedani 930, Wakayama,
More informationA Introduction to Matrix Algebra and Principal Components Analysis
A Introduction to Matrix Algebra and Principal Components Analysis Multivariate Methods in Education ERSH 8350 Lecture #2 August 24, 2011 ERSH 8350: Lecture 2 Today s Class An introduction to matrix algebra
More information2D Geometric Transformations. COMP 770 Fall 2011
2D Geometric Transformations COMP 770 Fall 2011 1 A little quick math background Notation for sets, functions, mappings Linear transformations Matrices Matrixvector multiplication Matrixmatrix multiplication
More informationCamera PanTilt EgoMotion Tracking from PointBased Environment Models
Camera PanTilt EgoMotion Tracking from PointBased Environment Models Jan Böhm Institute for Photogrammetry, Universität Stuttgart, Germany Keywords: Tracking, image sequence, feature point, camera,
More informationAP Physics 1 and 2 Lab Investigations
AP Physics 1 and 2 Lab Investigations Student Guide to Data Analysis New York, NY. College Board, Advanced Placement, Advanced Placement Program, AP, AP Central, and the acorn logo are registered trademarks
More information3 Orthogonal Vectors and Matrices
3 Orthogonal Vectors and Matrices The linear algebra portion of this course focuses on three matrix factorizations: QR factorization, singular valued decomposition (SVD), and LU factorization The first
More information