3 FRAMES FOR COLOUR MORPHOLOGY


In the last chapter we introduced the problem of false colours, and argued why we believe the problem with these colours is not that they are false (different from any of the input colours), but rather that they are unexpected or unintuitive. Furthermore, we argued that invariance to certain transformations played an important role in this (also see Fig. 3.1). In this chapter we will see what happens when one imposes certain invariances on product-order-based colour morphology. In addition to developing some group-invariant frames suitable for colour morphology, this chapter will also provide a much more compact statement of the main result of the previous chapter and enlarge its scope to include certain non-orthogonal transforms (saturation scaling). Some results are given, demonstrating how enforcing invariance to changes in hue or saturation can make the false colour problem much less objectionable. Additionally, it is shown that in both noise reduction and texture classification tasks our approach gives a (further) objective improvement over product orders. Our approach thus mitigates the false colour problem and leads to better results than a traditional product-order-based approach. Illustrations explain why and when invariance leads to better results.

Figure 3.1: The infimum of red and blue is black (in the RGB colour space). In contrast, the infimum of magenta and cyan is blue. If we desaturate red and blue, then their infimum suddenly becomes grey. To an observer (who does not know about RGB colour spaces), it is not immediately obvious why this should be so. Why is the infimum of red and blue not also some in-between (purplish) colour, for example?

3.1 frame constructions

We first give a quick recap of the relevant theory developed in the previous chapter. So let us assume that $T$ is a group of linear transformations on the Hilbert space $\mathbb{R}^K$, and that $\varphi_0$ is an operator on $\mathbb{R}^K$ that is not invariant to $T$. If $\{e_k\}_{k \in K}$ is the original basis used for $\mathbb{R}^K$, then we consider the set $\{f_i\}_{i \in I}$, with $I = T \times K$ and $f_{\tau,k} = \tau^* e_k$ (where $\tau^*$ is the adjoint of $\tau$).

Under certain assumptions this set forms a frame that can be used to construct a $T$-invariant version of $\varphi_0$. As we saw before, the above choice of frame has the interesting property that the analysis operator essentially corresponds to taking the components of all transformed versions of a vector:
$$(Fa)_{\tau,k} = (\tau^* e_k) \cdot a = e_k \cdot (\tau a) = (\tau a)_k.$$
As a consequence, $(Fa)_\tau$ is considered equal to $\tau a$. Similarly, if $u \in \mathbb{R}^I$, then $u_\tau$ (with $\tau \in T$) denotes the vector in $\mathbb{R}^K$ described by the coefficients $u_{\tau,k}$, with $k \in K$.

To construct an operator $\varphi : \mathbb{R}^K \to \mathbb{R}^K$ that is invariant to $T$, based on an operator $\varphi_0$, we first define an operator $\psi : \mathbb{R}^I \to \mathbb{R}^I$ such that $\psi(u)_\tau = \varphi_0(u_\tau)$ for all $\tau \in T$ and $u \in \mathbb{R}^I$. So $\psi(Fa)$ computes $\varphi_0(\tau a)$ for all transformations $\tau \in T$. We then simply define $\varphi$ as $F^+ \circ \psi \circ F$, which Theorem 3.1 shows to be invariant to $T$ under a mild condition on the norm on $\mathbb{R}^I$.

Theorem 3.1. Assume $\varphi_0$ is an operator on $\mathbb{R}^K$, and that $\psi : \mathbb{R}^I \to \mathbb{R}^I$ is defined by $\psi(u)_\tau = \varphi_0(u_\tau)$ (for all $\tau \in T$ and $u \in \mathbb{R}^I$). The operator $\varphi : \mathbb{R}^K \to \mathbb{R}^K$, defined as $\varphi = F^+ \circ \psi \circ F$, is then invariant to the transformation group $T$ on the Hilbert space $\mathbb{R}^K$, provided the norm on $\mathbb{R}^I$ is invariant to a permutation of $T$ (in the sense that if $P$ is a permutation operator mapping each index $(\tau,k) \in I$ to some index $(p(\tau),k) \in I$, then $\|u\| = \|Pu\|$ for any choice of $u \in \mathbb{R}^I$).¹

¹ This condition is connected to the concept of a Haar measure [143].

Proof. It should be clear that $(F\tau a)_\sigma = \sigma\tau a = (Fa)_{\sigma\tau}$ for all $\tau,\sigma \in T$ and $a \in \mathbb{R}^K$. Thus, $\psi(F\tau a)_\sigma = \psi(Fa)_{\sigma\tau}$. In other words, $\psi(F\tau a)$ is a permuted version of $\psi(Fa)$. Denote the effect of this permutation by the operator $P$, so that $F\tau = PF$ and $\psi(F\tau a) = P\psi(Fa)$. Similarly, $P^{-1}$ is used to denote the inverse permutation, and we naturally have $F\tau^{-1} = P^{-1}F$.

What remains is to show that $F^+P = \tau F^+$. By definition, $F^+u$ (with $u \in \mathbb{R}^I$) is the least-squares solution $a$ to $Fa = u$, while $F^+Pu$ is the least-squares solution $b$ to $Fb = Pu$. As these linear systems are over-determined, these least-squares solutions are unique. It is thus sufficient to show that $b$ must equal $\tau a$. This follows directly from the invariance of the norm to any permutation. Due to this, the least-squares solution to $Fb = Pu$ must equal the least-squares solution to $(P^{-1}F)b = u$, or equivalently, $F\tau^{-1}b = u$. We can now see that $b$ must indeed equal $\tau a$, and thus $F^+P$ must equal $\tau F^+$. We thus have
$$\varphi(\tau a) = F^+\psi(F\tau a) = F^+P\psi(Fa) = \tau F^+\psi(Fa) = \tau\varphi(a).$$
This concludes the proof.
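Since any practical use samples the group finitely (see Section 3.2), the whole construction fits in a few lines. The following is a minimal sketch (numpy only; `invariantize` and the other names are our own, hypothetical ones), with the group given as a list of $K \times K$ matrices:

```python
import numpy as np

def analysis_operator(transforms):
    """Row (tau, k) of F is tau* e_k, i.e. the k-th row of the matrix of tau,
    so that (F a)_(tau, k) = (tau a)_k. Stacking the matrices gives exactly F."""
    return np.vstack(transforms)                  # shape (|T| * K, K)

def invariantize(phi0, transforms, a):
    """phi = F^+ psi F from Theorem 3.1, for a finite sample of the group."""
    K = a.size
    F = analysis_operator(transforms)
    u = F @ a                                     # lift: blocks u_tau = tau a
    v = np.concatenate([phi0(u[i*K:(i+1)*K])      # psi: apply phi0 per block
                        for i in range(len(transforms))])
    return np.linalg.pinv(F) @ v                  # project back via F^+
```

Here `phi0` is any operator on $\mathbb{R}^K$; with `transforms` a sampled group of rotations this reproduces the frames of the next sections.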

Figure 3.2: The top and bottom bars cycle through all possible hues. The middle two bars compute the infimum over a range of hues at each position (indicated by the white rectangles). The top-middle bar uses a product order on the RGB basis, giving a brighter and more saturated response at the primary colours than elsewhere. In contrast, the bottom-middle bar uses the hue-invariant frame, giving a similar result for all hues.

3.1.1 Hue invariance

The previous chapter argued that the problem with false colours is not that they are different from the original colours, but rather that they do not make any sense. Why should the infimum of two colours that are both (perceptually) close to magenta be black or grey, while the infimum of two colours similarly close to green is green(ish)? Here we show how to construct a frame-dual pair that is invariant to hue rotations, resulting in a qualitatively similar result in both cases; see Fig. 3.2.

Example 2.3 showed that the set of all hue rotations can be considered a transformation group $T$ on the RGB colour space. The group $T$ is (group) isomorphic to the two-dimensional rotation group $SO(2)$. A rotation over an angle $\theta$ around the grey axis will be denoted by $R_{g,\theta}$, but instead of writing $u_{R_{g,\theta}}$ we will use the more readable $u_\theta$. For simplicity, the RGB colour space is considered to be $\mathbb{R}^3$ ($\mathbb{R}^K$ with $K = \{1,2,3\}$), with an orthonormal basis corresponding to the red, green and blue components. The index set $I$ can then be considered to be $[0,2\pi) \times \{1,2,3\}$ and $f_{\theta,k} = R_{g,\theta}^* e_k$. Taking $u,v \in \mathbb{R}^I$, we now define the inner product on $\mathbb{R}^I$ by using the inner product on $\mathbb{R}^3$:
$$u \cdot v = \frac{1}{2\pi} \int_0^{2\pi} u_\theta \cdot v_\theta \, d\theta.$$
Clearly, this inner product is invariant to the group $T$. The set of vectors $\{f_i\}_{i \in I}$ as defined above forms a tight frame with frame constants $A = B = 1$:
$$\|Fa\|^2 = Fa \cdot Fa = \frac{1}{2\pi} \int_0^{2\pi} \|R_{g,\theta}\, a\|^2 \, d\theta = \frac{1}{2\pi} \int_0^{2\pi} \|a\|^2 \, d\theta = \|a\|^2,$$
where the middle step uses that rotations are orthogonal.
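To make this concrete, here is a small sketch (assuming numpy and scipy; not the thesis code) that samples the hue-rotation group, stacks the rotation matrices into a discrete frame, and checks tightness under the correspondingly weighted inner product:

```python
import numpy as np
from scipy.spatial.transform import Rotation

grey = np.ones(3) / np.sqrt(3)          # unit vector along the grey axis

def hue_frame(n=24):
    """Sample n hue rotations R_{g,theta}; the frame vectors R* e_k are the
    rows of each rotation matrix, so stacking the matrices gives F."""
    thetas = 2 * np.pi * np.arange(n) / n
    return np.vstack([Rotation.from_rotvec(grey * t).as_matrix() for t in thetas])

n = 24
F = hue_frame(n)
a = np.array([224.0, 120.0, 16.0])
# Tightness with A = B = 1: the continuous weight 1/(2 pi) becomes 1/n, so
# ||F a||^2 / n should equal ||a||^2.
print((F @ a) @ (F @ a) / n, a @ a)     # both approximately 64832
```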

Figure 3.3: Illustration of taking the infimum in a hue-invariant representation. (a) A colour $v = (224, 120, 16)$, represented using its red, green and blue components; as established before, taking the infimum is not invariant to changes in hue in that representation. (b) The same colour as in (a), but now in the hue-invariant frame; the colour is shown using a polar plot of the inner product with the frame vectors. (c) The infimum of two colours (indicated using orange and magenta polar region plots) in a hue-invariant frame, with $w = F R_{g,60^\circ} v$.

This means that the frame is its own (canonical) dual. Also, since the frame itself is invariant to $T$ (as illustrated in Fig. 3.3), it forms a $T$-invariant frame-dual pair (with itself), by the corresponding theorem of the previous chapter.

The synthesis operator corresponding to the hue-invariant frame found above can be given explicitly. By the definitions above and the linearity of the inner product, we must have:
$$F^* u \cdot a = u \cdot Fa = \frac{1}{2\pi} \int_0^{2\pi} u_\theta \cdot (R_{g,\theta}\, a) \, d\theta = \frac{1}{2\pi} \int_0^{2\pi} (R_{g,\theta}^*\, u_\theta) \cdot a \, d\theta = \left( \frac{1}{2\pi} \int_0^{2\pi} R_{g,\theta}^*\, u_\theta \, d\theta \right) \cdot a.$$
So $F^* u = \frac{1}{2\pi} \int_0^{2\pi} R_{g,\theta}^*\, u_\theta \, d\theta$.

Note that the above is geared towards single colour values, rather than colour images. For treating colour images we simply apply the same technique per pixel. So to filter a colour image using the rotation-invariant frame discussed here, we would first compute a greyscale image for every vector in the frame, then apply some greyscale morphological operator on each of these images, and then finally combine all the greyscale images (according to the canonical dual frame).
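This per-pixel recipe is easy to sketch in code. The following (numpy/scipy; `frame_filter` and its defaults are our own, hypothetical names) lifts an RGB image to the channels of any sampled frame `F` (rows are frame vectors), filters every channel with a greyscale operator, and recombines via the canonical dual:

```python
import numpy as np
from scipy import ndimage

def frame_filter(img, F, op=ndimage.grey_erosion, size=5):
    """Lift an (H, W, 3) RGB image to one greyscale channel per frame vector,
    filter every channel with a greyscale morphological operator, and project
    back per pixel through the canonical dual frame (the pseudoinverse of F)."""
    H, W, _ = img.shape
    channels = (img.reshape(-1, 3) @ F.T).reshape(H, W, -1)   # analysis, per pixel
    filtered = np.stack([op(channels[..., i], size=(size, size))
                         for i in range(channels.shape[-1])], axis=-1)
    F_pinv = np.linalg.pinv(F)                                # canonical dual frame
    return (filtered.reshape(-1, F.shape[0]) @ F_pinv.T).reshape(H, W, 3)
```

With `F = hue_frame(24)` from the earlier sketch this gives a hue-invariant erosion; swapping in `ndimage.median_filter` for `op` gives the frame median used later in Section 3.3.1.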

3.1.2 Rotation invariance

Here we will see how to construct operators on RGB colour images that are invariant to all rotations of the colour space. This is particularly useful in the context of filters that should act on differences in colour regardless of the direction of that difference. For example, you might want to suppress colours that are outliers in their neighbourhood, regardless of the direction (in the colour space) in which they are an outlier.

The group $T$ of all 3D rotations is $SO(3)$. The image/range of the frame analysis operator can thus be considered to be $\mathbb{R}^{SO(3) \times 3}$. Elements of $\mathbb{R}^{SO(3) \times 3}$ should be interpreted as vectors whose components can be indexed by elements from $I = SO(3) \times \{1,2,3\}$. The next task is to define a suitable inner product on $\mathbb{R}^{SO(3) \times 3}$. We will build our inner product on top of the one for $\mathbb{R}^3$. As we saw in Theorem 3.1, the inner product should be invariant with respect to permutations of $SO(3)$. Take $\int_{SO(3)} f(R) \, dR$ to be the integral of $f(R)$ over the entire group of rotations, weighing all rotations equally (formally, the Haar integral [143]). We assume that $\int_{SO(3)} 1 \, dR$ is some finite non-zero constant $A$. The inner product can then simply be defined as follows for $u$ and $v$ in $\mathbb{R}^{SO(3) \times 3}$:
$$u \cdot v = \int_{SO(3)} u_R \cdot v_R \, dR.$$
It can be verified that this inner product is invariant to the permutations alluded to in Theorem 3.1 (which makes sense, given that we weigh all rotations equally).

Now we need to find $F^+$. This proves particularly easy, since the frame is tight ($A$ equals $B$ in Eq. (1.1), see Section 1.3.2):
$$\|Fa\|^2 = (Fa) \cdot (Fa) = \int_{SO(3)} \|(Fa)_R\|^2 \, dR = \int_{SO(3)} \|Ra\|^2 \, dR = A \, \|a\|^2.$$
So $F^+ = \frac{1}{A} F^*$. The last step above is valid because rotations are orthogonal.
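For the implementation (see also Section 3.2) the Haar integral over $SO(3)$ is approximated by a finite, uniform sample of rotations. A sketch (scipy's `Rotation.random` draws rotations uniformly with respect to the Haar measure; the rest is our own naming):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def so3_frame(n=50, seed=0):
    """Approximate the SO(3)-indexed frame by n Haar-uniform random rotations;
    stacking the rotation matrices gives the sampled analysis operator F."""
    return np.vstack(Rotation.random(n, random_state=seed).as_matrix())

F = so3_frame()
n = F.shape[0] // 3
# The sampled frame is tight with constant A = n: F^T F = n I, so F^+ = F^T / n.
print(np.allclose(F.T @ F, n * np.eye(3)))    # True
```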

3.1.3 Saturation invariance

As shown in Section 2.4.2, the construction used above breaks down when the transformations have eigenvalues with non-unit magnitude. However, in some cases we can still construct frames invariant to such transformations. We will illustrate this by constructing a frame invariant to scaling the saturation of a colour in the RGB colour space. The saturation, or colourfulness, is taken to be the distance to the grey axis, which is the line through both white and black. As group $T$ we take the set of linear transformations $\{S_s \mid s \in \mathbb{R} \text{ and } 0 < s\}$, with $S_s$ defined by
$$S_s a = s\,a + (1-s)\,\frac{a \cdot g}{g \cdot g}\,g$$
for $a \in \mathbb{R}^3$ and $g$ representing grey (it is not particularly important which shade of grey). These transformations only scale the component of $a$ that is perpendicular to the grey axis, and can thus be interpreted as scaling the saturation.

Figure 3.4: Illustration of the process for constructing a saturation-invariant frame, in 2D with original basis vector $e_1 = (1,0)$ and grey vector $g = (1,1)$, so that $S_s e_1 = s\,(1,0) + (1-s)\,\tfrac{1}{2}(1,1) = \tfrac{1}{2}(1+s,\,1-s)$. If the original basis vector is scaled perpendicularly to the grey axis, it traces the horizontal line. If the operator $\varphi_0$ is invariant to scalar multiplication, then we can avoid vectors of arbitrarily large magnitude by normalizing them. This maps the horizontal line (of infinite extent) onto the quarter circle. For the frame we only take the limit vectors on either end, $f_{0,1} = \lim_{s\to 0} \frac{(1+s,\,1-s)}{\sqrt{2(1+s^2)}} = \frac{1}{\sqrt{2}}(1,1)$ and $f_{\infty,1} = \lim_{s\to\infty} \frac{(1+s,\,1-s)}{\sqrt{2(1+s^2)}} = \frac{1}{\sqrt{2}}(1,-1)$; these are eigenvectors of all scalings.

The next step is to assume that the operator $\varphi_0$ is already invariant to a uniform scaling of the colour space (so scaling all components equally). This is typically the case for morphological operators, and allows us to normalize all vectors. Now the trick is to realize that scaling (and normalizing) one of the original basis vectors by zero and by infinity (in the limit) produces two vectors that are eigenvectors of all saturation scalings: one corresponding to a purely achromatic part of the colour space and one corresponding to a purely chromatic part of the colour space (as illustrated in Fig. 3.4). So instead of creating a frame indexed by $T \times \{1,2,3\}$, we create a frame indexed by $I = \{0,\infty\} \times \{1,2,3\}$. Noting that $S_s^* = S_s$ for all $s \in \mathbb{R}$, the six frame vectors $\{f_i\}_{i \in I}$ are then defined as
$$f_{0,k} = \lim_{s\to 0} \frac{S_s e_k}{\|S_s e_k\|} \quad\text{and}\quad f_{\infty,k} = \lim_{s\to\infty} \frac{S_s e_k}{\|S_s e_k\|}.$$

Alternatively, using the concept of Haar measure [143], this construction can be seen as a natural generalization of the original construction. Although the Haar measure on the group of saturation scalings (and normalizations) is not finite, one can see that it causes the extreme vectors (those scaled by zero, or the limit cases of scaling by infinity) to have an infinitely higher measure in the frame than all other vectors. A frame containing those extreme vectors with measure one and the rest with measure zero naturally follows.
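Numerically the limit vectors can be approximated straight from the definition. A sketch with numpy (our own naming; the tiny and huge values of $s$ stand in for the limits):

```python
import numpy as np

g = np.ones(3)                                     # a grey vector (shade irrelevant)

def S(s, a):
    """Saturation scaling: scales the component of a perpendicular to grey by s."""
    return s * a + (1 - s) * (a @ g) / (g @ g) * g

def normalize(v):
    return v / np.linalg.norm(v)

# s -> 0 collapses every basis vector onto the grey axis; s -> infinity keeps
# only the chromatic (perpendicular) part. Both limits are eigenvectors of S_s.
f0   = [normalize(S(1e-9, e)) for e in np.eye(3)]  # all three equal g / ||g||
finf = [normalize(S(1e9,  e)) for e in np.eye(3)]  # e_k minus its grey component
frame = np.vstack([f0[0], *finf])                  # grey vector + three chromatic ones
```

This yields exactly the four distinct vectors described next: one grey vector and three vectors perpendicular to it.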

In practice, this means we get a frame consisting of a grey vector and three vectors that are perpendicular to that grey vector. These vectors are all eigenvectors of all $S_s \in T$. In particular, for some $S_s \in T$, the grey vector ($f_{0,1} = f_{0,2} = f_{0,3}$) has eigenvalue $1$, while the other three vectors ($f_{\infty,1}$, $f_{\infty,2}$ and $f_{\infty,3}$) have eigenvalue $s$. Recalling that $S_s^* = S_s$, this means that $(F S_s a)_0 = (Fa)_0$ and $(F S_s a)_\infty = s\,(Fa)_\infty$ for any $S_s \in T$. Since we assumed that $\varphi_0$ is invariant to multiplication by a scalar, we can easily show that $F^+ \circ \psi \circ F$ is invariant to $T$, with $\psi$ as in Theorem 3.1.

It should be noted that the above approach is not terribly useful for much more general scalings. For example, illumination changes can be modelled by multiplying the red, green and blue channels with independent weights. The eigenvectors of such a transformation are in general only the red, green and blue vectors, thus the only possible frame would be the original basis. Clearly, a different method would be needed to combine invariance to illumination changes and other (non-linear) transformations, such as those discussed by Angulo [4, §3.4], with invariance to rotations.

3.2 implementation details

The above considers frames containing infinitely many (even uncountably many) vectors; this can obviously not be implemented directly. In practice, the easiest solution is to use a finite set of vectors to approximate the frame. One method is to sample the group of transformations used to create the frame, and construct a frame (and canonical dual) based on this. Alternatively, we can also take a sample of (uniformly distributed) vectors in the ideal frame, and construct a discrete frame based on these. We can then take the Moore–Penrose pseudoinverse of the sampled frame to find its canonical dual frame.

A practical implementation of an operator on a lifted image is fairly straightforward using Theorem 3.1: the original operator simply has to be called multiple times, on different, transformed, versions of the original (if we use a finite sample of the group of transformations). Alternatively, if the frame is sampled rather than the group, a greyscale operator can be applied to all the (scalar) channels corresponding to the different frame vectors. In the (2D) examples shown here the sampled frames were about ten to fifty times as large as the original basis. Note that if we use this construction, memory can be saved by first computing the channel corresponding to the first frame vector, applying the operator and projecting back only that channel, and then repeating this for each remaining channel (this works because both $F$ and $F^+$ are linear); a sketch of this channel-at-a-time scheme follows below.

The running time of our prototype frame-based operators is simply a (large) constant factor more than that of the operators based on bases. However, we expect that in practice many optimizations could be made to decrease the runtime impact of our method. For example, in addition to not always needing the absolute best quality possible, we expect that the choice of frame vectors could be optimized for specific images to further reduce the number of directions to sample (in the degenerate case of greyscale images encoded as RGB images, one or two directions would suffice, for example). Finally, some frame-based filters can be computed directly on the original representation (see Chapter 5).

(Code and an online demo are available via the Bitly links MwNF3F and YarcHY; the first for the noise reduction and texture classification tasks, the other for the rest.)
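A sketch of that channel-at-a-time scheme (numpy/scipy; hypothetical names, mirroring the earlier `frame_filter` sketch):

```python
import numpy as np
from scipy import ndimage

def frame_filter_lowmem(img, F, op=ndimage.grey_erosion, size=5):
    """Channel-at-a-time variant of frame_filter: lift, filter and project back
    a single frame channel at a time, accumulating the output. This is valid
    because both F and F^+ are linear."""
    H, W, _ = img.shape
    flat = img.reshape(-1, 3)
    F_pinv = np.linalg.pinv(F)                   # shape (3, number of frame vectors)
    out = np.zeros((H * W, 3))
    for i in range(F.shape[0]):
        chan = (flat @ F[i]).reshape(H, W)       # one greyscale channel
        chan = op(chan, size=(size, size))
        out += chan.reshape(-1, 1) * F_pinv[:, i]  # back-project just this channel
    return out.reshape(H, W, 3)
```

Only one lifted channel is ever held in memory, at the cost of re-reading the input once per frame vector.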

3.3 results

Aptoula and Lefèvre [13] evaluated different orders for use with mathematical morphology. We use the same noise reduction and texture classification tasks to demonstrate the objective benefits of group-invariant frames. In our comparison we consider the RGB basis, the hue-invariant frame and the (fully) rotation-invariant frame. However, we first explore two more theoretical (and illustrative) examples of historical significance.

3.3.1 Median filters

Figure 3.5: The original (a) is filtered using a median filter applied independently to the channels using the original basis (b), a vector median filter (c), and a median filter applied independently to the channels in a rotation-invariant frame (d). In each case, both a plot of the red (R) and green (G) channels and the actual (colour) image are shown. The median filter has a (centered) support of five pixels. The spike in the red channel in (a) is assumed to be noise.

Astola et al. [19] gave two examples to illustrate the problem with marginal processing (product-order-based operators). In the first example (Fig. 3.5), the per-channel median filter does not pick up on the fact that yellow is a different colour from both red and green, and thus finds that just before the switch to red, there are three out of five pixels that have a non-zero red component, instead of seeing one yellow pixel, two green pixels and two red pixels. In contrast, the other two filters indeed give much less importance to the yellow pixel, resulting in a more natural transition from red to green.

The second example by Astola et al. (see Fig. 3.6) shows how processing channels independently can result in an unnatural bias towards filtering only along the axes. In particular, when processing the channels independently there is a clear flattening of the result in the direction of the axes, and a simple 45° rotation of the input results in a completely different output. In contrast, neither the vector median filter, nor the median filter using the rotation-invariant frame, shows such a directional bias.

Figure 3.6: The original (a) is filtered using a median filter applied independently to the channels in the original basis (b), a vector median filter (c), and a median filter applied to the channels in a rotation-invariant frame (d). The 1D signals are plotted as strings of points in the 2D value(/colour) space.

Astola et al. [19] suggested solving both these issues by creating a vector median filter. This was based on minimizing the sum of the distances to all vectors (sketched below). As can be seen in Figs. 3.5 and 3.6, our method gives very similar results.³ However, in contrast to the vector median filter, our approach generalizes easily to any operator (not just the median filter), and simply follows logically from enforcing certain constraints.

³ It should be noted that the original vector median filter always picked the result from one of the input values. We have chosen not to do this, as this forces an arbitrary decision to be made in ambiguous cases (like the one shown in Fig. 3.5).
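For reference, the vector median is only a few lines (a sketch, not Astola et al.'s original code):

```python
import numpy as np

def vector_median(window):
    """Classic vector median (Astola et al. [19]): return the sample in the
    window minimising the sum of Euclidean distances to all other samples."""
    W = np.asarray(window, dtype=float)            # shape (n, channels)
    dists = np.linalg.norm(W[:, None, :] - W[None, :, :], axis=-1).sum(axis=1)
    return W[np.argmin(dists)]

# e.g. a window with one yellow outlier between red and green pixels:
print(vector_median([(1, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0), (0, 1, 0)]))
```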

3.3.2 Noise reduction

Aptoula and Lefèvre [13] used a noise reduction task to compare the objective utility of different ordering schemes. This section replicates their results as they relate to product orders, while showing a (slight) improvement for hue- and rotation-invariant frames. For this, Gaussian noise was added to an image (Lena), the noisy image was lifted to one of three different frames, filtered, and then projected back to the original colour space. Only the RGB colour space was used, as this makes the most sense with this particular test (after all, the noise is added in this space), and led to the best performance in the original experiments. Also, the aim is primarily to demonstrate the potential impact of using group-invariant frames.

Figure 3.7: The Lena image, with image values in the range [0, 255], was corrupted with Gaussian noise ($\sigma = 32$), with different correlation coefficients ($\rho$). The OCCO operator was applied to the corrupted images in the original RGB basis, the hue-invariant frame and the rotation-invariant frame. For comparison purposes, the same filter was also applied using a lexicographical order (ordering first on red, then on green, then on blue). The values in the top part of the table show the normalised mean squared error (multiplied by 100), while the bottom part uses the RMSE remaining. As can be seen, using a group-invariant frame can give a clear improvement. [Table: normalised MSE (×100) and RMSE remaining (%) for each $\rho$, under RGB, Hue, Rot. and Lex; numeric values not recoverable.]

The noise was Gaussian, with the covariance matrix
$$\Sigma = \sigma^2 \begin{pmatrix} 1 & \rho & \rho \\ \rho & 1 & \rho \\ \rho & \rho & 1 \end{pmatrix}.$$
Here $\rho$ denotes the correlation coefficient. To generate Gaussian random numbers with the above covariance matrix: first find a matrix $\Sigma^{\frac{1}{2}}$ such that $\Sigma^{\frac{1}{2}} (\Sigma^{\frac{1}{2}})^T = \Sigma$; the correlated noise is then generated by generating normally distributed samples and multiplying them by $\Sigma^{\frac{1}{2}}$.

The noise reduction filter was the OCCO filter. This filter is built from a structural opening $\gamma_b$ and a structural closing $\phi_b$, both using the same structuring element $b$. In this case a $3 \times 3$ cross was used (the origin and its four nearest neighbours). Both filters operate on $L = \mathrm{Fun}(E,C)$ using a product order, with $C$ the (RGB) colour space. Following Aptoula and Lefèvre [13], the operator is defined by
$$\mathrm{OCCO}_b(a) = \tfrac{1}{2}\left(\gamma_b(\phi_b(a)) + \phi_b(\gamma_b(a))\right) \qquad (a \in L). \tag{3.1}$$
The OCCO operator was lifted to the group-invariant frames by taking $\mathrm{OCCO}_b(u)_\tau = \mathrm{OCCO}_b(u_\tau)$. An image was then filtered by lifting it to $M = \mathrm{Fun}(E, C^T)$, applying the (lifted) OCCO operator, and then projecting back to $L = \mathrm{Fun}(E,C)$ using the canonical dual frame.

The results of the noise reduction task are summarized in Fig. 3.7. The error metric used originally was the normalised mean squared error (MSE), which is just the squared $L_2$-norm of the error in the filtered image divided by the squared $L_2$-norm of the original image (see [13] for details).
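Both ingredients of the experiment are straightforward to sketch (numpy/scipy; our own, hypothetical names, with scipy's grey opening and closing standing in for $\gamma_b$ and $\phi_b$):

```python
import numpy as np
from scipy import ndimage

def correlated_noise(shape, sigma=32.0, rho=0.5, seed=0):
    """Gaussian RGB noise with covariance sigma^2 ((1 - rho) I + rho J), i.e.
    ones on the diagonal and rho off-diagonal, generated by multiplying i.i.d.
    normal samples with a matrix square root of Sigma (here: Cholesky)."""
    Sigma = sigma**2 * ((1 - rho) * np.eye(3) + rho * np.ones((3, 3)))
    L = np.linalg.cholesky(Sigma)                  # L @ L.T == Sigma
    z = np.random.default_rng(seed).standard_normal(shape + (3,))
    return z @ L.T

def occo(channel, footprint=None):
    """OCCO of one greyscale channel (Eq. (3.1)): the average of the
    close-of-open and open-of-close."""
    if footprint is None:                          # 3 x 3 cross: origin + 4-neighbours
        footprint = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], dtype=bool)
    oc = ndimage.grey_closing(ndimage.grey_opening(channel, footprint=footprint),
                              footprint=footprint)
    co = ndimage.grey_opening(ndimage.grey_closing(channel, footprint=footprint),
                              footprint=footprint)
    return 0.5 * (oc + co)
```

Applying `occo` per RGB channel gives the product-order variant; applying it per frame channel, after lifting with the earlier sketches, gives the group-invariant variants.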

Figure 3.8: The effect of rotation invariance on the OCCO filter. Each diagram shows part of a one-dimensional signal in which each value is a 2D vector. The signals are plotted as in Fig. 3.6. The error-free signals are not shown, but assumed to be linear ramps of the form $u[i] = (1/2,\,1/2) + i\,(\cos\alpha, \sin\alpha)$. Shown in grey are the perturbed signals, where the value at $i = 0$ is offset at a right angle to the original signal. Shown in black are the OCCO-filtered signals (the structuring element has a width of 3). The top row shows the results of using the OCCO filter directly on the 2D basis, the bottom row shows the results for the rotation-invariant frame (in 2D).

Unfortunately, this error metric is quite difficult to interpret when comparing results for different images. Therefore, we also use the root mean squared error (RMSE) remaining: the RMSE in the filtered image divided by the RMSE in the noisy image. For example, looking at the table in Fig. 3.7 we see that noise is reduced to approximately 40% (or less) of its original strength.

The group-invariant frames clearly lead to better noise reduction.⁴ This makes sense: the OCCO filter essentially removes bumps. When using the RGB basis it only removes bumps in the red, green and blue directions. So yellow noise is less likely to be suppressed than red or green noise. Using the hue-invariant frame remedies this, but still does not cover all possible directions (for example the direction $(1,-1,0)$). This could explain why the rotation-invariant frame (whose frame vectors point in all directions) performs best. In fact, as can be seen in Fig. 3.8, the OCCO filter on a 2D basis performs perfectly for signals that are aligned with one of the basis directions, but quite badly for signals that are even slightly misaligned. In contrast, the OCCO filter on the rotation-invariant frame gives a much more consistent result, effectively the average behaviour of the filter on the basis, averaged over all rotations of the space (this is better than the RGB basis result in most cases).

The effect of correlation is also interesting. As can be seen in Fig. 3.7, increasing correlation decreases the benefit of using group-invariant frames. This likely has to do with the fact that in many images the main variation is along (or close to) the grey axis.

⁴ The experiment was repeated using a set of 20 images, with similar results.

Figure 3.9: For the rotation-invariant frame, inverting the green channel of Lena leads to improved performance when filtering images corrupted with correlated noise; presumably this effect is a result of having less of the data aligned with the noise. The performance for the RGB basis remains essentially the same (as the filter is self-dual and applied per channel). For comparison purposes, the values from Fig. 3.7 are shown between parentheses. [Table: RMSE remaining (%) for RGB and Rot. at two values of $\rho$; only the fragments 36.5 (36.5) and 38.0 (39.2) for Rot., and (40.3) and (40.2) for RGB, survived extraction.]

As the correlated noise is strongest along the grey axis, noise and data values line up. As points on a line can only be ordered in two ways, rotational invariance does not help here. This is corroborated by repeating the experiment using a version of Lena where the green channel has been inverted. In that case the predominant gradient directions are no longer aligned with the grey axis, and the effect is less pronounced (see Fig. 3.9). This shows the ability of group-invariant frames to do better than a basis, while gracefully degrading to similar performance when they cannot do better.

3.3.3 Texture classification

In this task a number of textures are grouped into classes. Part of the textures serve as training data and the rest is used for testing. The textures are taken from Outex13 [147]. Nearest-neighbour classification is used, based on the Euclidean distance between feature vectors. Again we limit ourselves to the RGB colour space (and derived group-invariant frames), as using different colour spaces had hardly any effect on the performance in the original tests using a product order [13].

The feature vectors are based on morphological covariance. To compute the morphological covariance of an image (for a specific shift), a point-wise infimum is computed of the image and a shifted version of itself. The result is then summed over all positions (but independently for each channel), and divided by the sum of the pixel values of the original image. This is done for a fixed set of shifts, and when using group-invariant frames, for a finite (fixed) subset of the frame. Shifts in four different directions (0°, 45°, 90° and 135°), and in each direction over distances of 1 to 49 pixels (in steps of two), were used in all cases. In total this gives 25 feature values per direction and 100 per channel. In the RGB case 3 channels were used; for the group-invariant frames, subsets of 150 frame vectors were used to create the channels. (A sketch of the per-channel covariance computation is given below.)
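A sketch of that covariance computation (numpy; our own naming, and with wrap-around borders as a simplifying assumption):

```python
import numpy as np

def morphological_covariance(channel, shifts):
    """Morphological covariance of one (greyscale or frame) channel: for each
    shift, sum the point-wise infimum of the channel and a shifted copy of
    itself, normalised by the sum of the original channel. Border handling is
    simplified here: np.roll wraps around."""
    total = channel.sum()
    feats = [np.minimum(channel, np.roll(channel, (dy, dx), axis=(0, 1))).sum() / total
             for dy, dx in shifts]
    return np.array(feats)

# 25 shifts per direction, distances 1, 3, ..., 49; e.g. the horizontal direction:
shifts = [(0, d) for d in range(1, 50, 2)]
```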

Figure 3.10: The classification rates for a simple nearest-neighbour scheme based on features computed using different frames. Clearly, the group-invariant frames again lead to better performance than the original RGB basis. On the left a couple of the textures used in the test are shown. [Table: classification rate (%) for RGB, Hue and Rot.; numeric values not recoverable.]

(Note that in this task we do not need to project back to the original colour space.) This follows the description given by Aptoula and Lefèvre [13].

The results summarized in Fig. 3.10 again show improved performance using the group-invariant frames. However, there are some complications. First of all, the classification results are better than in the original work, even using the RGB basis. This is probably due to slight differences in implementation. Secondly, we need some tricks to make sure that the features can be computed in a sensible manner when using the group-invariant frames. Specifically, for each direction we make sure that the projection of the entire RGB cube is non-negative. Otherwise some directions give much less stable results than others (due to division by numbers close to zero).

Still, that the group-invariant frames lead to improved performance is not surprising. As an extreme case, consider a texture that simply oscillates between pure red and green. When analysing this using just the original RGB basis, such a texture is indistinguishable from a texture that oscillates between black and yellow (a combination of red and green), as we effectively have no information about the relative phase of the red and green components. On the other hand, using the rotation-invariant frame we also look at the projection onto the vector $(1,-1,0)$, giving excellent discrimination. This is illustrated in Fig. 3.11.

3.3.4 Further examples

The effect of saturation invariance is illustrated in Fig. 3.12. Without saturation invariance, the infimum of pure red and blue is black, while the infimum of desaturated red and blue is grey. With a saturation-invariant frame, the infimum of pure red and blue is grey as well. Conceptually, this makes a lot more sense; both colours are pretty light.

Using a hue- and saturation-invariant frame provides an interesting alternative to filtering in the HSL colour space. Instead of representing saturation and hue using a magnitude and an angle (which is hard to order sensibly), we effectively represent them as a function of the angle.

Figure 3.11: The two signals in (a) and (b) are plotted using the original basis in (c) and (d), respectively. In (c) to (f), the signals are plotted against time (top) and in the 2D value space (bottom), like in Fig. 3.6. The basis vectors used for the middle row are shown as arrows in the bottom row.

Figure 3.12: Three different pairs of colours and their infima using different frames. From left to right: the original basis (same as in Fig. 3.1), a saturation-invariant frame, and a saturation- and hue-invariant frame. Gamma correction (sRGB) was used; this avoids big differences in lightness between the saturated and desaturated colours.

Figures 3.13 and 3.14 show some results on more natural images. Figure 3.13 shows the result of applying a dilation using the original basis, as well as using a hue-invariant frame and a rotation-invariant frame. Figure 3.14 does the same for the OCCO operator. Since the OCCO operator is a self-dual operator,⁵ it is an excellent candidate for use with a rotation-invariant frame. In fact, the OCCO operator can be derived from taking the opening of a closing as $\varphi_0$ in Theorem 3.1, with $T$ the group consisting of the identity mapping and the inversion mapping.

⁵ A self-dual operator is taken to be invariant to inverting the image.

3.4 summary

This chapter provided a more compact statement of the main result of the previous chapter. Furthermore, it was shown that this result can be extended to certain non-orthogonal transformations. Using several examples this chapter illustrated how using the newly created frames can lead to more intuitive and higher quality results.

Figure 3.13: Dilations (by a square) of the originals (top row) using different frames, from left to right: the RGB basis, a hue-invariant frame and a rotation-invariant frame. Rows three and five are zoomed versions of rows two and four. A pink haze appears between orange and blue patches on the colour chart when using the RGB basis. The same kind of effect is quite visible all over the parrot. Using a hue-invariant frame solves these problems (middle column). Using the rotation-invariant frame (right column) is similar to averaging a dilation and an erosion, but otherwise shows no significant colour artefacts. (The parrot image is based on a photograph by Luc Viatour, used under the CC BY 2.0 license.)

Figure 3.14: Similar to Fig. 3.13, except that now the self-dual OCCO operator is used, with a square structuring element. Again colour artefacts are visible in the left column, especially between the orange and blue patches on the colour chart (the transition region becomes green). Using a hue-invariant frame (middle column), and especially using a rotation-invariant frame (right column), eliminates these artefacts. Similarly, in the left column the back of the neck of the parrot is suddenly bright green, even though it is originally blueish and there is only yellow and dark green in the neighbourhood. Again, this artefact is largely gone in the middle and right columns.


More information

by the matrix A results in a vector which is a reflection of the given

by the matrix A results in a vector which is a reflection of the given Eigenvalues & Eigenvectors Example Suppose Then So, geometrically, multiplying a vector in by the matrix A results in a vector which is a reflection of the given vector about the y-axis We observe that

More information

Section 1.1. Introduction to R n

Section 1.1. Introduction to R n The Calculus of Functions of Several Variables Section. Introduction to R n Calculus is the study of functional relationships and how related quantities change with each other. In your first exposure to

More information

Math 115A - Week 1 Textbook sections: 1.1-1.6 Topics covered: What is a vector? What is a vector space? Span, linear dependence, linear independence

Math 115A - Week 1 Textbook sections: 1.1-1.6 Topics covered: What is a vector? What is a vector space? Span, linear dependence, linear independence Math 115A - Week 1 Textbook sections: 1.1-1.6 Topics covered: What is Linear algebra? Overview of course What is a vector? What is a vector space? Examples of vector spaces Vector subspaces Span, linear

More information

Practical Guide to the Simplex Method of Linear Programming

Practical Guide to the Simplex Method of Linear Programming Practical Guide to the Simplex Method of Linear Programming Marcel Oliver Revised: April, 0 The basic steps of the simplex algorithm Step : Write the linear programming problem in standard form Linear

More information

1 Symmetries of regular polyhedra

1 Symmetries of regular polyhedra 1230, notes 5 1 Symmetries of regular polyhedra Symmetry groups Recall: Group axioms: Suppose that (G, ) is a group and a, b, c are elements of G. Then (i) a b G (ii) (a b) c = a (b c) (iii) There is an

More information

University of Lille I PC first year list of exercises n 7. Review

University of Lille I PC first year list of exercises n 7. Review University of Lille I PC first year list of exercises n 7 Review Exercise Solve the following systems in 4 different ways (by substitution, by the Gauss method, by inverting the matrix of coefficients

More information

Orthogonal Diagonalization of Symmetric Matrices

Orthogonal Diagonalization of Symmetric Matrices MATH10212 Linear Algebra Brief lecture notes 57 Gram Schmidt Process enables us to find an orthogonal basis of a subspace. Let u 1,..., u k be a basis of a subspace V of R n. We begin the process of finding

More information

Polynomial Invariants

Polynomial Invariants Polynomial Invariants Dylan Wilson October 9, 2014 (1) Today we will be interested in the following Question 1.1. What are all the possible polynomials in two variables f(x, y) such that f(x, y) = f(y,

More information

Manifold Learning Examples PCA, LLE and ISOMAP

Manifold Learning Examples PCA, LLE and ISOMAP Manifold Learning Examples PCA, LLE and ISOMAP Dan Ventura October 14, 28 Abstract We try to give a helpful concrete example that demonstrates how to use PCA, LLE and Isomap, attempts to provide some intuition

More information

Expression. Variable Equation Polynomial Monomial Add. Area. Volume Surface Space Length Width. Probability. Chance Random Likely Possibility Odds

Expression. Variable Equation Polynomial Monomial Add. Area. Volume Surface Space Length Width. Probability. Chance Random Likely Possibility Odds Isosceles Triangle Congruent Leg Side Expression Equation Polynomial Monomial Radical Square Root Check Times Itself Function Relation One Domain Range Area Volume Surface Space Length Width Quantitative

More information

1 2 3 1 1 2 x = + x 2 + x 4 1 0 1

1 2 3 1 1 2 x = + x 2 + x 4 1 0 1 (d) If the vector b is the sum of the four columns of A, write down the complete solution to Ax = b. 1 2 3 1 1 2 x = + x 2 + x 4 1 0 0 1 0 1 2. (11 points) This problem finds the curve y = C + D 2 t which

More information

Lecture 2: Homogeneous Coordinates, Lines and Conics

Lecture 2: Homogeneous Coordinates, Lines and Conics Lecture 2: Homogeneous Coordinates, Lines and Conics 1 Homogeneous Coordinates In Lecture 1 we derived the camera equations λx = P X, (1) where x = (x 1, x 2, 1), X = (X 1, X 2, X 3, 1) and P is a 3 4

More information

Sensitivity Analysis 3.1 AN EXAMPLE FOR ANALYSIS

Sensitivity Analysis 3.1 AN EXAMPLE FOR ANALYSIS Sensitivity Analysis 3 We have already been introduced to sensitivity analysis in Chapter via the geometry of a simple example. We saw that the values of the decision variables and those of the slack and

More information

3. INNER PRODUCT SPACES

3. INNER PRODUCT SPACES . INNER PRODUCT SPACES.. Definition So far we have studied abstract vector spaces. These are a generalisation of the geometric spaces R and R. But these have more structure than just that of a vector space.

More information

Recall that two vectors in are perpendicular or orthogonal provided that their dot

Recall that two vectors in are perpendicular or orthogonal provided that their dot Orthogonal Complements and Projections Recall that two vectors in are perpendicular or orthogonal provided that their dot product vanishes That is, if and only if Example 1 The vectors in are orthogonal

More information

Separation Properties for Locally Convex Cones

Separation Properties for Locally Convex Cones Journal of Convex Analysis Volume 9 (2002), No. 1, 301 307 Separation Properties for Locally Convex Cones Walter Roth Department of Mathematics, Universiti Brunei Darussalam, Gadong BE1410, Brunei Darussalam

More information

1. Introduction to image processing

1. Introduction to image processing 1 1. Introduction to image processing 1.1 What is an image? An image is an array, or a matrix, of square pixels (picture elements) arranged in columns and rows. Figure 1: An image an array or a matrix

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors Chapter 6 Eigenvalues and Eigenvectors 6. Introduction to Eigenvalues Linear equations Ax D b come from steady state problems. Eigenvalues have their greatest importance in dynamic problems. The solution

More information