Creating and Detecting Doctored and Virtual Images: Implications to The Child Pornography Prevention Act
TR , Dartmouth College, Computer Science

Creating and Detecting Doctored and Virtual Images: Implications to The Child Pornography Prevention Act

Hany Farid
Department of Computer Science and Center for Cognitive Neuroscience
Dartmouth College, Hanover, NH

Abstract

The 1996 Child Pornography Prevention Act (CPPA) extended the existing federal criminal laws against child pornography to include certain types of virtual porn. In 2002, the United States Supreme Court found that portions of the CPPA, being overly broad and restrictive, violated First Amendment rights. The Court ruled that images containing an actual minor or portions of a minor are not protected, while computer generated images depicting a fictitious minor are constitutionally protected. In this report I outline various forms of digital tampering, placing them in the context of this recent ruling. I also review computational techniques for detecting doctored and virtual (computer generated) images.

Hany Farid, 6211 Sudikoff Lab, Computer Science Department, Dartmouth College, Hanover, NH USA ([email protected]). This work was supported by an Alfred P. Sloan Fellowship, a National Science Foundation CAREER Award (IIS ), a departmental National Science Foundation Infrastructure Grant (EIA ), and under Award No DT-CS-K001 from the Office for Domestic Preparedness, U.S. Department of Homeland Security (points of view in this document are those of the author and do not necessarily represent the official position of the U.S. Department of Homeland Security).
Contents

1 Introduction
2 Digital Tampering
  2.1 Composited
  2.2 Morphed
  2.3 Re-touched
  2.4 Enhanced
  2.5 Computer Generated
  2.6 Painted
3 The Child Pornography Prevention Act
4 Ashcroft v. Free Speech Coalition
5 Is it Real or Virtual?
  5.1 Morphed & Re-touched
    5.1.1 Color Filter Array
    5.1.2 Duplication
  5.2 Computer Generated
    5.2.1 Statistical Model
    5.2.2 Classification
    5.2.3 Results
  5.3 Painted
6 Modern Technology
  6.1 Image-Based Rendering
  6.2 3D Laser Scanning
  6.3 3D Motion Capture
7 Discussion
A Exposing Digital Composites
  A.1 Re-Sampling
  A.2 Double JPEG Compression
  A.3 Signal to Noise
  A.4 Gamma Correction

1 Introduction

The past decade has seen remarkable growth in our ability to capture, manipulate, and distribute digital images. The average user today has access to high-performance computers, high-resolution digital cameras, and sophisticated photo-editing and computer graphics software. And while this technology has led to many exciting advances in art and science, it has also led to some complicated legal issues. In 1996, for example, the Child Pornography Prevention Act (CPPA) extended the existing federal criminal laws against child pornography to include certain types of virtual porn. In 2002 the United States Supreme Court found that portions of the CPPA, being overly broad and restrictive, violated First Amendment rights. The Court ruled that images containing an actual minor or portions of a minor are not protected, while computer generated images depicting a fictitious minor are constitutionally protected. This ruling naturally leads to some important and complex technological questions: given an image, how can we determine if it is authentic, has been tampered with, or is computer generated? In this report I outline various forms of digital tampering, and review computational techniques for detecting digitally doctored and virtual (computer generated) images. I also describe more recent and emerging technologies that may further complicate the legal issues surrounding digital images and video.
2 Digital Tampering

It probably wasn't long after Nicéphore Niépce created the first permanent photographic image in 1826 that tampering with photographs began. Some of the most notorious examples of early photographic tampering were instigated by Lenin, when he had "enemies of the people" removed from photographs, Figure 1. This type of photographic tampering required a high degree of technical expertise and specialized equipment. Such tampering is, of course, much easier today. Due to the inherent malleability of digital images, the advent of low-cost and high-performance computers, high-resolution digital cameras, and sophisticated photo-editing and computer graphics software, the average user today can create, manipulate and alter digital images with relative ease. There are many different ways in which digital images can be manipulated or altered. I describe below six different categories of digital tampering; the distinction between these will be important to the subsequent discussion of the U.S. Supreme Court's ruling on the CPPA.
Figure 1: Lenin and Trotsky (top) and the result of photographic tampering (bottom) that removed, among others, Trotsky.

Figure 2: An original image (top) and a composited image (bottom). The original images were downloaded from freefoto.com.

2.1 Composited

Compositing is perhaps the most common form of digital tampering, a typical example of which is shown in Figure 2. Shown in the top panel of this figure is an original image, and shown below is a doctored image. In this example, the tampering consisted of overlaying the head of another person (taken from an image not shown here), onto the shoulders of the original kayaker. Beginning with the original image to be altered, this type of compositing was a fairly simple matter of: (1) finding a second image containing an appropriately posed head; (2) overlaying the new head onto the original image; (3) removing any background pixels around the new head; and (4) re-touching the pixels between the head and shoulders to create a seamless match. These manipulations were performed in Adobe Photoshop, and took approximately 30 minutes to complete. The credibility of such a forgery will depend on how well the image components are matched in terms of size, pose, color, quality, and lighting. Given a well matched pair of images, compositing, in the hands of an experienced user, is fairly straightforward.

2.2 Morphed

Image morphing is a digital technique that gradually transforms one image into another image. Shown in Figure 3, for example, is the image of a person (the source image) being morphed into the image of an alien doll (the target image). As shown, the shape and appearance of the source slowly takes on the shape and appearance of the target, creating intermediate images that are part human, part alien. This morphed sequence is automatically generated once a user establishes a correspondence between similar features in the source and target images (top panel of Figure 3).
Image morphing software is commercially and freely available; the software (xmorph) and images used in creating Figure 3 are available at xmorph.sourceforge.net. It typically takes approximately 20 minutes to create the required feature correspondence, although several iterations may be needed to find the feature correspondence that yields the visually desired morphing effect.
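Once both images have been geometrically warped to a common intermediate shape, the appearance half of a morph reduces to a per-pixel weighted average of the two images. A minimal numpy sketch of that blending step (the warping performed by tools such as xmorph is omitted, and the function name and toy images are illustrative only):

```python
import numpy as np

def cross_dissolve(source, target, steps):
    """Blend a source image into a target image.

    This is only the appearance half of a morph: a real morphing
    tool (e.g., xmorph) also geometrically warps both images toward
    an intermediate shape before blending.
    """
    frames = []
    for t in np.linspace(0.0, 1.0, steps):
        # Convex combination of the two (already aligned) images.
        frames.append((1.0 - t) * source + t * target)
    return frames

# Two toy 4x4 grayscale "images".
src = np.zeros((4, 4))
dst = np.full((4, 4), 100.0)
seq = cross_dissolve(src, dst, steps=5)
print([f.mean() for f in seq])  # means: 0.0, 25.0, 50.0, 75.0, 100.0
```

Without the geometric warp, intermediate frames of two unaligned faces would simply look like double exposures; the feature correspondence described above is what makes the intermediate images coherent.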
Figure 3: Shown on top are two original images overlayed with the feature correspondence required for morphing. Shown below are five images from a morphed sequence.

Figure 4: An original image of the actor Paul Newman (left), and a digitally re-touched image of a younger Newman (right).

Figure 5: An original image (top left) and the image enhanced to alter the color (top right), contrast (bottom left) and blur of the background cars (bottom right). The original image was downloaded from freefoto.com.

2.3 Re-touched

The term morphing has been applied by non-specialists to refer to a broader class of digital tampering which I will refer to as re-touching. Shown in Figure 4, for example, is an original image of the actor Paul Newman, and a digitally re-touched younger Newman. This tampering involved lowering the hairline, removing wrinkles, and removing the darkness under the eyes. These manipulations were a simple matter of copying and pasting small regions from within the same image, e.g., the wrinkles were removed by duplicating wrinkle-free patches of skin onto the wrinkled regions. While this form of tampering can, in the hands of an experienced user, shave a few years off of a person's appearance, it cannot create the radical changes in facial structure needed to produce, for example, a 12-year old Newman.

2.4 Enhanced

Shown in Figure 5 are an original image (top left), and three examples of image enhancement: (1) the blue motorcycle was changed to cyan and the red van in the background was changed to yellow; (2) the contrast of the entire image was increased, making the image appear to have been photographed on a bright sunny day; (3) the parked cars were blurred creating a narrower depth of focus as might occur when photographing with a wide aperture. This type of manipulation, unlike compositing, morphing or re-touching, is often no more than a few mouse clicks away in Photoshop. While this type of tampering cannot fundamentally alter the appearance or meaning of an image (as with compositing, morphing and re-touching), it can still have a subtle effect on the interpretation of an image; for example, simple enhancements can obscure or exaggerate image details, or alter the time of day in which the image appears to have been taken.
Figure 6: A computer generated model (left) and the resulting rendered image (right), by Alceu Baptistão.

2.5 Computer Generated

Composited, morphed, re-touched and enhanced images, as described in the previous sections, share the property that they typically alter the appearance of an actual photograph (either from a digital camera, or a film camera that was then digitally scanned). Computer generated images, in contrast, are generated entirely by a computer and a skilled artist/programmer (see Sections 6.1 and 6.2 for possible exceptions to this). Such images are generated by first constructing a three-dimensional model of an object (or person) that embodies the desired object shape. The model is then augmented to include color and texture. This complete model is then illuminated with a virtual light source(s), which can approximate a range of indoor or outdoor lighting conditions. This virtual scene is then rendered through a virtual camera to create a final image. Shown in Figure 6, for example, is a partially textured three-dimensional model of a person's head and the final rendered image. Once the textured three-dimensional model is constructed, an image of the same object from any viewpoint can be easily generated (e.g., we could view the character in Figure 6 from the side, above, or behind). Altering the pose of the object, however, is more involved as it requires changing the underlying three-dimensional model (e.g., animating the character in Figure 6 to nod her head requires updating the model for each head pose; realistic animation of the human form, however, is very difficult, see Section 6.3).

2.6 Painted

Starting with a blank screen, photo-editing software, such as Photoshop, allows a user to create digital works of art, similar to the way a painter would paint or draw on a traditional canvas. Unlike the forms of tampering described in the previous sections, this technique requires a high degree of artistic and technical talent, is very time consuming, and is unlikely to yield particularly realistic images.

3 The Child Pornography Prevention Act

The 1996 Child Pornography Prevention Act (CPPA) extended the existing federal criminal laws against child pornography to include certain types of digital images [1]. The CPPA banned, in part, two different forms of "virtual porn":

2256(8) child pornography means any visual depiction, including any photograph, film, video, picture, or computer or computer-generated image or picture, whether made or produced by electronic, mechanical, or other means, of sexually explicit conduct, where: (B) such visual depiction is, or appears to be, of a minor engaging in sexually explicit conduct; (C) such visual depiction has been created, adapted, or modified to appear that an identifiable minor is engaging in sexually explicit conduct;

Part (B) bans virtual child pornography that appears to depict minors but was produced by means other than using real children, e.g., computer generated images, Section 2.5. Part (C) prohibits virtual child pornography generated using the more common compositing and morphing techniques, Sections 2.1 and 2.2. Composited, morphed and re-touched images typically originate in photographs of an actual individual. Computer generated images, on the other hand, typically are generated in their entirety from a computer model and thus do not depict any actual individual (see Sections 6.1 and 6.2 for possible exceptions to this). As we will see next, this distinction was critical to the United States Supreme Court's consideration of the constitutionality of the CPPA.

4 Ashcroft v. Free Speech Coalition

In 2002, the United States Supreme Court considered the constitutionality of the CPPA in Ashcroft v. Free Speech Coalition [2].
In their 6-3 ruling, the Court found that portions of the CPPA, being overly broad and restrictive, violated First Amendment rights. Of particular importance was the Court's ruling on the different forms of virtual porn as described above. With respect to 2256(8)(B) the Court wrote, in part:

Virtual child pornography is not intrinsically related to the sexual abuse of children. While the Government asserts that the images can lead to actual instances of child abuse, the causal link is contingent and indirect. The harm does not necessarily follow from the speech, but depends upon some unquantified potential for subsequent criminal acts.

and went on to strike down this provision. With respect to 2256(8)(C) the Court wrote, in part:

Although morphed images may fall within the definition of virtual child pornography, they implicate the interests of real children and are in that sense closer to the images in Ferber(1). Respondents do not challenge this provision, and we do not consider it.

thus allowing this provision to stand. The Court, therefore, ruled that images containing a minor or portions of a minor are not protected, while computer generated images depicting a fictitious minor are constitutionally protected. With respect to the various forms of digital tampering outlined in Section 2, we need to consider the extent to which painted, computer generated and certain types of morphed and re-touched images can appear, to the casual eye, as authentic. We will also consider what technology is available to expose such images.

5 Is it Real or Virtual?

While the technology to alter digital media is developing at break-neck speeds, the technology to contend with the ramifications is lagging seriously behind. My students and I have been, for the past few years, developing a number of mathematical and computational tools to detect various types of tampering in digital media. Below I review some of this work (see also Appendix A).

(1) In New York v.
Ferber (1982), the United States Supreme Court upheld a New York statute prohibiting the production, exhibition or selling of any material that depicts any performance by a child under the age of 16 that includes actual or simulated sexual intercourse, deviate sexual intercourse, sexual bestiality, masturbation, sado-masochistic abuse or lewd exhibitions of the genitals.

5.1 Morphed & Re-touched

We have developed some general techniques that can be used to detect certain types of morphed and re-touched(2) images. These tools work best on high-quality and high-resolution digital images. I only briefly describe these techniques below; see the referenced full papers for complete details.

5.1.1 Color Filter Array

Most digital cameras capture color images using a single sensor in conjunction with an array of color filters. As a result, only one third of the samples in a color image are captured by the camera, the other two thirds being interpolated. This interpolation introduces specific correlations between the samples of a color image. When morphing or re-touching an image these correlations may be destroyed or altered. We have described the form of these correlations, and developed a method that quantifies and detects them in any portion of an image [11]. We have shown the general effectiveness of this technique in detecting traces of digital tampering, and analyzed its sensitivity and robustness to simple counter-attacks.

5.1.2 Duplication

A common manipulation when altering an image is to copy and paste portions of the image to conceal a person or object in the scene. If the splicing is imperceptible, little concern is typically given to the fact that identical (or virtually identical) regions are present in the image. We have developed a technique that can efficiently detect and localize duplicated regions in an image [8]. This technique works by first applying a principal component analysis (PCA) on small fixed-size image blocks to yield a reduced dimension representation.
This representation is robust to minor variations in the image due to additive noise or lossy compression. Duplicated regions are then detected by lexicographically sorting all of the image blocks. We have shown the efficacy of this technique on credible forgeries, and quantified its robustness and sensitivity to additive noise and lossy JPEG compression.

(2) In the hands of an experienced user digital re-touching, as shown in Figure 4, can shave a few years off of a person's appearance. These same techniques cannot, however, transform an adult into a child (e.g., 4-12 years old). These techniques are simply insufficient to create the radical changes in facial and body structure needed to produce such a young child. Even though such images are currently constitutionally protected, it is unreasonable, given the current technology, to assume that such images do not depict actual minors.
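A minimal numpy sketch of this block-duplication idea follows. It PCA-reduces every overlapping block, quantizes the projections, and sorts them lexicographically so that duplicated blocks land next to one another; the parameter values and function name are illustrative, not those of [8], and the published method includes robustness refinements omitted here:

```python
import numpy as np

def find_duplicates(image, b=4, ncomp=3, quant=4):
    """Flag pairs of b-x-b blocks with (near-)identical content."""
    h, w = image.shape
    coords, blocks = [], []
    for y in range(h - b + 1):
        for x in range(w - b + 1):
            coords.append((y, x))
            blocks.append(image[y:y + b, x:x + b].ravel())
    X = np.asarray(blocks, dtype=float)
    X -= X.mean(axis=0)
    # PCA via SVD: project blocks onto the top principal components.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    proj = X @ Vt[:ncomp].T
    # Quantize so nearly identical blocks (noise, JPEG) collide.
    keys = np.round(proj / quant).astype(int)
    order = np.lexsort(keys.T[::-1])
    # Neighbors in sorted order with equal keys are duplicate candidates.
    pairs = []
    for i, j in zip(order[:-1], order[1:]):
        if np.array_equal(keys[i], keys[j]):
            pairs.append((coords[i], coords[j]))
    return pairs

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (24, 24)).astype(float)
img[12:18, 12:18] = img[2:8, 2:8]   # plant a copied region
print(find_duplicates(img)[:3])
```

In a real forensic setting one would additionally require that many candidate pairs share the same spatial offset before declaring a duplicated region, which suppresses accidental collisions.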
5.2 Computer Generated

Computer graphics rendering software is capable of generating highly photorealistic images that are often very difficult to differentiate from photographic images. We have, however, developed a method for differentiating between photographic and computer generated (photorealistic) images. Specifically, we have shown that a statistical model based on first- and higher-order wavelet statistics reveals subtle but significant differences between photographic and photorealistic images(3). I will review this technique below, and direct the interested reader to [4, 7] for more details.

5.2.1 Statistical Model

We begin with an image decomposition based on separable quadrature mirror filters (QMFs). The decomposition splits the frequency space into multiple orientations (a vertical, a horizontal and a diagonal subband) and scales. For a color (RGB) image, the decomposition is applied independently to each color channel. The resulting vertical, horizontal and diagonal subbands for scale i are denoted as V_i^c(x, y), H_i^c(x, y), and D_i^c(x, y) respectively, where c ∈ {r, g, b}. The first component of the statistical model consists of the first four order statistics (mean, variance, skewness and kurtosis) of the subband coefficient histograms at each orientation, scale and color channel. While these statistics describe the basic coefficient distributions, they are unlikely to capture the strong correlations that exist across space, orientation and scale. For example, salient image features such as edges tend to orient spatially in certain directions and extend across multiple scales. These image features result in substantial local energy across many scales, orientations and spatial locations. The local energy can be roughly measured by the magnitude of the QMF decomposition coefficients. As such, a strong coefficient in a horizontal subband may indicate that its left and right spatial neighbors in the same subband will also have a large value.
Similarly, if there is a coefficient with large magnitude at scale i, it is also very likely that its parent at scale i+1 will also have a large magnitude. In order to capture some of these higher-order statistical correlations, we collect a second set of statistics that are based on the errors in a linear predictor of coefficient magnitude. For the purpose of illustration, consider first a vertical band of the green channel at scale i, V_i^g(x, y). A linear predictor for the magnitude of these coefficients in a subset of all possible spatial, orientation, scale and color neighbors is given by:

|V_i^g(x, y)| = w_1 |V_i^g(x-1, y)| + w_2 |V_i^g(x+1, y)| + w_3 |V_i^g(x, y-1)| + w_4 |V_i^g(x, y+1)| + w_5 |V_{i+1}^g(x/2, y/2)| + w_6 |D_i^g(x, y)| + w_7 |D_{i+1}^g(x/2, y/2)| + w_8 |V_i^r(x, y)| + w_9 |V_i^b(x, y)|,   (1)

where |·| denotes absolute value and w_k are the weights. This linear relationship can be expressed more compactly in matrix form as:

v = Q w,   (2)

where v contains the coefficient magnitudes of V_i^g(x, y) strung out into a column vector, the columns of the matrix Q contain the neighboring coefficient magnitudes as specified in Equation (1), and w = (w_1 ... w_9)^T. The weights w are determined by minimizing the following quadratic error function:

E(w) = [v - Q w]^2.   (3)

This error function is minimized by differentiating with respect to w:

dE(w)/dw = -2 Q^T (v - Q w),   (4)

setting the result equal to zero, and solving for w to yield:

w = (Q^T Q)^{-1} Q^T v.   (5)

Given the large number of constraints (one per pixel) in only nine unknowns, it is generally safe to assume that the 9x9 matrix Q^T Q will be invertible. Given the linear predictor, the log error between the actual coefficient and the predicted coefficient magnitudes is:

p = log(v) - log(|Q w|),   (6)

where the log(·) is computed point-wise on each vector component.

(3) We have used a similar technique to detect messages hidden within digital images (steganography) [3, 5, 6].
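Equations (1)-(6) amount to an ordinary least-squares fit of nine weights. A numpy sketch, using random arrays as stand-ins for the subband magnitude maps (all array names, sizes, and data here are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-ins for coefficient magnitudes: the vertical band |V_i| being
# predicted, its parent |V_{i+1}|, the diagonal bands |D_i|, |D_{i+1}|,
# and the co-located vertical bands of the other two color channels.
Vi  = np.abs(rng.normal(size=(32, 32)))
Vi1 = np.abs(rng.normal(size=(16, 16)))
Di  = np.abs(rng.normal(size=(32, 32)))
Di1 = np.abs(rng.normal(size=(16, 16)))
Vr  = np.abs(rng.normal(size=(32, 32)))
Vb  = np.abs(rng.normal(size=(32, 32)))

# Interior coordinates, so every neighbor in Equation (1) exists.
ys, xs = np.mgrid[1:31, 1:31]
ys, xs = ys.ravel(), xs.ravel()

v = Vi[ys, xs]                          # magnitudes to predict
Q = np.column_stack([                   # one column per term of Eq. (1)
    Vi[ys, xs - 1], Vi[ys, xs + 1],     # left / right neighbors
    Vi[ys - 1, xs], Vi[ys + 1, xs],     # up / down neighbors
    Vi1[ys // 2, xs // 2],              # parent at the coarser scale
    Di[ys, xs], Di1[ys // 2, xs // 2],  # diagonal bands
    Vr[ys, xs], Vb[ys, xs],             # other color channels
])

# Equation (5): w = (Q^T Q)^{-1} Q^T v, via a numerically stable solver.
w, *_ = np.linalg.lstsq(Q, v, rcond=None)

# Equation (6): log error between actual and predicted magnitudes; its
# mean, variance, skewness and kurtosis become features.
p = np.log(v) - np.log(np.abs(Q @ w))
print(w.shape, p.shape)
```

On real subbands (rather than random noise) the fitted weights capture the cross-scale and cross-channel correlations described above, and the distribution of the log error p differs measurably between photographic and rendered images.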
This log error quantifies the correlation of a subband with its neighbors. The mean, variance, skewness and kurtosis of this error are collected to characterize its distribution. This process is repeated for scales i = 1, ..., n-1, and for the subbands V_i^r and V_i^b, where the linear predictors for these subbands are of the form:

|V_i^r(x, y)| = w_1 |V_i^r(x-1, y)| + w_2 |V_i^r(x+1, y)| + w_3 |V_i^r(x, y-1)| + w_4 |V_i^r(x, y+1)| + w_5 |V_{i+1}^r(x/2, y/2)| + w_6 |D_i^r(x, y)| + w_7 |D_{i+1}^r(x/2, y/2)| + w_8 |V_i^g(x, y)| + w_9 |V_i^b(x, y)|,   (7)
and

|V_i^b(x, y)| = w_1 |V_i^b(x-1, y)| + w_2 |V_i^b(x+1, y)| + w_3 |V_i^b(x, y-1)| + w_4 |V_i^b(x, y+1)| + w_5 |V_{i+1}^b(x/2, y/2)| + w_6 |D_i^b(x, y)| + w_7 |D_{i+1}^b(x/2, y/2)| + w_8 |V_i^r(x, y)| + w_9 |V_i^g(x, y)|.   (8)

A similar process is repeated for the horizontal and diagonal subbands. As an example, the predictor for the green channel takes the form:

|H_i^g(x, y)| = w_1 |H_i^g(x-1, y)| + w_2 |H_i^g(x+1, y)| + w_3 |H_i^g(x, y-1)| + w_4 |H_i^g(x, y+1)| + w_5 |H_{i+1}^g(x/2, y/2)| + w_6 |D_i^g(x, y)| + w_7 |D_{i+1}^g(x/2, y/2)| + w_8 |H_i^r(x, y)| + w_9 |H_i^b(x, y)|,   (9)

and

|D_i^g(x, y)| = w_1 |D_i^g(x-1, y)| + w_2 |D_i^g(x+1, y)| + w_3 |D_i^g(x, y-1)| + w_4 |D_i^g(x, y+1)| + w_5 |D_{i+1}^g(x/2, y/2)| + w_6 |H_i^g(x, y)| + w_7 |V_i^g(x, y)| + w_8 |D_i^r(x, y)| + w_9 |D_i^b(x, y)|.   (10)

For the horizontal and diagonal subbands, the predictors for the red and blue channels are determined in a similar way as was done for the vertical subbands, Equations (7)-(8). For each orientation, scale and color subband, a similar error metric, Equation (6), and error statistics are computed. For a multi-scale decomposition with scales i = 1, ..., n, the total number of basic coefficient statistics is 36(n-1) (12(n-1) per color channel), and the total number of error statistics is also 36(n-1), yielding a grand total of 72(n-1) statistics. These statistics form the feature vector to be used to discriminate between photographic and photorealistic images.

5.2.2 Classification

From the measured statistics of a training set of images labeled as photographic or photorealistic, our goal is to build a classifier that can determine to which category a novel test image belongs. To this end, a support vector machine (SVM) is employed. I will briefly describe, in increasing complexity, three classes of SVMs. The first, the linear separable case, is mathematically the most straightforward. The second, the linear non-separable case, contends with situations in which a solution cannot be found in the former case.
The third, the non-linear case, affords the most flexible classification scheme and often gives the best classification accuracy.

Linear Separable SVM: Denote the tuples (x_i, y_i), i = 1, ..., N as exemplars from a training set of photographic and photorealistic images. The column vector x_i contains the measured image statistics as outlined in the previous section, and y_i = -1 for photorealistic images, and y_i = +1 for photographic images. The linear separable SVM classifier amounts to a hyperplane that separates the positive and negative exemplars. Points which lie on the hyperplane satisfy the constraint:

w^T x_i + b = 0,   (11)

where w is normal to the hyperplane, |b|/||w|| is the perpendicular distance from the origin to the hyperplane, and ||·|| denotes the Euclidean norm. Define now the margin for any given hyperplane to be the sum of the distances from the hyperplane to the nearest positive and negative exemplar. The separating hyperplane is chosen so as to maximize the margin. If a hyperplane exists that separates all the data then, within a scale factor:

w^T x_i + b ≥ 1, if y_i = +1,   (12)
w^T x_i + b ≤ -1, if y_i = -1.   (13)

This pair of constraints can be combined into a single set of inequalities:

(w^T x_i + b) y_i - 1 ≥ 0, i = 1, ..., N.   (14)

For any given hyperplane that satisfies this constraint, the margin is 2/||w||. We seek, therefore, to minimize ||w||^2 subject to the constraints in Equation (14). For largely computational reasons, this optimization problem is reformulated using Lagrange multipliers, yielding the following Lagrangian:

L(w, b, α_1, ..., α_N) = (1/2)||w||^2 - Σ_i α_i (w^T x_i + b) y_i + Σ_i α_i,   (15)

where α_i are the positive Lagrange multipliers. This error function should be minimized with respect to w and b, while requiring that the derivatives of L(·) with respect to each α_i are zero and constraining α_i ≥ 0, for all i. Because this is a convex quadratic programming problem, a solution to the dual problem yields
the same solution for w, b, and α_1, ..., α_N. In the dual problem, the same error function L(·) is maximized with respect to the α_i, while requiring that the derivatives of L(·) with respect to w and b are zero, subject to the constraint that α_i ≥ 0. Differentiating with respect to w and b, and setting the results equal to zero, yields:

w = Σ_i α_i y_i x_i,   (16)

Σ_i α_i y_i = 0.   (17)

Substituting these equalities back into Equation (15) yields:

L_D = Σ_i α_i - (1/2) Σ_i Σ_j α_i α_j y_i y_j x_i^T x_j.   (18)

Maximization of this error function may be realized using any of a number of general purpose optimization packages that solve linearly constrained convex quadratic problems. A solution to the linear separable classifier, if it exists, yields values of α_i, from which the normal to the hyperplane can be calculated as in Equation (16), and from the Karush-Kuhn-Tucker (KKT) condition:

b = (1/N_s) Σ_i (y_i - w^T x_i),   (19)

where the sum runs over all i such that α_i ≠ 0, and N_s is the number of such terms. From the separating hyperplane, w and b, a novel exemplar, z, can be classified by simply determining on which side of the hyperplane it lies. If the quantity w^T z + b is greater than or equal to zero, then the exemplar is classified as photographic; otherwise the exemplar is classified as photorealistic.

Linear Non-Separable SVM: It is possible, and even likely, that the linear separable SVM will not yield a solution when, for example, the training data do not uniformly lie on either side of a separating hyperplane. Such a situation can be handled by softening the initial constraints of Equations (12) and (13). Specifically, these constraints are modified with slack variables, ξ_i, as follows:

w^T x_i + b ≥ 1 - ξ_i, if y_i = +1,   (20)
w^T x_i + b ≤ -1 + ξ_i, if y_i = -1,   (21)

with ξ_i ≥ 0, i = 1, ..., N. A training exemplar which lies on the wrong side of the separating hyperplane will have a value of ξ_i greater than unity. We seek a hyperplane that minimizes the total training error, Σ_i ξ_i, while still maximizing the margin.
A simple error function to be minimized is ||w||^2/2 + C Σ_i ξ_i, where C is a user selected scalar value whose chosen value controls the relative penalty for training errors. Minimization of this error is still a quadratic programming problem. Following the same procedure as in the previous section, the dual problem is expressed as maximizing the error function:

L_D = Σ_i α_i - (1/2) Σ_i Σ_j α_i α_j y_i y_j x_i^T x_j,   (22)

with the constraint that 0 ≤ α_i ≤ C. Note that this is the same error function as before, Equation (18), with the slightly different constraint that α_i is bounded above by C. Maximization of this error function and computation of the hyperplane parameters are accomplished as described in the previous section.

Non-Linear SVM: Fundamental to the SVMs outlined in the previous two sections is the limitation that the classifier is constrained to a linear hyperplane. It is often the case that a non-linear separating surface greatly improves classification accuracy. Non-linear SVMs afford such a classifier by first mapping the training exemplars into a higher (possibly infinite) dimensional Euclidean space in which a linear SVM is then employed. Denote this mapping as:

Φ : L → H,   (23)

which maps the original training data from L into H. Replacing x_i with Φ(x_i) everywhere in the training portion of the linear separable or non-separable SVMs of the previous sections yields an SVM in the higher-dimensional space H. It can, unfortunately, be quite inconvenient to work in the space H as this space can be considerably larger than the original L, or even infinite. Note, however, that the error function of Equation (22) to be maximized depends only on the inner products of the training exemplars, x_i^T x_j. Given a kernel function such that:

K(x_i, x_j) = Φ(x_i)^T Φ(x_j),   (24)

an explicit computation of Φ can be completely avoided. There are several choices for the form of the kernel function, for example, radial basis functions or polynomials.
Replacing the inner products Φ(x_i)^T Φ(x_j) with the kernel function K(x_i, x_j) yields an SVM in the space H with minimal computational impact over working in the original space L.
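In practice the quadratic program is rarely solved by hand; libraries such as scikit-learn wrap exactly this kernel formulation. A sketch on synthetic Gaussian stand-ins for the two classes of 216-dimensional feature vectors (the data, class separation, and parameter choices are illustrative only, not the paper's experiment):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, d = 400, 216
# Class +1 ("photographic") and class -1 ("photorealistic") drawn from
# two well-separated Gaussians -- not real image statistics.
X = np.vstack([rng.normal(0.0, 1.0, (n, d)),
               rng.normal(0.6, 1.0, (n, d))])
y = np.array([1] * n + [-1] * n)

# Non-linear SVM with RBF kernel K(x_i, x_j) = exp(-g ||x_i - x_j||^2);
# C is the slack penalty from the linear non-separable formulation.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X[::2], y[::2])            # train on every other exemplar
acc = clf.score(X[1::2], y[1::2])  # test on the held-out half
print(round(acc, 2))
```

The decision function computed by such a library is precisely Σ_i α_i y_i K(x_i, z) + b, with only the support vectors (α_i ≠ 0) retained.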
With the training stage complete, recall that a novel exemplar, z, is classified by determining on which side of the separating hyperplane (specified by w and b) it lies. Specifically, if the quantity w^T Φ(z) + b is greater than or equal to zero, then the exemplar is classified as photographic; otherwise the exemplar is classified as photorealistic. The normal to the hyperplane, w, of course now lives in the space H, making this testing impractical. As in the training stage, the classification can again be performed via inner products. From Equation (16):

w^T Φ(z) + b = Σ_i α_i y_i Φ(x_i)^T Φ(z) + b = Σ_i α_i y_i K(x_i, z) + b.   (25)

Thus both the training and classification can be performed in the higher-dimensional space, affording a more flexible separating hyperplane and hence better classification accuracy. We next show the performance of a non-linear SVM in the classification of images as photographic or photorealistic. The SVMs classify images based on the statistical feature vector as described in Section 5.2.1.

5.2.3 Results

We have constructed a database of 40,000 photographic and 6,000 photorealistic images(4). All of the images consist of a broad range of indoor and outdoor scenes, and the photorealistic images were rendered using a number of different software packages (e.g., 3D Studio Max, Maya, SoftImage 3D, PovRay, Lightwave 3D and Imagine). All of the images are color (RGB), JPEG compressed (with an average quality of 90%), and typically on the order of pixels in size. From this database of 46,000 images, statistics as described in Section 5.2.1 were extracted. To accommodate different image sizes, only the central region of each image was considered. For each image region, a four-level three-orientation QMF pyramid was constructed for each color channel, from which a 216-dimensional feature vector (72 per color channel) of coefficient and error statistics was collected.
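As a rough sketch of the bookkeeping behind these numbers, the coefficient-statistics half of such a feature vector can be computed as follows, substituting a crude Haar decomposition for the paper's QMF filters (the filters, and therefore the statistics, are stand-ins; the error statistics of Equation (6) would double the count to the 72 per channel cited above):

```python
import numpy as np

def haar_level(im):
    """One level of a separable Haar decomposition -- a crude
    stand-in for the QMF filters used in the paper."""
    a = (im[:, 0::2] + im[:, 1::2]) / 2      # lowpass across columns
    d = (im[:, 0::2] - im[:, 1::2]) / 2      # highpass across columns
    low = (a[0::2] + a[1::2]) / 2            # residual lowpass
    H = (a[0::2] - a[1::2]) / 2              # horizontal subband
    V = (d[0::2] + d[1::2]) / 2              # vertical subband
    D = (d[0::2] - d[1::2]) / 2              # diagonal subband
    return low, (V, H, D)

def coefficient_stats(channel, n=4):
    """Mean, variance, skewness, kurtosis of each subband, scales 1..n-1."""
    feats, low = [], channel
    for _ in range(n - 1):
        low, bands = haar_level(low)
        for s in bands:
            s = s.ravel()
            m, var = s.mean(), s.var()
            z = (s - m) / np.sqrt(var)
            feats += [m, var, (z**3).mean(), (z**4).mean()]
    return np.array(feats)

rng = np.random.default_rng(0)
rgb = rng.random((3, 256, 256))              # toy RGB image
feature = np.concatenate([coefficient_stats(c) for c in rgb])
print(feature.size)  # 3 channels x 12(n-1) = 36(n-1) = 108 basic statistics
```

With n = 4 this yields 36 basic statistics per channel; appending the 36 predictor-error statistics per channel gives the 216-dimensional vector used here.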
From the 46,000 feature vectors, 32,000 photographic and 4,800 photorealistic feature vectors were used to train a non-linear SVM. The remaining feature vectors were used to test the classifier. In the results presented here, the training/testing split was done randomly; the average testing classification accuracy over 100 such splits is reported. With a 0.5% false-negative rate (a photorealistic image mis-classified as photographic), the SVM correctly classified 72% of the photographic images.

⁴ The photographic images were downloaded from , and the photorealistic images were downloaded from and .

5.3 Painted

As described in Section 2.6, realistic digitally painted images are extremely difficult to create. I believe, nevertheless, that the technique described in the previous section for differentiating between photographic and computer generated images will also be able to differentiate between photographic and painted images. I have not, however, tested this directly for lack of the appropriate data (i.e., realistically painted images).

6 Modern Technology

As computer and imaging technology continues to develop, the distinction between real and virtual will become increasingly difficult to make. These days computer generated images, as described in Section 2.5, are typically generated entirely within the confines of a computer and the imagination of the artist/programmer. The creation of these images, therefore, does not involve photographs of actual people. With respect to child pornography, such images are currently constitutionally protected. I describe below three emerging technologies that employ people in various stages of computer generated imaging. These technologies have emerged in order to allow for the creation of more realistic images. While not yet readily available, I believe that these technologies will eventually make it increasingly difficult to determine if a person was involved in any stage of the creation of a computer generated image.
6.1 Image-Based Rendering

The appearance of computer generated images, as described in Section 2.5, is typically dictated by the color and texture that is mapped onto the virtual object or person. Image-based rendering allows actual photographs to be mapped directly onto the rendered object or person. For example, in creating a virtual person, the three-dimensional model may be created by the computer, and a photograph of a real person overlaid onto that model. The resulting rendered image is part real and part virtual: though the underlying three-dimensional shape of the person is virtual, the image may contain recognizable features of a real person.

6.2 3-D Laser Scanning

Computer generated images, as described in Section 2.5, are generated by first constructing a three-dimensional model of an object or person. Computer graphics software provides a number of convenient tools to aid in creating such models. A number of commercially available scanners directly generate a three-dimensional model of an actual object or person. The Cyberware whole body scanner, for example, captures (in less than 17 seconds) the three-dimensional shape and color of the entire human body, Figure 7. Shown in this figure is the scanner and five views of a full-body scan. This scanned model can then be imported into computer graphics software and texture mapped as desired. The resulting rendered image is part real and part virtual: though the image may not be recognized as a real person, the underlying three-dimensional model is that of a real person.

6.3 3-D Motion Capture

If you have seen any computer animated movie (e.g., Toy Story, Shrek, etc.), you may have noticed that the motion of human characters often looks stilted and awkward. At least two reasons for this are that the biomechanics of human motion are very difficult to model and recreate, and that we seem to have an intensely acute sensitivity to human motion (e.g., long before we can clearly see their face, we can often recognize a familiar person from their gait). A number of commercially available systems can capture the three-dimensional motion of a human as they undergo complex motions. Motion capture systems, such as that shown in Figure 8, measure the three-dimensional position of several key points on the human body, typically at the joints. This data can then be used to animate a completely computer generated person. The resulting animation is part real and part virtual: while the character does not depict a real person, the underlying motion is that of a real person.

Figure 8: The 3-D motion capture system by Meta Motion.

7 Discussion

Today's technology allows digital media to be altered and manipulated in ways that were simply impossible 20 years ago. Tomorrow's technology will almost certainly allow us to manipulate digital media in ways that today seem unimaginable. And as this technology continues to evolve, it will become increasingly difficult for the courts, the media, and, in general, the average person to keep pace with understanding its power and its limits.

I have tried in this report to review some of the current digital technology that allows for images to be created and altered, with the hope that it will help the courts and others grapple with some difficult technical and legal issues currently facing us. It is also my hope that the mathematical and computational techniques that we have developed (and continue to develop) will help the courts contend with this exciting and at times puzzling digital age.
Figure 7: The whole body 3-D scanner by Cyberware and five views of a full-body scan.

A Exposing Digital Composites

In addition to the development of a technique to differentiate between photographic and computer generated images, we have developed techniques to detect traces of tampering in photographic images that would result from digital image compositing, Section 2.1. These approaches work on the assumption that although digital forgeries may leave no visual clues of having been tampered with, they may, nevertheless, alter the underlying statistics of an image. Described below are four techniques for detecting various forms of digital tampering (also applicable are the two techniques described in Section 5.1). Only brief descriptions are provided; see the referenced full papers for complete details.

A.1 Re-Sampling

Consider the creation of a digital image that shows a pair of famous movie stars, rumored to have a romantic relationship, walking hand-in-hand. Such an image could be created by splicing together individual images of each movie star and overlaying the digitally created composite onto a sunset beach. In order to create a convincing match, it is often necessary to re-size, rotate, or stretch portions of the images. This process requires re-sampling the original image onto a new sampling lattice. Although this re-sampling is often imperceptible, it introduces specific correlations into the image, which, when detected, can be used as evidence of digital tampering. We have described the form of these correlations and how they can be automatically detected in any portion of an image [9]. We have shown the general effectiveness of this technique and analyzed its sensitivity and robustness to simple counter-attacks.

A.2 Double JPEG Compression

Tampering with a digital image requires the use of photo-editing software such as Adobe Photoshop.
In the making of a digital forgery, an image is loaded into the editing software, some manipulations are performed, and the image is re-saved. Since most images are stored in JPEG format (e.g., a majority of digital cameras store images directly in JPEG format), it is likely that both the original and forged images are stored in this format. Notice that in this scenario the forged image is double JPEG compressed. Double JPEG compression introduces specific artifacts not present in singly compressed images [10]. These artifacts can be used as evidence of digital tampering. Note, however, that double JPEG compression does not necessarily prove malicious tampering; for example, it is possible for a user to simply re-save a high quality JPEG image at a lower quality. The authenticity of a double JPEG compressed image should, nevertheless, be called into question.

A.3 Signal to Noise

Digital images have an inherent amount of noise, introduced either by the imaging process or by digital compression. The amount of noise is typically uniform across the entire image. If two images with different noise levels are spliced together, or if small amounts of noise are locally added to conceal traces of tampering, then variations in the signal-to-noise ratio (SNR) across the image can be used as evidence of tampering. Measuring the SNR is non-trivial in the absence of the original signal. We have shown how blind SNR estimators can be employed to locally measure noise variance [10]. Differences in the noise variance across the image can be used as evidence of digital tampering.
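The inconsistency this technique looks for can be illustrated with a small, entirely synthetic experiment (a toy sketch, not the blind SNR estimators of [10]): splice together two regions carrying different noise levels and estimate the noise variance block by block.

```python
import numpy as np

rng = np.random.default_rng(1)

# A smooth synthetic "scene" (a horizontal ramp), spliced from a low-noise
# left half and a high-noise right half -- mimicking a composite image.
clean = np.tile(np.linspace(0.0, 255.0, 128), (128, 1))
left = clean[:, :64] + rng.normal(0.0, 2.0, (128, 64))
right = clean[:, 64:] + rng.normal(0.0, 10.0, (128, 64))
image = np.hstack([left, right])

# Crude local noise estimate: the horizontal first difference cancels the
# smooth scene, leaving (mostly) noise; measure its variance in 16x16 blocks.
resid = np.diff(image, axis=1)
block_var = np.array([[resid[r:r + 16, c:c + 16].var()
                       for c in range(0, resid.shape[1] - 16, 16)]
                      for r in range(0, resid.shape[0], 16)])

# Blocks from the two halves exhibit very different noise variance --
# exactly the kind of spatial inconsistency that flags a composite.
print(block_var[:, :3].mean(), block_var[:, -3:].mean())
```

In a real forensic setting the clean signal is unknown, which is why the blind estimators cited above are needed; the toy differencing step here only works because the synthetic scene is smooth.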
A.4 Gamma Correction

In order to enhance the perceptual quality of digital images, digital cameras often introduce some form of luminance non-linearity. The parameters of this non-linearity are usually dynamically chosen and depend on the camera and scene dynamics; these parameters are, however, typically held constant within an image. The presence of several distinct non-linearities in an image is a sign of possible tampering. For example, imagine a scenario where two images are spliced together. If the images were taken with different cameras or in different lighting conditions, then it is likely that different non-linearities are present in the composite image. It is also possible that local non-linearities are applied in the composite image in order to create a convincing luminance match. We have shown that a non-linear transformation introduces specific correlations in the Fourier domain [10]. These correlations can be detected and estimated using tools from polyspectral analysis. This technique is employed to detect if an image contains multiple non-linearities, as might result from digital tampering.

References

[1] publications/cppa96.html.
[2] publications/ashcroft.v.freespeechcoalition.pdf.
[3] H. Farid. Detecting hidden messages using higher-order statistical models. In International Conference on Image Processing, Rochester, New York.
[4] H. Farid and S. Lyu. Higher-order wavelet statistics and their application to digital forensics. In IEEE Workshop on Statistical Analysis in Computer Vision, Madison, Wisconsin.
[5] S. Lyu and H. Farid. Detecting hidden messages using higher-order statistics and support vector machines. In 5th International Workshop on Information Hiding.
[6] S. Lyu and H. Farid. Steganalysis using color wavelet statistics and one-class support vector machines. In SPIE Symposium on Electronic Imaging.
[7] S. Lyu and H. Farid. How realistic is photorealistic? IEEE Transactions on Signal Processing, (in press).
[8] A.C. Popescu and H. Farid. Exposing digital forgeries by detecting duplicated image regions. Technical Report, Department of Computer Science, Dartmouth College.
[9] A.C. Popescu and H. Farid. Exposing digital forgeries by detecting traces of re-sampling. IEEE Transactions on Signal Processing, (in press).
[10] A.C. Popescu and H. Farid. Statistical tools for digital forensics. In Proceedings of the 6th Information Hiding Workshop.
[11] A.C. Popescu and H. Farid. Exposing digital forgeries in color filter array interpolated images. IEEE Transactions on Signal Processing, (in review).
Active Learning SVM for Blogs recommendation
Active Learning SVM for Blogs recommendation Xin Guan Computer Science, George Mason University Ⅰ.Introduction In the DH Now website, they try to review a big amount of blogs and articles and find the
Linear Threshold Units
Linear Threshold Units w x hx (... w n x n w We assume that each feature x j and each weight w j is a real number (we will relax this later) We will study three different algorithms for learning linear
Automatic Labeling of Lane Markings for Autonomous Vehicles
Automatic Labeling of Lane Markings for Autonomous Vehicles Jeffrey Kiske Stanford University 450 Serra Mall, Stanford, CA 94305 [email protected] 1. Introduction As autonomous vehicles become more popular,
Machine Learning and Pattern Recognition Logistic Regression
Machine Learning and Pattern Recognition Logistic Regression Course Lecturer:Amos J Storkey Institute for Adaptive and Neural Computation School of Informatics University of Edinburgh Crichton Street,
Digimarc for Images. Best Practices Guide (Chroma + Classic Edition)
Digimarc for Images Best Practices Guide (Chroma + Classic Edition) Best Practices Guide (Chroma + Classic Edition) Why should you digitally watermark your images? 3 What types of images can be digitally
Data Storage. Chapter 3. Objectives. 3-1 Data Types. Data Inside the Computer. After studying this chapter, students should be able to:
Chapter 3 Data Storage Objectives After studying this chapter, students should be able to: List five different data types used in a computer. Describe how integers are stored in a computer. Describe how
Video Camera Image Quality in Physical Electronic Security Systems
Video Camera Image Quality in Physical Electronic Security Systems Video Camera Image Quality in Physical Electronic Security Systems In the second decade of the 21st century, annual revenue for the global
PHOTO RESIZING & QUALITY MAINTENANCE
GRC 101 INTRODUCTION TO GRAPHIC COMMUNICATIONS PHOTO RESIZING & QUALITY MAINTENANCE Information Sheet No. 510 How to resize images and maintain quality If you re confused about getting your digital photos
A method of generating free-route walk-through animation using vehicle-borne video image
A method of generating free-route walk-through animation using vehicle-borne video image Jun KUMAGAI* Ryosuke SHIBASAKI* *Graduate School of Frontier Sciences, Shibasaki lab. University of Tokyo 4-6-1
Introduction. C 2009 John Wiley & Sons, Ltd
1 Introduction The purpose of this text on stereo-based imaging is twofold: it is to give students of computer vision a thorough grounding in the image analysis and projective geometry techniques relevant
