Accurate Blur Models vs. Image Priors in Single Image Super-Resolution
Netalee Efrat, Daniel Glasner, Alexander Apartsin, Boaz Nadler, Anat Levin
Dept. of Computer Science and Applied Math, The Weizmann Institute of Science, Israel

Abstract

Over the past decade, single image Super-Resolution (SR) research has focused on developing sophisticated image priors, leading to significant advances. Estimating and incorporating the blur model, which relates the high-res and low-res images, has received much less attention, however. In particular, the reconstruction constraint, namely that the blurred and downsampled high-res output should approximately equal the low-res input image, has been either ignored or applied with default fixed blur models. In this work, we examine the relative importance of the image prior and the reconstruction constraint. First, we show that an accurate reconstruction constraint combined with a simple gradient regularization achieves SR results almost as good as those of state-of-the-art algorithms with sophisticated image priors. Second, we study both empirically and theoretically the sensitivity of SR algorithms to the blur model assumed in the reconstruction constraint. We find that an accurate blur model is more important than a sophisticated image prior. Finally, using real camera data, we demonstrate that the default blur models of various SR algorithms may differ from the camera blur, typically leading to oversmoothed results. Our findings highlight the importance of accurately estimating camera blur in reconstructing raw low-res images acquired by an actual camera.

1. Introduction

Single image Super-Resolution (SR) is the problem of estimating a High-Resolution (HR) image x from a single input Low-Resolution (LR) image y. Mathematically, the relation between these two images is typically modeled (in vectorized form) as a linear transformation

    y = Ax + n,    (1)

where n denotes imaging noise and the matrix A encodes the processes of blurring and downsampling. Recovering x from y involves two non-trivial challenges. First, since the matrix A has far fewer rows than columns, the SR problem is under-constrained, with an infinite number of solutions. Thus, to recover a visually pleasing HR image, SR algorithms typically employ some image prior. The second challenge is to enforce the reconstruction constraint Ax ≈ y, which implies that, up to imaging noise, the recovered HR image should be consistent with the LR input. This constraint requires knowledge of the matrix A, which, in turn, relies on an accurate estimation of the camera's optical blur.

Developing sophisticated image priors has been the focus of much single image SR research in the past decade, with many significant successes [7, 20, 23, 8, 6, 9, 21, 27, 25, 5, 19, 22, 1, 13]. In contrast, the reconstruction constraint has received relatively little attention. Some algorithms do not enforce Ax ≈ y at all. Those that do often assume a predefined blur kernel. Examples include antialiasing with bicubic interpolation (Matlab's default imresize function) [8, 27], Gaussian blur [3], Gaussian blur followed by bicubic interpolation [7], simple pixel averaging [5], and sampling without any pre-smoothing [15]. A critical concern in applying such SR algorithms to real images is how well these synthetic forward models approximate real camera blur. For example, the bicubic interpolation used by many algorithms is generally not physically feasible, as it involves negative weights.
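As a quick illustration of this last point, the short sketch below (not from the paper; it assumes the Keys cubic-convolution kernel with a = -0.5, a common choice for bicubic resampling) evaluates the kernel and exposes its negative side lobes for 1 < |x| < 2.

    import numpy as np

    def keys_cubic(x, a=-0.5):
        """Keys cubic-convolution kernel; a = -0.5 is the variant commonly used for bicubic resampling."""
        x = np.abs(x)
        near = (a + 2) * x**3 - (a + 3) * x**2 + 1        # |x| <= 1
        far = a * (x**3 - 5 * x**2 + 8 * x - 4)           # 1 < |x| < 2
        return np.where(x <= 1, near, np.where(x < 2, far, 0.0))

    xs = np.linspace(-2, 2, 401)
    w = keys_cubic(xs)
    print("minimum weight: %.4f" % w.min())   # negative side lobes appear for 1 < |x| < 2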
Furthermore, in most SR algorithms, the blur kernel is not an input parameter: it is coupled to various internal components which are not easily adjusted. A few single-image SR works which do attempt to estimate or take the unknown kernel into account include [3, 24, 17, 10, 11, 12]. This state of affairs naturally raises the following questions, which are the focus of our paper: i) what is the effect of an incorrect blur model on SR algorithms? ii) what is the importance of the reconstruction constraint compared to that of the image prior?

We make several contributions towards answering these questions. First, we argue that the reconstruction constraint is at least as important as the image prior. In particular, we demonstrate that combining a simple prior, an L2 penalty on image gradients, with an accurate reconstruction constraint, provides SR results almost as good as those produced by state-of-the-art SR algorithms with sophisticated priors. Second, we empirically examine the sensitivity of several SR algorithms to the accuracy of the estimated blur kernel.
We show that incorporating an accurate estimate into these algorithms improves their output, by allowing them to take full advantage of the reconstruction constraint. In contrast, when the SR algorithms utilize an inaccurate blur kernel, the resulting images are either too blurred or contain over-sharpening artifacts. These results are consistent with results from previous work on performance bounds in multiframe SR [2, 18]. For the L2 prior case, we also present a theoretical analysis explaining these phenomena, via a frequency analysis of the kernel mismatch. Finally, we demonstrate the importance of accurately estimating camera blur when applying SR to raw images captured with a real camera. We show that the default kernels used by many algorithms are not sufficiently close to the camera blur, and produce over-smoothed results. Moreover, we show that incorporating a more accurate estimate of the camera blur improves the results. Our findings highlight the importance of modeling and estimating the camera blur, a topic which has not received much attention in the context of single-image SR.

2. Problem Setup

Let x be a HR image of size n_1 × n_2 pixels, and let y be its corresponding LR image of size (n_1/s) × (n_2/s), where s > 1 is the downsampling factor. The relation between x and y is typically expressed as

    y = (k \ast x)\downarrow_s + n    (2)

where k denotes a blur kernel (low pass filter), \downarrow_s denotes subsampling by factor s, and n is imaging noise. Further details of this model can be found in [2]. In vectorized form, we can express the above relation as a linear transformation

    y = Ax + n    (3)

where A is a matrix of size (n_1 n_2 / s^2) × (n_1 n_2), y and n are (n_1 n_2 / s^2) × 1 vectors, and x is (n_1 n_2) × 1.

The SR problem, recovering x from y, poses two challenges. First, we need to know A, that is, to have an accurate estimate of the blur k. Second, the linear system in Eq. (3) is under-constrained. To recover a visually plausible x, it is thus common practice to use some natural image prior. Perhaps the simplest prior is a penalty on the image gradients. It leads to the following optimization problem, similar to total variation [1],

    \hat{x} = \arg\min_x \|Ax - y\|^2 + \lambda \sum_i \big( \rho(g_{h,i}(x)) + \rho(g_{v,i}(x)) \big)    (4)

where g_{h,i}(x), g_{v,i}(x) denote horizontal and vertical derivatives at the i-th pixel, and \rho is a penalty function. For example, \rho(z) = z^2 gives a Gaussian (L2) prior, while \rho(z) = |z|^\alpha for \alpha ≤ 1 yields a sparse prior.

Most modern SR algorithms are not expressed explicitly as the minimizer of a functional as in Eq. (4), for example when their prior is non-parametric. Moreover, some methods are patch-based and either do not enforce the global reconstruction constraint Ax ≈ y at all, or apply it only in a separate post-processing step, with a default blur kernel.

In this paper we study, both qualitatively and quantitatively, the importance of an accurate blur model and the corresponding reconstruction constraint in SR algorithms. We start with an empirical evaluation in Sec. 3 and provide a theoretical analysis in Sec. 4. Additional results to the ones presented in the paper, and a supplementary file, are available online at our project page [4].
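To make the model of Eqs. (2)-(4) concrete for the Gaussian (L2) prior, rho(z) = z^2, here is a minimal sketch. It is an illustration rather than the authors' implementation: the periodic boundary handling, the conjugate-gradient solver and the value of lambda are illustrative choices.

    import numpy as np
    from scipy.ndimage import convolve
    from scipy.sparse.linalg import LinearOperator, cg

    def make_A(kernel, hr_shape, s):
        """Forward operator of Eqs. (2)-(3): blur with `kernel`, then subsample by factor s
        (periodic boundaries), together with its adjoint."""
        n1, n2 = hr_shape
        def A(x):
            return convolve(x.reshape(hr_shape), kernel, mode='wrap')[::s, ::s].ravel()
        def AT(y):
            up = np.zeros(hr_shape)
            up[::s, ::s] = y.reshape(n1 // s, n2 // s)
            return convolve(up, kernel[::-1, ::-1], mode='wrap').ravel()
        return A, AT

    def sr_l2(y_lr, kernel, s, lam=0.02, n_iter=200):
        """Solve Eq. (4) with rho(z) = z^2: min_x ||Ax - y||^2 + lam (||G_h x||^2 + ||G_v x||^2),
        via conjugate gradients on the normal equations."""
        n1, n2 = y_lr.shape[0] * s, y_lr.shape[1] * s
        A, AT = make_A(kernel, (n1, n2), s)
        gh = np.array([[1.0, -1.0]])      # horizontal derivative filter
        gv = gh.T                         # vertical derivative filter
        def reg(x):                       # G_h^T G_h x + G_v^T G_v x
            x = x.reshape(n1, n2)
            r = convolve(convolve(x, gh, mode='wrap'), gh[:, ::-1], mode='wrap')
            r += convolve(convolve(x, gv, mode='wrap'), gv[::-1, :], mode='wrap')
            return r.ravel()
        op = LinearOperator((n1 * n2, n1 * n2),
                            matvec=lambda x: AT(A(x)) + lam * reg(x))
        x0 = np.kron(y_lr, np.ones((s, s))).ravel()   # crude initialization
        x_hat, _ = cg(op, AT(y_lr.ravel()), x0=x0, maxiter=n_iter)
        return x_hat.reshape(n1, n2)

For the sparse prior, rho(z) = |z|^alpha with alpha ≤ 1, the same structure can be reused inside an iteratively re-weighted least squares loop, which is how the sparse variant studied below is optimized [14].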
3. Empirical Evaluation of SR Algorithms

Evaluation Metric: As our goal is to evaluate various aspects of SR, it is important to distinguish between two different SR problems: i) the computer graphics problem of image hallucination, the addition of visually plausible HR details, even if these are not actually present; and ii) the task of image reconstruction, the recovery of accurate data for some application, such as seismic measurements, or car license plates captured by a security camera. Since visual plausibility is somewhat subjective, we focus on the reconstruction task, which can be evaluated numerically against ground truth. We adopt two measures, PSNR and SSIM. While the correlation between numerical and visual measures of success is not perfect, numerical measures are useful to objectively compare various SR algorithms and their variants. To numerically evaluate an SR algorithm, we calculate its Mean Squared Error (MSE) over all test image pixels,

    MSE = \frac{1}{N} \sum_{\text{HR images } x} \sum_i \big( x_0(i) - x(i) \big)^2,    (5)

where x(i), x_0(i) are the i-th pixel in the HR image and in the algorithm output, respectively, and N is the total number of pixels. Scaling image intensity to be in the [0, 1] range, we then compute PSNR = -10 log_10(MSE), and SSIM as described in [26]. To avoid boundary artifacts, for both measures we discarded all pixels at distance up to 60 pixels from the image boundary.
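A minimal sketch of this evaluation protocol follows (the [0, 1] intensity scaling and the 60-pixel border crop are taken from the text; the function name is ours):

    import numpy as np

    def mse_psnr(x_out, x_true, border=60):
        """MSE of Eq. (5) and PSNR = -10 log10(MSE), for intensities in [0, 1],
        discarding a 60-pixel border to avoid boundary artifacts.  SSIM would be
        computed on the same cropped images, e.g. with skimage.metrics.structural_similarity."""
        a = np.asarray(x_out, dtype=np.float64)[border:-border, border:-border]
        b = np.asarray(x_true, dtype=np.float64)[border:-border, border:-border]
        mse = np.mean((a - b) ** 2)
        return mse, -10.0 * np.log10(mse)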
Experimental Setup: We performed experiments on images synthetically downsampled with a known kernel, as well as on raw LR images acquired by a real camera. First, we compared several SR algorithms on the same set of HR/LR image pairs, using 50 test images from the Berkeley Segmentation Dataset (BSDS) [16]. Each LR image was synthesized from a corresponding HR image, thus providing a ground truth solution. The algorithms we considered are: Freeman et al. [7], Kim & Kwon [13], and Yang et al. [27], whose implementations are available online, as well as Glasner et al. [8], whose code was kindly provided by the authors. In addition, we also tested the simple gradient regularization approach of Eq. (4) with both a Gaussian and a sparse prior. The sparse prior was optimized using the iterative re-weighted least squares algorithm [14], with parameters chosen using a separate training set.

We denote by k_T the true kernel used to synthesize a test LR image and by k_A the kernel assumed by an SR algorithm. To test the role of different kernels we prepared 4 sets of test images, each set blurred synthetically with a different kernel k_T: 1) antialiased bicubic interpolation, k_T = b (Matlab's imresize); 2-3) Gaussian smoothing followed by bicubic interpolation, denoted as k_T = b*g_1 and k_T = b*g_2, where g_l is a Gaussian with std of l pixels; 4) a real camera blur kernel, estimated by capturing a known calibration target with a Canon 5D Mark II camera, shown in Fig. 2. We added to the test images Gaussian zero mean i.i.d. noise with variance η². We tested both realistic and nearly noise-free images, with η = 0.01 and η = 10⁻⁴.

Our experiments study what happens when the two kernels k_T, k_A mismatch. Furthermore, if an algorithm is given an exact estimate of the kernel, can it be modified to take this into account, and what are the benefits of doing so? The algorithms of Yang, Kim, and Glasner use as default the bicubic kernel, k_A = b, whereas Freeman uses k_A = b*g_1. Unfortunately, these algorithms do not accept a kernel as one of their input parameters, and adjusting them to use a different kernel is not straightforward. For example, Glasner et al. upsample the image in gradual steps, and defining a kernel for each intermediate step is non-trivial. As a simple modification, we introduced two kernel-dependent changes to the algorithms. First, (if applicable) the LR/HR training patch pairs were prepared with the desired blur. Second, following [27], we introduced a reconstruction constraint with the desired kernel in post-processing. To this end, we found the image x_1 which minimizes

    x_1 = \arg\min_x \lambda \|Ax - y\|^2 + \|x - x_0\|^2    (6)

where x_0 is the original algorithm output¹. The weighting factor λ was selected to minimize the empirical error on a separate set of training images. Eq. (6) is not needed for the simple regularization algorithms, which already optimize the reconstruction constraint in Eq. (4).
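Below is a minimal sketch of this post-processing step (Eq. (6)); the solver, boundary handling and default λ are illustrative, not the trained values used in the paper.

    import numpy as np
    from scipy.ndimage import convolve
    from scipy.sparse.linalg import LinearOperator, cg

    def enforce_reconstruction(x0_hr, y_lr, kernel, s, lam=5.0, n_iter=100):
        """Post-processing of Eq. (6): x1 = argmin_x lam*||Ax - y||^2 + ||x - x0||^2,
        solved via CG on the normal equations (lam*A^T A + I) x = lam*A^T y + x0."""
        n1, n2 = x0_hr.shape
        def A(x):      # blur then subsample (Eq. (2))
            return convolve(x.reshape(n1, n2), kernel, mode='wrap')[::s, ::s].ravel()
        def AT(y):     # adjoint: zero-upsample, then convolve with the flipped kernel
            up = np.zeros((n1, n2))
            up[::s, ::s] = y.reshape(n1 // s, n2 // s)
            return convolve(up, kernel[::-1, ::-1], mode='wrap').ravel()
        op = LinearOperator((n1 * n2, n1 * n2),
                            matvec=lambda x: lam * AT(A(x)) + x)
        rhs = lam * AT(y_lr.ravel()) + x0_hr.ravel()
        x1, _ = cg(op, rhs, x0=x0_hr.ravel(), maxiter=n_iter)
        return x1.reshape(n1, n2)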
Evaluation Results

Kernel sensitivity: Table 1 reports numerical results, with the method of Glasner et al., on a set of 4 × 5 SR experiments. SR was applied to the 4 test sets (prepared with different kernels k_T), each time adjusting the algorithm to use a different kernel k_A in reconstruction. The fifth column shows the original authors' results, unmodified, using the default bicubic kernel.

Table 1. Kernel sensitivity of Glasner et al. [8]. Rows indicate the kernel k_T with which the image was downsampled (camera, b, b*g_1, b*g_2); columns correspond to the kernel k_A the algorithm assumes, and the rightmost column presents results of the unmodified algorithm with its default bicubic kernel. Noise level η = 0.01 on the top and η = 10⁻⁴ on the bottom. In each cell the first number is a PSNR value and the second is SSIM.

Figure 1. Sensitivity to kernel mismatch. On the diagonal, the algorithm uses k_A = k_T. On the upper right off-diagonal images, the assumed kernel is smoother than the true one, leading to over-sharpening artifacts. For the lower left, the assumed kernel is sharper than the correct one, leading to over-smoothed results.

¹ The algorithms of Yang et al. and of Glasner et al. already include a back-projection process which minimizes a constraint similar to Eq. (6). We replaced their back-projection procedures with Eq. (6), where x_0 is their output before the final back-projection stage. Note that Eq. (6) can modify an algorithm's results, even when applied with its default kernel.
Figure 2. Estimated camera blur vs. bicubic kernel (cross sections shown in the primal and Fourier domains). Despite the seemingly similar shape, the kernels' frequency content is different. The camera blur attenuates high frequencies more than the bicubic one. This highlights the importance of accurate kernel estimation.

Figure 3. Results on real images: the default implementation of SR algorithms (assuming a bicubic kernel) produces over-smooth results. Sharpness is improved by adjusting the algorithm to incorporate the camera kernel. (Panels: Yang et al. [27] original and with k_A = camera; Glasner et al. [8] original and with k_A = camera; sparse prior with k_A = b and with k_A = camera.) Images embedded at 96 PPI.

This table leads us to several observations. First, incorporating the reconstruction constraint with the true kernel improves accuracy. One can see this by comparing the fifth column (the original algorithm with its default kernel) with the diagonal entries, for which the reconstruction constraint uses k_A = k_T. Similar trends hold for all other algorithms, as shown in [4]. Moreover, SR using an incorrect kernel drastically increases reconstruction error. The diagonal entries in Table 1, in terms of both PSNR and SSIM, are larger than the off-diagonal ones. In particular, when the assumed kernel is sharper than the true kernel, the recovered image is blurred. On the other hand, when the assumed kernel is smoother than the true kernel, high frequency ringing artifacts appear, as illustrated in Fig. 1. Though the first type of error may be less visually disturbing than the second, if we are aiming to increase resolution, over-smoothing constitutes an equally undesirable effect. We provide a theoretical analysis of these observations in Sec. 4.

Real images: Given the sensitivity of SR algorithms to the assumed blur kernel, it is interesting to assess their performance on raw LR images acquired by an actual camera. To this end, we captured images with a Canon 5D Mark II camera, and estimated its blur using a known calibration target (calibration details can be found in [4]). Fig. 2 compares our estimated camera blur with the bicubic kernel. Despite their seemingly similar shapes in the primal domain, their frequency content is quite different. As seen in the Fourier domain, the camera kernel attenuates high frequencies more than the bicubic one. Our theoretical analysis (Sec. 4) predicts that using the sharper bicubic kernel will result in over-smoothed SR images. Fig. 3 indeed demonstrates this on actual camera images (for additional images, see [4]). The default implementation of various algorithms that assume a bicubic kernel indeed yields over-smoothed results, while adjusting the algorithms to incorporate the camera kernel sharpens them. This highlights the sensitivity of SR algorithms to correct modeling of the kernel. Such modeling is a promising avenue for future improvements, if SR is to be applied to real camera data.
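In the spirit of Fig. 2, the following small sketch compares the frequency responses of two kernels. The "camera" kernel here is a hypothetical Gaussian stand-in (the calibrated kernel itself is not reproduced in this text), and the bicubic kernel is the Keys cubic anti-aliasing filter for scale factor 2.

    import numpy as np

    def keys_cubic(x, a=-0.5):
        x = np.abs(x)
        return np.where(x <= 1, (a + 2) * x**3 - (a + 3) * x**2 + 1,
               np.where(x < 2, a * (x**3 - 5 * x**2 + 8 * x - 4), 0.0))

    def freq_response(kernel, n=256):
        """Magnitude of the DFT of a 1-D kernel cross section, zero-padded to n samples."""
        padded = np.zeros(n)
        padded[:len(kernel)] = kernel
        return np.abs(np.fft.rfft(padded))

    s = 2
    taps = np.arange(-2 * s, 2 * s + 1, dtype=float)
    bicubic = keys_cubic(taps / s) / s
    bicubic /= bicubic.sum()
    # Hypothetical Gaussian stand-in for a calibrated camera PSF (std in HR pixels).
    camera = np.exp(-0.5 * (np.arange(-4.0, 5.0) / 1.2) ** 2)
    camera /= camera.sum()

    Fb, Fc = freq_response(bicubic), freq_response(camera)
    freqs = np.linspace(0.0, 0.5, len(Fb))      # cycles per HR pixel
    for f in (0.10, 0.25, 0.40):
        i = int(np.argmin(np.abs(freqs - f)))
        print("f = %.2f  |K_bicubic| = %.3f  |K_camera| = %.3f" % (f, Fb[i], Fc[i]))
    # For the calibrated kernel in the paper, the camera response falls off faster than
    # the bicubic one (Fig. 2), which Sec. 4 predicts will make a bicubic assumption
    # over-smooth the SR result.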
Image prior vs. reconstruction constraint: Next, we consider the relative importance of the assumed image prior vs. the reconstruction constraint, with the correct kernel. To this end, Table 2 compares all algorithms² and their modified versions, which incorporate the reconstruction constraint, on two test sets, one blurred with k_T = b and one with k_T = b*g_1. As rows 2 and 4 show, using simple gradient regularization produces results which are roughly comparable to sophisticated SR algorithms, both in terms of PSNR and SSIM values. The first 2 columns of Fig. 4 show representative visual results, when all algorithms use the true kernel. While visual comparison is somewhat subjective and one may argue in favor of one algorithm or the other, overall the results of all algorithms (except the baseline bicubic interpolation) are not significantly different.

² For a fair comparison, where applicable, all algorithms were trained on the same separate training dataset.

Second, the influence of different image priors is much smaller than the effect of kernel mismatch. This can be seen from Table 2. Contrast the changes along a row (0.1dB-0.5dB), which correspond to different image priors, with the difference of almost 2dB between the third and fourth rows, which captures the effect of using a correct kernel instead of the default bicubic one. Visually, this is seen by contrasting the changes along the rows of Fig. 4. In the two left columns all algorithms use the true kernel and hence produce comparable results. In contrast, in the two right columns, only the sparse and L2 algorithms use the correct kernel; the others use their default one. This kernel mismatch leads to inferior results. This emphasizes that an accurate reconstruction constraint can be more important than a sophisticated prior.

Figure 4. Image prior vs. reconstruction constraint. The left columns show SR results of various algorithms (bicubic, L2, sparse, Kim et al. [13], Yang et al. [27], Glasner et al. [8]) on LR images downsampled with k_T = b. Columns on the right show results on images downsampled with k_T = b*g_1. All LR images were corrupted by noise at level η =. One can see the effect of a more sophisticated prior by comparing rows 2-3 to rows 4-6 in the left columns. Comparing the same rows in the right columns shows the effect of using the correct blur kernel instead of the default one. k_A = k_T images are marked in blue, k_A ≠ k_T in red. The effect of using the exact blur kernel is more dominant than that of the prior. Images embedded at 96 PPI.

The fact that simple regularization gives results comparable to those of sophisticated priors seems surprising at first sight. One possible explanation relates back to the reconstruction constraint. Example-based algorithms process the image locally, using small patches. While each LR/HR patch pair satisfies the reconstruction constraint, this property is not retained when the patches are fused into a global solution. Since this is not a trivial task, most algorithms only impose the global reconstruction constraint in post-processing. In contrast, simple gradient regularization methods explicitly optimize a functional which jointly accounts for the global reconstruction constraint and the prior.
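As a simple illustration of this point, the following sketch (the function is ours, not part of any of the evaluated algorithms) measures how far a given SR output is from satisfying the reconstruction constraint under an assumed kernel:

    import numpy as np
    from scipy.ndimage import convolve

    def reconstruction_residual(x_hat, y_lr, kernel, s):
        """Root-mean-square residual of the reconstruction constraint Ax ≈ y:
        blur the SR output with the assumed kernel, subsample by s, and compare
        to the LR input.  Outputs obtained by fusing local patches typically leave
        a larger residual than methods that optimize the constraint globally."""
        y_pred = convolve(x_hat, kernel, mode='wrap')[::s, ::s]
        return np.sqrt(np.mean((y_pred - y_lr) ** 2))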
Table 2. Kernel accuracy is more important than the choice of image prior. Columns: L2, sparse, Freeman [7], Kim [13], Yang [27] and Glasner [8]. Rows: k_T = b original; k_T = b with RC and k_A = b; k_T = b*g_1 original; k_T = b*g_1 with RC and k_A = b*g_1. The original implementation of each algorithm with its default kernel is compared with a modified one which accepts a kernel as a parameter, when applicable learns a dictionary with it, and also enforces the reconstruction constraint (RC). In the first two rows the image was downsampled with k_T = b, in the bottom rows with k_T = b*g_1. In each cell the first number is a PSNR value and the second is SSIM. Most algorithms use k_A = b as default; thus in the 3rd row algorithms process images with k_A ≠ k_T, and in the 4th row they adjust to the correct kernel k_A = k_T. The difference between the 3rd and 4th rows is much larger than the gap between different algorithms on the same row.

4. Theoretical Analysis

To gain further insight into the detrimental effect of kernel mismatch, let us examine SR with a simple L2 gradient regularization (Gaussian prior), as presented in Eq. (4). For notational simplicity we consider SR of 1D signals with a fixed upsampling factor s = 2. We first rewrite the problem in the frequency domain. Neglecting image boundaries, convolution k * x translates to multiplication, and sampling translates to aliasing of the HR signal. Denoting Fourier transforms by capital letters, we can express Eq. (2) as

    Y_\omega = K_{T,\omega} X_\omega + K_{T,\omega'} X_{\omega'} + N_\omega,    (7)

where the discrete HR signal has a frequency range of [0, Ω], the LR one [0, Ω/2], and ω' = ω + Ω/2 denotes the aliased replica of the frequency ω. Recall that if K_T is an ideal low pass filter (K_{T,ω} = 0 for all ω > Ω/2), the second term in Eq. (7) disappears. Otherwise, at all frequencies where K_{T,ω'} > 0, the LR observation Y_ω includes aliasing. SR algorithms that assume a kernel K_A in fact assume that Y and X are related via

    Y_\omega = K_{A,\omega} X_\omega + K_{A,\omega'} X_{\omega'} + N_\omega.    (8)

A standard approach to reconstructing the signal X from the measurements in Eq. (7) is via MAP estimation. The following lemma characterizes the resulting estimator and the relation between the estimated signal X̂ and the true signal X in the case of kernel mismatch (K_A ≠ K_T).

Lemma 1. Let Y be given by Eq. (7), with i.i.d. zero mean Gaussian noise of variance η². The MAP SR estimate X̂, assuming a Gaussian prior on X and a kernel K_A as in Eq. (8), can be written as

    \begin{pmatrix} \hat{X}_\omega \\ \hat{X}_{\omega'} \end{pmatrix} = H_{A,\omega} \begin{pmatrix} \frac{K_{T,\omega}}{K_{A,\omega}} X_\omega \\ \frac{K_{T,\omega'}}{K_{A,\omega'}} X_{\omega'} \end{pmatrix} + F_\omega N_\omega,    (9)

where F_ω is the generalized Wiener filter given in Eq. (16) below,

    H_{A,\omega} = \frac{1}{\sigma_\omega^2 |K_{A,\omega}|^2 + \sigma_{\omega'}^2 |K_{A,\omega'}|^2 + \eta^2} \begin{bmatrix} \sigma_\omega^2 |K_{A,\omega}|^2 & \sigma_\omega^2 \overline{K_{A,\omega}} K_{A,\omega'} \\ \sigma_{\omega'}^2 \overline{K_{A,\omega'}} K_{A,\omega} & \sigma_{\omega'}^2 |K_{A,\omega'}|^2 \end{bmatrix},    (10)

and σ_ω² is inversely related to G_h, the Fourier transform of the derivative operator g_h, via σ_ω² = 1 / |G_{h,ω}|².

Proof: Up to constants, the log-likelihood with K_A is

    \log P(Y|X) = -\frac{1}{2\eta^2} \sum_{\omega=0}^{\Omega/2} \big| K_{A,\omega} X_\omega + K_{A,\omega'} X_{\omega'} - Y_\omega \big|^2.    (11)

Next, the assumed gradient prior in Eq. (4) can be written as a function of the convolution of the signal with derivative filters. For a 1-D signal with only horizontal gradients,

    \log P(X) = -\tfrac{1}{2} \| g_h \ast x \|^2 + \mathrm{const}.    (12)

From Parseval's theorem, \| g_h \ast x \|^2 = \sum_\omega |G_{h,\omega}|^2 |X_\omega|^2. In the frequency domain, the prior thus becomes diagonal:

    \log P(X) = -\sum_{\omega=0}^{\Omega} \frac{|X_\omega|^2}{2 \sigma_\omega^2} + \mathrm{const}, \qquad \sigma_\omega^{-2} = |G_{h,\omega}|^2.    (13)

In other words, an L2 prior on image gradients is a diagonal Gaussian prior in the Fourier domain, whose variance at each frequency ω is the inverse power of the derivative filter at that frequency. Hence, σ_ω² decays to zero as ω increases, in agreement with well-studied properties of natural image statistics.
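In concrete terms, for the two-tap derivative g_h = [1, -1] (one simple choice of derivative filter; the paper does not commit to a specific one here), |G_h(ω)|² = 4 sin²(ω/2), so the implied prior variance can be tabulated directly:

    import numpy as np

    # Prior variance implied by an L2 penalty on gradients (Eq. (13)):
    # sigma_omega^2 = 1 / |G_h(omega)|^2 with the two-tap derivative g_h = [1, -1],
    # for which |G_h(omega)|^2 = |1 - exp(-i*omega)|^2 = 4 * sin(omega/2)^2.
    omega = np.linspace(0.05, np.pi, 8)            # avoid omega = 0, where the variance diverges
    sigma2 = 1.0 / (4.0 * np.sin(omega / 2.0) ** 2)
    for w, s2 in zip(omega, sigma2):
        print("omega = %.2f   sigma^2 = %.3f" % (w, s2))
    # The variance is large at low frequencies and decays towards high frequencies,
    # matching the spectral falloff of natural images.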
Combining Eqs. (11) and (13), the MAP estimate is

    \hat{X} = \arg\max_X \log P(X|Y) = \arg\max_X \big( \log P(Y|X) + \log P(X) \big)
            = \arg\min_X \sum_{\omega=0}^{\Omega/2} \frac{\big| K_{A,\omega} X_\omega + K_{A,\omega'} X_{\omega'} - Y_\omega \big|^2}{\eta^2} + \frac{|X_\omega|^2}{\sigma_\omega^2} + \frac{|X_{\omega'}|^2}{\sigma_{\omega'}^2}.    (14)

A short calculation shows that the solution X̂ is given by

    \begin{pmatrix} \hat{X}_\omega \\ \hat{X}_{\omega'} \end{pmatrix} = F_\omega Y_\omega,    (15)
where F_ω is a generalized Wiener filter

    F_\omega = \frac{1}{\sigma_\omega^2 |K_{A,\omega}|^2 + \sigma_{\omega'}^2 |K_{A,\omega'}|^2 + \eta^2} \begin{pmatrix} \sigma_\omega^2 \overline{K_{A,\omega}} \\ \sigma_{\omega'}^2 \overline{K_{A,\omega'}} \end{pmatrix}.    (16)

When K_T = K_A, combining the above with Eq. (7) yields Eq. (9). The case K_T ≠ K_A follows by writing Eq. (7) as

    Y_\omega = K_{A,\omega} \left( \frac{K_{T,\omega}}{K_{A,\omega}} X_\omega \right) + K_{A,\omega'} \left( \frac{K_{T,\omega'}}{K_{A,\omega'}} X_{\omega'} \right) + N_\omega.

We now study the implications of this lemma. First, note that when K_T = K_A, the recovered X̂_ω, X̂_ω' are shrunk versions of the input Y_ω. For instance, consider an ideal low pass filter with K_{A,ω'} = 0, with noise-free data, η = 0. We find that at the high frequency ω', which did not contribute to Y_ω (since K_{A,ω'} = 0), X̂_ω' = 0. At the low frequency, meanwhile, X̂_ω is exactly the true value, X̂_ω = Y_ω / K_{A,ω} = X_ω. In the presence of noise with variance η², at frequencies where σ_ω² ≪ η², the elements of F_ω are close to zero. This makes sense because at these frequencies the measurement Y_ω is dominated by noise, and the actual signal X_ω cannot be accurately recovered. We should instead estimate it as some quantity significantly shrunk towards zero. If the filter includes aliasing, i.e. K_{A,ω'} > 0, the recovered signal includes aliasing as well. That is, some of the high frequency component X_ω' contributes to the low frequency reconstruction X̂_ω, and vice versa.

Kernel Mismatch: Next, we study the implication of the lemma when the signal was blurred with K_T while the MAP estimator assumed a different kernel K_A ≠ K_T. According to Eq. (9), H_A now acts on the signal (K_{T,ω}/K_{A,ω}) X_ω instead of on the true X_ω. At high frequencies, where the signal variance is lower than the noise level, H_{A,ω} ≈ 0, and kernel mismatch has little effect on the output. In contrast, at other frequencies, an incorrect kernel may have strong detrimental effects. As in deconvolution, if the true K_T blurs more than the assumed K_A, then typically K_{T,ω} < K_{A,ω}, and thus X_ω is more attenuated. If K_T is sharper than K_A, then K_{T,ω} > K_{A,ω}, and X_ω is amplified. In the first case K_{T,ω}/K_{A,ω} acts as a blurring filter and in the second as a sharpening filter (see Fig. 1). As an illustration of this effect, Fig. 5 simulates two Gaussian filters. If

    K_{T,\omega} = \exp\!\left( -\tfrac{1}{2} \beta_T \omega^2 \right), \qquad K_{A,\omega} = \exp\!\left( -\tfrac{1}{2} \beta_A \omega^2 \right),    (17)

then

    \frac{K_{T,\omega}}{K_{A,\omega}} = \exp\!\left( -\tfrac{1}{2} (\beta_T - \beta_A) \omega^2 \right).    (18)

Figure 5. Effect of an inaccurate kernel. β_T > β_A yields a Gaussian blur filter (left); β_T < β_A gives a sharpening filter (right).

For β_T > β_A we obtain a Gaussian blur filter, and for β_T < β_A a sharpening filter (the exponent is positive). Assuming for simplicity K_{A,ω'} = 0, the red curve of Fig. 5 demonstrates the expected power scaling of each filter, namely E[ |X̂_ω|² / |X_ω|² ]. For β_T > β_A the amplitude is decreased (red curve below 1), while for the β_T < β_A case the amplitude is magnified at the middle band of frequencies. High frequencies whose expected power is below the noise variance are not amplified by the reconstruction filter. The off-diagonal panels in Fig. 1 illustrate these phenomena. An assumed wider kernel results in ringing, and a narrower kernel in over-blurring. As mentioned earlier, for practical SR, both effects are undesirable.
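A 1-D numerical sketch of this effect follows (illustrative parameters; it uses the Gaussian-kernel setting of Eqs. (17)-(18), the two-tap derivative prior for σ_ω², and, as in the text, ignores aliasing by taking K_{A,ω'} = 0):

    import numpy as np

    # Frequency-domain sketch of kernel mismatch (Gaussian kernels, Eqs. (17)-(18)).
    # Aliasing is ignored (K at the aliased frequency taken as 0), so the MAP/Wiener
    # estimator of Eq. (16) reduces to a scalar filter per frequency.
    omega = np.linspace(0.05, np.pi, 512)
    sigma2 = 1.0 / (4.0 * np.sin(omega / 2.0) ** 2)   # prior variance of Eq. (13), g_h = [1, -1]
    eta2 = 1e-4                                       # noise variance (illustrative)

    beta_T, beta_A = 1.0, 0.5                         # true kernel broader than the assumed one
    K_T = np.exp(-0.5 * beta_T * omega ** 2)
    K_A = np.exp(-0.5 * beta_A * omega ** 2)

    # Wiener/MAP filter built with the *assumed* kernel (scalar form of Eq. (16)),
    # applied to data generated with the *true* kernel: X_hat = F * (K_T * X + N).
    F = sigma2 * K_A / (sigma2 * K_A ** 2 + eta2)
    gain = F * K_T                                    # expected amplitude scaling of X_hat vs X

    idx = int(np.searchsorted(omega, np.pi / 2))
    print("max gain: %.2f   gain at omega = pi/2: %.2f" % (gain.max(), gain[idx]))
    # With beta_T > beta_A the gain stays below 1 (over-smoothing); swapping beta_T and
    # beta_A pushes the gain above 1 in a mid-frequency band (over-sharpening / ringing),
    # as illustrated in Fig. 5.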
Kernel Uncertainty: In practice, SR algorithms may be applied to real images whose blur kernel is not precisely known. One approach, taken by several SR algorithms, is to ignore the true blur kernel and utilize some default, such as bicubic. As noted previously, when the assumed default kernel is narrower than the true kernel, as occurs with our actual camera (see Fig. 2), this results in somewhat smoother SR images, but without ringing artifacts. Hence, the resulting images are visually acceptable. A second approach, taken by various SR algorithms to cope with imprecise knowledge of the true kernel, is to give more weight to the image prior and reduce the weight of the reconstruction constraint (e.g., a smaller λ in Eq. (6)). This makes such algorithms less sensitive to kernel mismatch, but also reduces the quality of their results.

We now derive a principled estimation strategy to take kernel uncertainty into account. As expected, this comes at the price of reduced accuracy. Let us denote by q(K) the density of possible kernels K. We denote by μ_K, Σ_K the 2 × 1 mean vector and 2 × 2 covariance matrix of (K_ω, K_ω') under the distribution q(K). We see below that these first and second order moments serve as sufficient statistics. Extending Eq. (14), we seek the X̂ minimizing

    \int q(K) \sum_\omega \left[ \frac{\big| K_\omega X_\omega + K_{\omega'} X_{\omega'} - Y_\omega \big|^2}{\eta^2} + \frac{|X_\omega|^2}{\sigma_\omega^2} + \frac{|X_{\omega'}|^2}{\sigma_{\omega'}^2} \right] dK.    (19)

A short calculation shows that the minimum is obtained by

    \begin{pmatrix} \hat{X}_\omega \\ \hat{X}_{\omega'} \end{pmatrix} = \left( \mu_K \mu_K^T + \Sigma_K + \eta^2 \begin{bmatrix} \sigma_\omega^{-2} & 0 \\ 0 & \sigma_{\omega'}^{-2} \end{bmatrix} \right)^{-1} \mu_K \, Y_\omega.    (20)

Comparing the above formula to Eq. (16): if q(K) had no uncertainty, then μ_K = (K_ω, K_ω')^T and Σ_K is the zero matrix, and Eq. (20) reduces to the Wiener filter of Eq. (16). The effect of kernel uncertainty is similar to that of noise: the kernel covariance Σ_K adds to the noise covariance part in Eq. (20), and the resulting estimator has a larger shrinkage factor. Hence the accuracy of the estimate is reduced. In conclusion, one can derive SR algorithms which are more robust to uncertainty in the kernel, but this comes at the price of estimation quality. If an accurate model of k can be found (a smaller-norm uncertainty covariance Σ_K), the reconstruction accuracy is improved.
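A per-frequency sketch of the estimator in Eq. (20), with made-up numbers, shows how the kernel covariance Σ_K enlarges the shrinkage (with Σ_K = 0 it coincides with the Wiener filter of Eq. (16)):

    import numpy as np

    def map_estimate_uncertain_kernel(Y, mu_K, Sigma_K, sigma2, eta2):
        """Per-frequency MAP estimate of Eq. (20):
        (X_w, X_w') = (mu mu^T + Sigma_K + eta2 * diag(1/sigma2))^{-1} mu * Y,
        where mu_K is the 2-vector mean of (K_w, K_w') under q(K), Sigma_K its 2x2
        covariance, sigma2 the 2-vector of prior variances and eta2 the noise variance."""
        mu = np.asarray(mu_K, dtype=float).reshape(2, 1)
        M = mu @ mu.T + np.asarray(Sigma_K, dtype=float) \
            + eta2 * np.diag(1.0 / np.asarray(sigma2, dtype=float))
        return np.linalg.solve(M, mu * Y).ravel()

    # Illustrative numbers: growing Sigma_K shrinks the estimate, like extra noise.
    Y = 0.8
    mu_K, sigma2, eta2 = [0.9, 0.1], [1.0, 0.2], 1e-3
    for c in (0.0, 0.05, 0.2):
        x_hat = map_estimate_uncertain_kernel(Y, mu_K, c * np.eye(2), sigma2, eta2)
        print("Sigma_K = %.2f * I  ->  X_hat =" % c, np.round(x_hat, 3))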
5. Discussion

In this paper, we examined the effect of two components of SR: the natural image prior and the reconstruction constraint. We showed that an accurate blur model and its corresponding reconstruction constraint are crucial to the success of SR algorithms. The influence of an accurate estimate of the blur kernel is significantly larger than that of a sophisticated prior. These observations suggest that to advance SR algorithms, in particular for applications to real images, future research should place more emphasis on the recovery of real camera blur. While existing blind motion deblurring methods can be adapted to this task, we note that SR kernel recovery is a simpler task, since the blur is a property of the sensor and is fixed for all images captured by the same camera under similar imaging conditions. As described in [4], this camera blur can be calibrated using two images of the same calibration target.

Acknowledgements: We thank Intel, ERC and BSF for funding this research.

References

[1] H. Aly and E. Dubois. Image up-sampling using total-variation regularization with a new observation model. IEEE Trans. Img. Proc.
[2] S. Baker and T. Kanade. Limits on super-resolution and how to break them. PAMI.
[3] I. Bégin and F. Ferrie. Blind super-resolution using a learning-based approach. In ICPR.
[4] N. Efrat, D. Glasner, A. Apartsin, B. Nadler, and A. Levin. Accurate blur models vs. image priors in single image super-resolution: supplementary results. weizmann.ac.il/levina/papers/supres.
[5] R. Fattal. Image upsampling via imposed edge statistics. SIGGRAPH.
[6] G. Freedman and R. Fattal. Image and video upscaling from local self-examples. ACM Trans. Graph.
[7] W. Freeman and C. Liu. Markov random fields for super-resolution and texture synthesis. Advances in Markov Random Fields for Vision and Image Processing.
[8] D. Glasner, S. Bagon, and M. Irani. Super-resolution from a single image. In ICCV.
[9] Y. HaCohen, R. Fattal, and D. Lischinski. Image upsampling via texture hallucination. In ICCP.
[10] H. He and W. Siu. Single image super-resolution using Gaussian process regression. In CVPR.
[11] Y. He, K. Yap, L. Chen, and L. Chau. A soft MAP framework for blind super-resolution image reconstruction. Image Vision Comput.
[12] N. Joshi, R. Szeliski, and D. Kriegman. PSF estimation using sharp edge prediction. In CVPR.
[13] K. Kim and Y. Kwon. Single-image super-resolution using sparse regression and natural image prior. PAMI.
[14] A. Levin, R. Fergus, F. Durand, and W. Freeman. Image and depth from a conventional camera with a coded aperture. SIGGRAPH.
[15] S. Mallat and G. Yu. Super-resolution with sparse mixing estimators. IEEE Trans. Img. Proc.
[16] D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In ICCV.
[17] M. Protter, M. Elad, H. Takeda, and P. Milanfar. Generalizing the nonlocal-means to super-resolution reconstruction. IEEE Trans. Img. Proc.
[18] D. Robinson and P. Milanfar. Statistical performance analysis of super-resolution. IEEE Trans. Img. Proc., 15(6).
[19] J. Sun, Z. Xu, and H. Shum. Image super-resolution using gradient profile prior. In CVPR.
[20] J. Sun, N. Zheng, H. Tao, and H. Shum. Image hallucination with primal sketch priors. In CVPR.
[21] L. Sun and J. Hays. Super-resolution from internet-scale scene matching. In ICCP.
[22] Y. Tai, S. Liu, M. Brown, and S. Lin. Super resolution using edge prior and single image detail synthesis. In CVPR.
[23] M. Tappen, B. Russell, and W. Freeman. Efficient graphical models for processing images. In CVPR.
[24] Q. Wang, X. Tang, and H. Shum. Patch based blind image super resolution. In ICCV.
[25] S. Wang, L. Zhang, Y. Liang, and Q. Pan. Semi-coupled dictionary learning with applications in image super-resolution and photo-sketch synthesis. In CVPR.
[26] Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Trans. Img. Proc.
[27] J. Yang, J. Wright, T. Huang, and Y. Ma. Image super-resolution via sparse representation. IEEE Trans. Img. Proc., 2010.
