Example-based Learning for Single-Image Super-resolution


Kwang In Kim¹ and Younghee Kwon²

¹ Max-Planck-Institut für biologische Kybernetik, Spemannstr. 38, Tübingen, Germany
² Korea Advanced Institute of Science and Technology, Kusong-dong, Yusong-ku, Taejon, Korea

Abstract. This paper proposes a regression-based method for single-image super-resolution. Kernel ridge regression (KRR) is used to estimate the high-frequency details of the underlying high-resolution image. A sparse solution of KRR is found by combining the ideas of kernel matching pursuit and gradient descent, which keeps the time complexity at a moderate level. To resolve the problem of ringing artifacts that occur due to the regularization effect, the regression results are post-processed using a prior model of a generic image class. Experimental results demonstrate the effectiveness of the proposed method.

1 Introduction

Single-image super-resolution refers to the task of constructing a high-resolution enlargement of a given low-resolution image. This problem is inherently ill-posed, as there are generally multiple high-resolution images that can produce the same low-resolution image. Accordingly, prior information is required to approach this problem. Often, this prior information is available either in the explicit form of an energy functional defined on the image class [9, 10], or in the implicit form of example images, leading to example-based super-resolution [1-3, 5].

Previous example-based super-resolution algorithms can be characterized as nearest-neighbor (NN) based estimation [1-3]: during the training phase, pairs of low-resolution and corresponding high-resolution image patches (sub-windows of images) are collected. Then, in the super-resolution phase, each patch of the given low-resolution image is compared to the stored low-resolution patches, and the high-resolution patch corresponding to the nearest low-resolution patch is selected as the output. For instance, Freeman et al.
[2] posed image super-resolution as the problem of estimating missing high-frequency details: the input low-resolution image is interpolated to the desired scale (which results in a blurred image), and super-resolution is then performed by NN-based estimation of high-frequency patches based on the corresponding patches of the input low-frequency image. Although this method (and other NN-based methods) has already shown impressive performance, there is still room for improvement if one views image super-resolution as a regression problem, i.e., as finding a map f from the space of low-resolution image patches X to the space of target high-resolution patches Y. It is well known in the machine learning community that NN-based estimation suffers from overfitting: one obtains a function which explains the training data perfectly yet cannot be generalized to unknown data. In super-resolution, this can result in noisy reconstructions in complex image regions (cf. Sect. 3). Accordingly, it is reasonable to expect that NN-based methods can be improved by adopting learning algorithms with a regularization capability to avoid overfitting.

Based on the framework of Freeman et al. [2], Kim et al. posed the problem of estimating the high-frequency details as a regression problem, which is then resolved by support vector regression (SVR) [6]. Meanwhile, Ni and Nguyen utilized SVR in the frequency domain and posed super-resolution as a kernel learning problem [7]. While SVR produced a significant improvement over existing example-based methods, it has several drawbacks in building a practical system: 1. as a regularization framework, SVR tends to smooth sharp edges and to produce oscillation along the major edges, which might lead to a low reconstruction error on average but is visually implausible; 2. SVR results in a dense solution, i.e., the regression function is expanded over the whole set of training data points and is accordingly computationally demanding both in training and in testing.

The current work extends the framework of Kim et al. [6]. Kernel ridge regression (KRR) is utilized for the regression. Due to the observed optimality of ε at (nearly) 0 for SVR in our previous study,³ the only difference between SVR and KRR in the proposed setting is their loss functions (L1 and L2 loss, respectively). The L2 loss adopted by KRR is differentiable and facilitates gradient-based optimization.
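The distinction between the two loss functions can be illustrated with a short sketch (illustrative code, not part of the original system; the function names are ours):

```python
import numpy as np

def l2_loss(r):
    """Squared (L2) loss used by KRR; differentiable everywhere."""
    return r ** 2

def eps_insensitive_loss(r, eps=0.0):
    """L1 epsilon-insensitive loss used by SVR; with eps -> 0 it reduces
    to the plain absolute loss, the setting found near-optimal in the text."""
    return np.maximum(np.abs(r) - eps, 0.0)

residuals = np.array([-2.0, -0.2, 0.0, 0.2, 2.0])
print(l2_loss(residuals))                        # quadratic growth in |r|
print(eps_insensitive_loss(residuals, eps=0.5))  # zero inside the eps tube
```

With eps = 0 the SVR loss is simply |r|, so the two methods differ only in squaring the residual, which is what makes the KRR objective smooth.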
To reduce the time complexity of KRR, a sparse basis is found by combining the ideas of kernel matching pursuit (KMP) [11] and gradient descent, such that the time complexity and the quality of super-resolution can be traded off. As the regularizer of KRR is the same as that of SVR, the problem of oscillation along the major edges remains; this is resolved by exploiting a prior over image structure proposed by Tappen et al. [9].

2 Regression-based Image Super-resolution

Base System. Adopting the framework of Freeman et al. [2], for the super-resolution of a given image we estimate the corresponding missing high-frequency details based on its interpolation to the desired scale, which in this work is obtained by bicubic interpolation. Furthermore, based on the assumption that the high- and low-frequency components of an image are conditionally independent given its mid-frequency components [2], the estimation of the high-frequency components (Y) is performed based on the Laplacian of the bicubic interpolation (X). Y is then added to the bicubic interpolation to produce the super-resolved image Z.

³ In our simulation, the optimum value of ε for the ε-insensitive loss function of SVR was close to zero.
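The base-system pipeline just described can be sketched as follows; `predict_high_freq` is a placeholder for the trained regressor, and the discrete Laplacian stencil is our assumption (the paper does not specify its exact form):

```python
import numpy as np

def laplacian(img):
    """Discrete Laplacian of the bicubic-interpolated image, used as the
    mid-frequency feature band X from which input patches are drawn."""
    p = np.pad(img, 1, mode='edge')
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * img

def superresolve(bicubic_up, predict_high_freq):
    """Base system: estimate the high-frequency band Y from X and add it
    back to the bicubic interpolation to obtain the result Z."""
    X = laplacian(bicubic_up)
    Y = predict_high_freq(X)
    return bicubic_up + Y
```

A zero predictor recovers plain bicubic interpolation, which makes the decomposition Z = bicubic + Y easy to sanity-check.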
To keep the complexity of the resulting regression problem at a moderate level, a patch-based approach is taken: the values of Y at the locations N_N(Y(x, y)) are estimated based only on the values of X at the corresponding locations N_M(X(x, y)), where N_G(S(x, y)) represents a G-sized square window (patch) centered at location (x, y) of the image S. During super-resolution, X is then scanned with a small window (of size M) to produce a patch-valued regression result (of size N) for each pixel. This results in a set of candidate pixels for each location of Z (as the patches overlap with their neighbors), which are then combined to make the final estimation (details will be provided later).

The training images for the regressor are obtained by blurring and subsampling (by bicubic resampling) a set of high-resolution images to constitute a set of low- and high-resolution image pairs. The training image patch pairs are randomly sampled therein. To increase the efficiency of the training set, the data are contrast-normalized [2]: during the construction of the training set, both the input image patch and the corresponding desired patch are normalized by dividing them by the L1 norm of the input patch. For an unseen image patch, the input is again normalized before the regression and the corresponding output is inverse-normalized.

For a given set of training data points {(x_1, y_1), ..., (x_l, y_l)} ⊂ ℝ^M × ℝ^N, we minimize the following regularized cost functional:

  O({f^1, ..., f^N}) = Σ_{i=1,…,N} [ (1/2) Σ_{j=1,…,l} (f^i(x_j) − y^i_j)² + (λ/2) ‖f^i‖²_H ],  (1)

where y_j = [y^1_j, ..., y^N_j]^⊤ and H is a reproducing kernel Hilbert space (RKHS). Due to the reproducing property, the minimizer of the above functional is expanded in kernel functions:

  f^i(·) = Σ_{j=1,…,l} a^i_j k(x_j, ·), for i = 1, ..., N,  (2)

where k is the generating kernel of H, which we choose as a Gaussian kernel, k(x, y) = exp(−‖x − y‖²/σ_k).
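The training step can be sketched as follows: contrast-normalize the patch pairs by the L1 norm of the input patch, then minimize (1) under the expansion (2) with the kernel and λ shared across outputs. A minimal sketch; the function names and the small constant guarding against division by zero are our assumptions:

```python
import numpy as np

def contrast_normalize(x_patch, y_patch, eps=1e-8):
    """Divide both the input patch and the desired patch by the L1 norm
    of the input patch, as in the training-set construction above."""
    scale = np.abs(x_patch).sum() + eps
    return x_patch / scale, y_patch / scale

def gaussian_kernel(X1, X2, sigma_k):
    """k(x, y) = exp(-||x - y||^2 / sigma_k) between two point sets."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma_k)

def krr_fit(X, Y, sigma_k, lam):
    """Minimize (1) under the expansion (2): with the kernel and lambda
    shared across the N outputs, the coefficients solve (K + lam*I) A = Y,
    so one kernel matrix K serves all N scalar-valued regressors."""
    K = gaussian_kernel(X, X, sigma_k)
    return np.linalg.solve(K + lam * np.eye(len(X)), Y)

def krr_predict(X_test, X_train, A, sigma_k):
    """Evaluate the expansion (2) at new inputs."""
    return gaussian_kernel(X_test, X_train, sigma_k) @ A
```

Sharing K across the N output dimensions is exactly what makes the patch-valued regression no more expensive than the scalar-valued one, as the text argues next.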
Equation (1) is the sum of individual convex cost functionals for the scalar-valued regressors and can be minimized separately for each of them. However, by tying the regularization parameter λ and the kernel k across the regressors, we can reduce the time complexity of training and testing to that of the scalar-valued case, as the kernel matrix can then be shared: plugging (2) into (1) and noting the convexity of (1) yields

  A = (K + λI)^{−1} Y,  (3)

where Y = [y_1, ..., y_l]^⊤ and the i-th column of A constitutes the coefficient vector a^i = [a^i_1, ..., a^i_l]^⊤ of the i-th regressor.

Sparse Solution. As evident from (2) and (3), the training and testing times of KRR are O(l³) and O(M·l), respectively, which becomes prohibitive even for a relatively small number of training data points (e.g., l > 10,000). One
way of reducing the time complexity is to trade it off against the optimality of the solution by finding the minimizer of (1) only within the span of a basis set {k(b_1, ·), ..., k(b_{l_b}, ·)} (l_b ≪ l):

  f^i(·) = Σ_{j=1,…,l_b} a^i_j k(b_j, ·), for i = 1, ..., N.  (4)

In this case, the solution is obtained by

  A = (K_bx K_bx^⊤ + λ K_bb)^{−1} K_bx Y,  (5)

where [K_bx]_(i,j) = k(b_i, x_j) (of size l_b × l) and [K_bb]_(i,j) = k(b_i, b_j) (of size l_b × l_b); accordingly, the testing time complexity reduces to O(M·l_b). For a given fixed set of basis points B = {b_1, ..., b_{l_b}}, the time complexity of computing the coefficient matrix A is O(l_b³ + l·l_b·M). In general, the total training time depends on the method of finding B.

In KMP [11, 4], the basis points are selected from the training data points in an incremental way: given n − 1 basis points, the n-th basis point is chosen such that the cost functional (1) is minimized when A is optimized accordingly. The exact implementation of KMP costs O(l²) time per step. Another possibility is to exploit the differentiability of the cost functional (1) under the expansion (4), which leads to gradient-based optimization of B. Assuming that evaluating the derivative of k with respect to a basis vector takes O(M) time, evaluating the derivative of (1) with respect to B and the corresponding coefficient matrix A takes O(M·l·l_b + l·l_b²) time. Because of the increased flexibility, gradient-based methods can in general lead to a better optimization of the cost functional (1) than selection methods, as already demonstrated in the context of sparse Gaussian process (GP) regression [8]. However, due to the non-convexity of (1) with respect to B, they are susceptible to local minima, and accordingly a good heuristic is required to initialize the solution. In this paper, we use a combination of KMP and gradient descent.
The basic idea is to assume that, at the n-th step of KMP, the chosen basis point b_n plus the accumulation of basis points obtained up to the (n − 1)-th step (B_{n−1}) is a good initial point. Then, at each step of KMP, B_n can subsequently be optimized by gradient descent. A naive implementation of this idea is still very expensive, and the following simplifications are therefore adopted: 1. in the KMP step, instead of evaluating the whole training set for choosing b_n, only l_c (l_c ≪ l) candidate points are considered; 2. gradient descent of B_n and the corresponding A_(1:n,:)⁴ is performed only at every r-th KMP step; at all other KMP steps, only b_n and A_(n,:) are optimized, in which case the gradient can be evaluated in O(M·l) time.⁵

⁴ With a slight abuse of Matlab notation, A_(m:n,:) stands for the sub-matrix of A obtained by extracting the rows of A from m to n.
⁵ Similarly to [4], A_(n) can be calculated analytically at O(M·l) cost:

  A_(n) = [ K_bx(n,:) (Y − K_bx(1:n−1,:)^⊤ A_(1:n−1,:)) − λ k_nb A_(1:n−1,:) ] / ( K_bx(n,:) K_bx(n,:)^⊤ + λ ).  (6)
At the n-th step, the l_c candidate basis points for KMP are selected based on a rather cheap criterion: we use the difference between the function output obtained at the (n − 1)-th step and the estimated desired response of full KRR for each training data point, where the latter is approximated by localized KRR: for a training data point x_i, its NNs are collected in the training set and a full KRR is trained based only on these NNs. The output of this localized KRR for x_i gives the estimate of the desired response for x_i. It should be noted that these local KRRs cannot be directly applied for regression, as they might interpolate poorly on non-training data points. Once computed at the beginning, the estimated desired responses are fixed throughout the whole optimization process.

To gain an insight into the performance of the different sparse solution methods, a set of preliminary experiments was performed with KMP, gradient descent (with the basis initialized by the k-means algorithm), and the proposed combination of KMP and gradient descent, using 10,000 training data points. Figure 1 summarizes the results. Both gradient-based methods outperform KMP, while the combination with KMP provides the best performance. This could be attributed to the better initialization of the solution for the subsequent gradient descent step.

Fig. 1. Performance of the different sparse solution methods (KMP, gradient descent, and KMP + gradient descent), evaluated in terms of the cost functional (1) against the number of basis points. A fixed set of hyper-parameters was used such that the comparison can be made directly in terms of (1).

Combining Candidates. It is possible to construct a super-resolved image based only on the scalar-valued regression (i.e., N = 1). However, we propose to predict a patch-valued output such that, for each pixel, N different candidates are generated. These candidates constitute a 3-D image Z whose third dimension corresponds to the candidates.
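The patch-valued prediction gives every pixel one candidate from each overlapping output patch that covers it; forming the 3-D candidate image Z can be sketched as follows (the indexing and border conventions here are our assumptions):

```python
import numpy as np

def stack_candidates(pred_patches, H, W, n):
    """Stack overlapping n-by-n patch predictions into a 3-D candidate
    image Z of shape (H, W, n*n). pred_patches[y, x] is the n-by-n output
    patch for the window centered at (y, x); each pixel collects one
    candidate from every patch that covers it (borders are left as NaN)."""
    r = n // 2
    Z = np.full((H, W, n * n), np.nan)
    for y in range(H):
        for x in range(W):
            p = pred_patches[y, x]  # (n, n) prediction for center (y, x)
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < H and 0 <= xx < W:
                        # candidate index encodes the offset of the
                        # covering patch relative to the target pixel
                        Z[yy, xx, (dy + r) * n + (dx + r)] = p[dy + r, dx + r]
    return Z
```

Interior pixels collect all N = n² candidates (N = 25 for the 5 × 5 output patches used in the paper); these candidates are then merged as described next.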
This setting is motivated by two observations: 1. by sharing the hyper-parameters, the computational complexity of the resulting patch-valued learning reduces to that of scalar-valued learning; 2. the candidates contain information from different input image locations and are actually diverse enough that their combination can boost the performance: in our preliminary experiments, constructing an image by choosing the best and the worst
(in terms of the distance to the ground truth) candidates from each 2-D location of Z resulted in an average signal-to-noise ratio (SNR) difference of 8.24 dB. Certainly, the ground truth is not available at the actual super-resolution stage, and accordingly a way of constructing a single pixel out of the N candidates is required. One straightforward way is to construct the final estimate as a convex combination of the candidates based on a certain confidence measure. For instance, by noting that (sparse) KRR corresponds to maximum a posteriori estimation with a (sparse) GP prior [8], one could utilize the predictive variance as a basis for the selection. In preliminary experiments this resulted in an improvement over the scalar-valued regression. However, a better prediction was obtained when the confidence estimate was based not only on the input patches but also on the context of neighboring reconstructions. For this, a set of linear regressors is trained such that, for each location (x, y), they receive a patch of the candidate image Z_(N_L(x, y), :) and produce estimates of the differences ({d_1(x, y), ..., d_N(x, y)}) between the unknown desired output and each candidate. The final estimate of the pixel value at an image location (x, y) is then obtained as a convex combination of candidates in the form of a softmax:

  Y(x, y) = Σ_{i=1,…,N} w_i(x, y) Z(x, y, i),  (7)

where w_i(x, y) = exp(−d_i(x, y)/σ_C) / [ Σ_{j=1,…,N} exp(−d_j(x, y)/σ_C) ].

For the experiments in this paper, we set M = 49 (7 × 7), N = 25 (5 × 5), L = 49 (7 × 7), σ_k = 0.025, and σ_C = 0.03; these values, together with λ, were obtained based on a set of separate validation images. The number of basis points for KRR (l_b) was set to 300 as a trade-off between accuracy and time complexity. In the super-resolution experiments, the combination of candidates based on these parameters resulted in an average SNR increase of 0.43 dB over the scalar-valued regression.

Post-processing Based on Image Prior.
As demonstrated in Fig. 2.b, the result of the proposed regression-based method is significantly better than the bicubic interpolation. However, detailed visual inspection along the major edges (edges showing rapid and strong changes of pixel values) reveals ringing artifacts (oscillation along the edges). In general, regularization methods (depending on the specific class of regularizer), including KRR and SVR, tend to fit the data with a smooth function. Accordingly, at sharp changes of the function (edges, in the case of images), oscillation occurs to compensate for the resulting loss of smoothness. While this problem can be resolved indirectly by imposing less regularization in the vicinity of edges, a more direct approach is to rely on prior knowledge of the discontinuities of images. In this work, we use a modification of the natural image prior (NIP) framework proposed by Tappen et al. [9]:

  P({x}|{y}) = (1/C) Π_{(i, j∈N_S(i))} exp[ −| (x̂_i − x̂_j)/σ_N |^α ] · Π_i exp[ −( (x̂_i − y_i)/σ_R )² ],  (8)

where {y} represents the observed variables corresponding to the pixel values of Y, {x} represents the latent variables, and N_S(i) stands for the 8-connected neighbors of the pixel location i. While the second product term has the role of preventing the final solution from drifting far away from the input regression result Y, the first product term tends to smooth the image based on the costs |x̂_i − x̂_j|. The role of α (< 1) is to re-weight the costs such that the largest difference is stressed relatively less than the others, so that large changes of pixel values are penalized relatively less. Furthermore, the cost term |x̂_i − x̂_j|^α becomes piecewise concave with extreme points at N_S(i), such that if the second term were removed, the maximum probability for a pixel i would be achieved by assigning it the value of one of its neighbors, rather than some weighted average of the neighbors, which might have been the case for α > 1. Accordingly, this distribution prefers a single strong edge to a set of small edges and can be used to resolve the problem of smoothing around major edges. The optimization of (8) is performed by belief propagation (BP), similarly to [9]. To facilitate the optimization, we reuse the candidate set generated in the regression step, such that the best candidates are chosen by BP.

Fig. 2. Example of super-resolution: a. bicubic, b. regression result, c. post-processed result of b based on NIP, d. Laplacian of bicubic with major edges displayed as green pixels, and e, f. enlarged portions of a-c, from left to right.

Optimizing (8) throughout the whole image region can lead to degraded results, as it tends to flatten textured areas, especially when the contrast is low such that the contribution of the second term is small.⁶ This problem is resolved by applying the (modified) NIP only in the vicinity of major edges. Based on the observation that the input images are blurred, and accordingly very high spatial frequency components are removed, the major edges are found by thresholding each pixel of the Laplacian of the input image using the L2 and L∞ norms of the local patches encompassing it.
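For illustration, the negative log of the posterior (8), up to the constant log C, can be evaluated as follows; this is a sketch of the objective only, whereas the paper maximizes it with belief propagation over the regression candidates:

```python
import numpy as np

def nip_neg_log_prob(Xhat, Y, alpha=0.85, sigma_n=200.0, sigma_r=1.0):
    """Negative log of Eq. (8) (up to log C): a smoothness term over the
    8-connected neighborhood plus a data term tying Xhat to the regression
    result Y. Defaults follow the parameter values reported in the text."""
    smooth = 0.0
    # enumerate each 8-connected pixel pair once via four array shifts
    for dy, dx in [(0, 1), (1, 0), (1, 1), (1, -1)]:
        a = Xhat[max(dy, 0):Xhat.shape[0] - max(-dy, 0),
                 max(dx, 0):Xhat.shape[1] - max(-dx, 0)]
        b = Xhat[max(-dy, 0):Xhat.shape[0] - max(dy, 0),
                 max(-dx, 0):Xhat.shape[1] - max(dx, 0)]
        smooth += (np.abs((a - b) / sigma_n) ** alpha).sum()
    data = (((Xhat - Y) / sigma_r) ** 2).sum()
    return smooth + data
```

With alpha < 1 the smoothness cost is concave in the pairwise difference, so one large jump is cheaper than several small ones, which is exactly the edge-preferring behavior described above.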
It should be noted that a major edge is in general different from an object contour. For instance, in Fig. 2.d, the boundary between the chest of the duck and the water is not detected as a major edge, as the intensity variations across the boundary are not significant; in this case, no visible oscillation of pixel values is observed in the original regression result. The parameters α, σ_N, and σ_R are set to 0.85, 200, and 1, respectively. While the improvement in terms of SNR is less significant (on average 0.04 dB over the combined regression result), the improved visual quality at the major edges demonstrates the effectiveness of the NIP (Fig. 2).

⁶ In the original work of Tappen et al. [9], this problem does not occur, as the candidates are 2 × 2 image patches rather than individual pixels.

3 Experiments

The proposed method was evaluated on a set of high- and low-resolution image pairs (Fig. 3) which is disjoint from the training images. The desired resolution is twice the input image size along each dimension. The number of training data points is 200,000; training the sparse KRR took around a day on a 2.5 GHz PC. For comparison, several other example-based image super-resolution methods were evaluated: Freeman et al.'s NN-based method [2], Tappen et al.'s NIP [9],⁷ and Kim et al.'s SVR-based method [6] (trained on only 10,000 data points).

Fig. 3. Thumbnails of the test images; the images are indexed by numbers arranged in raster order.

Figure 4 shows examples of the super-resolution results. All the example-based super-resolution methods outperform bicubic interpolation in terms of visual plausibility. The NN-based method and the original NIP produced sharper images at the expense of introducing noise, which, even with the improved visual quality, leads to lower SNR values than bicubic interpolation. The SVR produced less noisy images; however, it generated smoothed edges and perceptually distracting ringing artifacts, which disappear with the proposed method. Disregarding the post-processing stage, we measured on average a 0.69 dB improvement in SNR for the proposed method over the SVR.
This could be attributed to the sparsity of the solution, which enabled training on a large data set, and to the

⁷ The original NIP algorithm was developed for super-resolving NN-subsampled images (not bicubic-resampled ones, which are used in the experiments with all the other methods). Accordingly, for the experiments with NIP, the low-resolution images were generated by NN subsampling. The visual quality of the super-resolution results is not significantly different from that obtained with bicubic resampling; however, the quantitative results should not be directly compared with those of the other methods.
effectiveness of the candidate combination scheme. Moreover, in comparison to SVR, the proposed method requires much less processing time: super-resolving an image requires around 25 seconds with the proposed method, as opposed to around 20 minutes with the SVR-based method. For a quantitative comparison, the SNRs of the different algorithms are plotted in Fig. 5.

Fig. 4. Results of different super-resolution algorithms on two images from Fig. 3: a-b. original, c-d. bicubic, e-f. SVR [6], g-h. NN-based method [2], i-j. NIP [9], and k-l. proposed method.

4 Conclusion

This paper approached the problem of image super-resolution from a nonlinear regression viewpoint. A combination of KMP and gradient descent is adopted to obtain a sparse KRR solution, which enables a realistic application of regression-based super-resolution. To resolve the problem of smoothing artifacts that occur due to the regularization, the NIP was adopted to post-process the regression result such that the edges are sharpened while the artifacts are suppressed. Comparisons with existing example-based image super-resolution methods demonstrated the effectiveness of the proposed method. Future work should include comparison with, and combination of, various non-example-based approaches.
Fig. 5. Performance of the different super-resolution algorithms: increase of SNR over bicubic interpolation for each test image.

Acknowledgment. The contents of this paper have greatly benefited from discussions with G. BakIr and C. Walder, and from the comments of the anonymous reviewers. The idea of using localized KRR originated from C. Walder.

References

1. Baker, S., Kanade, T.: Limits on super-resolution and how to break them. IEEE Trans. Pattern Analysis and Machine Intelligence 24(9) (2002)
2. Freeman, W.T., Jones, T.R., Pasztor, E.C.: Example-based super-resolution. IEEE Computer Graphics and Applications 22(2) (2002)
3. Hertzmann, A., Jacobs, C.E., Oliver, N., Curless, B., Salesin, D.H.: Image analogies. In: Computer Graphics (Proc. Siggraph 2001). ACM Press, NY (2001)
4. Keerthi, S.S., Chu, W.: A matching pursuit approach to sparse Gaussian process regression. In: Advances in Neural Information Processing Systems. MIT Press, Cambridge, MA (2005)
5. Kim, K.I., Franz, M.O., Schölkopf, B.: Iterative kernel principal component analysis for image modeling. IEEE Trans. Pattern Analysis and Machine Intelligence 27(9) (2005)
6. Kim, K.I., Kim, D.H., Kim, J.H.: Example-based learning for image super-resolution. In: Proc. the Third Tsinghua-KAIST Joint Workshop on Pattern Recognition (2004)
7. Ni, K., Nguyen, T.Q.: Image super-resolution using support vector regression. IEEE Trans. Image Processing 16(6) (2007)
8. Snelson, E., Ghahramani, Z.: Sparse Gaussian processes using pseudo-inputs. In: Advances in Neural Information Processing Systems. MIT Press, Cambridge, MA (2006)
9. Tappen, M.F., Russell, B.C., Freeman, W.T.: Exploiting the sparse derivative prior for super-resolution and image demosaicing. In: Proc. IEEE Workshop on Statistical and Computational Theories of Vision (2003)
10. Tschumperlé, D., Deriche, R.: Vector-valued image regularization with PDEs: a common framework for different applications. IEEE Trans. Pattern Analysis and Machine Intelligence 27(4) (2005)
11. Vincent, P., Bengio, Y.: Kernel matching pursuit. Machine Learning 48 (2002)
More informationTree based ensemble models regularization by convex optimization
Tree based ensemble models regularization by convex optimization Bertrand Cornélusse, Pierre Geurts and Louis Wehenkel Department of Electrical Engineering and Computer Science University of Liège B4000
More informationINTRODUCTION TO NEURAL NETWORKS
INTRODUCTION TO NEURAL NETWORKS Pictures are taken from http://www.cs.cmu.edu/~tom/mlbookchapterslides.html http://research.microsoft.com/~cmbishop/prml/index.htm By Nobel Khandaker Neural Networks An
More informationAn Iterative Image Registration Technique with an Application to Stereo Vision
An Iterative Image Registration Technique with an Application to Stereo Vision Bruce D. Lucas Takeo Kanade Computer Science Department CarnegieMellon University Pittsburgh, Pennsylvania 15213 Abstract
More informationRedundant Wavelet Transform Based Image Super Resolution
Redundant Wavelet Transform Based Image Super Resolution Arti Sharma, Prof. Preety D Swami Department of Electronics &Telecommunication Samrat Ashok Technological Institute Vidisha Department of Electronics
More informationAssessment. Presenter: Yupu Zhang, Guoliang Jin, Tuo Wang Computer Vision 2008 Fall
Automatic Photo Quality Assessment Presenter: Yupu Zhang, Guoliang Jin, Tuo Wang Computer Vision 2008 Fall Estimating i the photorealism of images: Distinguishing i i paintings from photographs h Florin
More information8. Linear leastsquares
8. Linear leastsquares EE13 (Fall 21112) definition examples and applications solution of a leastsquares problem, normal equations 81 Definition overdetermined linear equations if b range(a), cannot
More informationIEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 20, NO. 7, JULY 2009 1181
IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 20, NO. 7, JULY 2009 1181 The Global Kernel kmeans Algorithm for Clustering in Feature Space Grigorios F. Tzortzis and Aristidis C. Likas, Senior Member, IEEE
More informationImage SuperResolution Using Deep Convolutional Networks
1 Image SuperResolution Using Deep Convolutional Networks Chao Dong, Chen Change Loy, Member, IEEE, Kaiming He, Member, IEEE, and Xiaoou Tang, Fellow, IEEE arxiv:1501.00092v3 [cs.cv] 31 Jul 2015 Abstract
More informationSuperresolution method based on edge feature for high resolution imaging
Science Journal of Circuits, Systems and Signal Processing 2014; 3(61): 2429 Published online December 26, 2014 (http://www.sciencepublishinggroup.com/j/cssp) doi: 10.11648/j.cssp.s.2014030601.14 ISSN:
More informationSingle Depth Image Super Resolution and Denoising Using Coupled Dictionary Learning with Local Constraints and Shock Filtering
Single Depth Image Super Resolution and Denoising Using Coupled Dictionary Learning with Local Constraints and Shock Filtering Jun Xie 1, ChengChuan Chou 2, Rogerio Feris 3, MingTing Sun 1 1 University
More informationSuperResolution Through Neighbor Embedding
SuperResolution Through Neighbor Embedding Hong Chang, DitYan Yeung, Yimin Xiong Department of Computer Science Hong Kong University of Science and Technology Clear Water Bay, Kowloon, Hong Kong {hongch,
More informationCouple Dictionary Training for Image Superresolution
IEEE TRANSACTIONS ON IMAGE PROCESSING 1 Couple Dictionary Training for Image Superresolution Jianchao Yang, Student Member, IEEE, Zhaowen Wang, Student Member, IEEE, Zhe Lin, Member, IEEE, Scott Cohen,
More informationRESOLUTION IMPROVEMENT OF DIGITIZED IMAGES
Proceedings of ALGORITMY 2005 pp. 270 279 RESOLUTION IMPROVEMENT OF DIGITIZED IMAGES LIBOR VÁŠA AND VÁCLAV SKALA Abstract. A quick overview of preprocessing performed by digital still cameras is given
More informationModelling, Extraction and Description of Intrinsic Cues of High Resolution Satellite Images: Independent Component Analysis based approaches
Modelling, Extraction and Description of Intrinsic Cues of High Resolution Satellite Images: Independent Component Analysis based approaches PhD Thesis by Payam Birjandi Director: Prof. Mihai Datcu Problematic
More informationPerformance Verification of SuperResolution Image Reconstruction
Performance Verification of SuperResolution Image Reconstruction Masaki Sugie Department of Information Science, Kogakuin University Tokyo, Japan Email: em13010@ns.kogakuin.ac.jp Seiichi Gohshi Department
More informationData Mining  Evaluation of Classifiers
Data Mining  Evaluation of Classifiers Lecturer: JERZY STEFANOWSKI Institute of Computing Sciences Poznan University of Technology Poznan, Poland Lecture 4 SE Master Course 2008/2009 revised for 2010
More informationRecognizing Cats and Dogs with Shape and Appearance based Models. Group Member: Chu Wang, Landu Jiang
Recognizing Cats and Dogs with Shape and Appearance based Models Group Member: Chu Wang, Landu Jiang Abstract Recognizing cats and dogs from images is a challenging competition raised by Kaggle platform
More informationMeanShift Tracking with Random Sampling
1 MeanShift Tracking with Random Sampling Alex Po Leung, Shaogang Gong Department of Computer Science Queen Mary, University of London, London, E1 4NS Abstract In this work, boosting the efficiency of
More informationSolving Threeobjective Optimization Problems Using Evolutionary Dynamic Weighted Aggregation: Results and Analysis
Solving Threeobjective Optimization Problems Using Evolutionary Dynamic Weighted Aggregation: Results and Analysis Abstract. In this paper, evolutionary dynamic weighted aggregation methods are generalized
More informationSupport Vector Machines with Clustering for Training with Very Large Datasets
Support Vector Machines with Clustering for Training with Very Large Datasets Theodoros Evgeniou Technology Management INSEAD Bd de Constance, Fontainebleau 77300, France theodoros.evgeniou@insead.fr Massimiliano
More informationStatistical machine learning, high dimension and big data
Statistical machine learning, high dimension and big data S. Gaïffas 1 14 mars 2014 1 CMAP  Ecole Polytechnique Agenda for today Divide and Conquer principle for collaborative filtering Graphical modelling,
More informationMultidimensional Scaling for Matching. Lowresolution Face Images
Multidimensional Scaling for Matching 1 Lowresolution Face Images Soma Biswas, Member, IEEE, Kevin W. Bowyer, Fellow, IEEE, and Patrick J. Flynn, Senior Member, IEEE Abstract Face recognition performance
More informationComputational Optical Imaging  Optique Numerique.  Deconvolution 
Computational Optical Imaging  Optique Numerique  Deconvolution  Winter 2014 Ivo Ihrke Deconvolution Ivo Ihrke Outline Deconvolution Theory example 1D deconvolution Fourier method Algebraic method
More informationTwo Topics in Parametric Integration Applied to Stochastic Simulation in Industrial Engineering
Two Topics in Parametric Integration Applied to Stochastic Simulation in Industrial Engineering Department of Industrial Engineering and Management Sciences Northwestern University September 15th, 2014
More informationClassspecific Sparse Coding for Learning of Object Representations
Classspecific Sparse Coding for Learning of Object Representations Stephan Hasler, Heiko Wersing, and Edgar Körner Honda Research Institute Europe GmbH CarlLegienStr. 30, 63073 Offenbach am Main, Germany
More informationImage Estimation Algorithm for Out of Focus and Blur Images to Retrieve the Barcode Value
IJSTE  International Journal of Science Technology & Engineering Volume 1 Issue 10 April 2015 ISSN (online): 2349784X Image Estimation Algorithm for Out of Focus and Blur Images to Retrieve the Barcode
More informationBlind Deconvolution of Corrupted Barcode Signals
Blind Deconvolution of Corrupted Barcode Signals Everardo Uribe and Yifan Zhang Advisors: Ernie Esser and Yifei Lou Interdisciplinary Computational and Applied Mathematics Program University of California,
More informationCheng Soon Ong & Christfried Webers. Canberra February June 2016
c Cheng Soon Ong & Christfried Webers Research Group and College of Engineering and Computer Science Canberra February June (Many figures from C. M. Bishop, "Pattern Recognition and ") 1of 31 c Part I
More informationMachine Learning and Data Mining. Regression Problem. (adapted from) Prof. Alexander Ihler
Machine Learning and Data Mining Regression Problem (adapted from) Prof. Alexander Ihler Overview Regression Problem Definition and define parameters ϴ. Prediction using ϴ as parameters Measure the error
More informationBildverarbeitung und Mustererkennung Image Processing and Pattern Recognition
Bildverarbeitung und Mustererkennung Image Processing and Pattern Recognition 1. Image PreProcessing  Pixel Brightness Transformation  Geometric Transformation  Image Denoising 1 1. Image PreProcessing
More informationAN EPIPOLARCONSTRAINED PRIOR FOR EFFICIENT SEARCH IN MULTIVIEW SCENARIOS. Eduardo PérezPellitero
AN EPIPOLARCONSTRAINED PRIOR FOR EFFICIENT SEARCH IN MULTIVIEW SCENARIOS Ignacio Bosch Jordi Salvador Eduardo PérezPellitero Javier RuizHidalgo Technicolor R&I Hannover Image Processing Group Universitat
More informationVideo stabilization for high resolution images reconstruction
Advanced Project S9 Video stabilization for high resolution images reconstruction HIMMICH Youssef, KEROUANTON Thomas, PATIES Rémi, VILCHES José. Abstract Superresolution reconstruction produces one or
More information2.2 Creaseness operator
2.2. Creaseness operator 31 2.2 Creaseness operator Antonio López, a member of our group, has studied for his PhD dissertation the differential operators described in this section [72]. He has compared
More informationMaking Sense of the Mayhem: Machine Learning and March Madness
Making Sense of the Mayhem: Machine Learning and March Madness Alex Tran and Adam Ginzberg Stanford University atran3@stanford.edu ginzberg@stanford.edu I. Introduction III. Model The goal of our research
More informationClarify Some Issues on the Sparse Bayesian Learning for Sparse Signal Recovery
Clarify Some Issues on the Sparse Bayesian Learning for Sparse Signal Recovery Zhilin Zhang and Bhaskar D. Rao Technical Report University of California at San Diego September, Abstract Sparse Bayesian
More informationEnvironmental Remote Sensing GEOG 2021
Environmental Remote Sensing GEOG 2021 Lecture 4 Image classification 2 Purpose categorising data data abstraction / simplification data interpretation mapping for land cover mapping use land cover class
More informationELECE8104 Stochastics models and estimation, Lecture 3b: Linear Estimation in Static Systems
Stochastics models and estimation, Lecture 3b: Linear Estimation in Static Systems Minimum Mean Square Error (MMSE) MMSE estimation of Gaussian random vectors Linear MMSE estimator for arbitrarily distributed
More informationPERFORMANCE ANALYSIS OF HIGH RESOLUTION IMAGES USING INTERPOLATION TECHNIQUES IN MULTIMEDIA COMMUNICATION SYSTEM
PERFORMANCE ANALYSIS OF HIGH RESOLUTION IMAGES USING INTERPOLATION TECHNIQUES IN MULTIMEDIA COMMUNICATION SYSTEM Apurva Sinha 1, Mukesh kumar 2, A.K. Jaiswal 3, Rohini Saxena 4 Department of Electronics
More informationArtificial Neural Network and NonLinear Regression: A Comparative Study
International Journal of Scientific and Research Publications, Volume 2, Issue 12, December 2012 1 Artificial Neural Network and NonLinear Regression: A Comparative Study Shraddha Srivastava 1, *, K.C.
More informationImage Compression and Decompression using Adaptive Interpolation
Image Compression and Decompression using Adaptive Interpolation SUNILBHOOSHAN 1,SHIPRASHARMA 2 Jaypee University of Information Technology 1 Electronicsand Communication EngineeringDepartment 2 ComputerScience
More informationSTA 4273H: Statistical Machine Learning
STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Statistics! rsalakhu@utstat.toronto.edu! http://www.cs.toronto.edu/~rsalakhu/ Lecture 6 Three Approaches to Classification Construct
More informationImage SuperResolution as Sparse Representation of Raw Image Patches
Image SuperResolution as Sparse Representation of Raw Image Patches Jianchao Yang, John Wright, Yi Ma, Thomas Huang University of Illinois at UrbanaChampagin Beckman Institute and Coordinated Science
More informationAutomatic 3D Reconstruction via Object Detection and 3D Transformable Model Matching CS 269 Class Project Report
Automatic 3D Reconstruction via Object Detection and 3D Transformable Model Matching CS 69 Class Project Report Junhua Mao and Lunbo Xu University of California, Los Angeles mjhustc@ucla.edu and lunbo
More informationRemoval of Noise from MRI using Spectral Subtraction
International Journal of Electronic and Electrical Engineering. ISSN 09742174, Volume 7, Number 3 (2014), pp. 293298 International Research Publication House http://www.irphouse.com Removal of Noise
More informationResolving Objects at Higher Resolution from a Single Motionblurred Image
Resolving Objects at Higher Resolution from a Single Motionblurred Image Amit Agrawal and Ramesh Raskar Mitsubishi Electric Research Labs (MERL) 201 Broadway, Cambridge, MA, USA 02139 [agrawal,raskar]@merl.com
More informationA Novel Method for Brain MRI Superresolution by Waveletbased POCS and Adaptive Edge Zoom
A Novel Method for Brain MRI Superresolution by Waveletbased POCS and Adaptive Edge Zoom N. Hema Rajini*, R.Bhavani Department of Computer Science and Engineering, Annamalai University, Annamalai Nagar
More informationAnother Example: the Hubble Space Telescope
296 DIP Chapter and 2: Introduction and Integral Equations Motivation: Why Inverse Problems? A largescale example, coming from a collaboration with Università degli Studi di Napoli Federico II in Naples.
More informationLINEAR SYSTEMS. Consider the following example of a linear system:
LINEAR SYSTEMS Consider the following example of a linear system: Its unique solution is x +2x 2 +3x 3 = 5 x + x 3 = 3 3x + x 2 +3x 3 = 3 x =, x 2 =0, x 3 = 2 In general we want to solve n equations in
More informationSuperresolution images reconstructed from aliased images
Superresolution images reconstructed from aliased images Patrick Vandewalle, Sabine Süsstrunk and Martin Vetterli LCAV  School of Computer and Communication Sciences Ecole Polytechnique Fédérale de Lausanne
More informationLocal Gaussian Process Regression for Real Time Online Model Learning and Control
Local Gaussian Process Regression for Real Time Online Model Learning and Control Duy NguyenTuong Jan Peters Matthias Seeger Max Planck Institute for Biological Cybernetics Spemannstraße 38, 776 Tübingen,
More informationThe Role of Size Normalization on the Recognition Rate of Handwritten Numerals
The Role of Size Normalization on the Recognition Rate of Handwritten Numerals Chun Lei He, Ping Zhang, Jianxiong Dong, Ching Y. Suen, Tien D. Bui Centre for Pattern Recognition and Machine Intelligence,
More informationEfficient online learning of a nonnegative sparse autoencoder
and Machine Learning. Bruges (Belgium), 2830 April 2010, dside publi., ISBN 293030102. Efficient online learning of a nonnegative sparse autoencoder Andre Lemme, R. Felix Reinhart and Jochen J. Steil
More informationLowresolution Image Processing based on FPGA
Abstract Research Journal of Recent Sciences ISSN 22772502. Lowresolution Image Processing based on FPGA Mahshid Aghania Kiau, Islamic Azad university of Karaj, IRAN Available online at: www.isca.in,
More informationUsing Bayesian Neural Network to Solve the Inverse Problem in Electrical Impedance Tomography
Using Bayesian Neural Network to Solve the Inverse Problem in Electrical Impedance Tomography Jouko Lampinen and Aki Vehtari Laboratory of Computational Engineering Helsinki University of Technology P.O.Box
More informationRegression Using Support Vector Machines: Basic Foundations
Regression Using Support Vector Machines: Basic Foundations Technical Report December 2004 Aly Farag and Refaat M Mohamed Computer Vision and Image Processing Laboratory Electrical and Computer Engineering
More informationSemiSupervised Support Vector Machines and Application to Spam Filtering
SemiSupervised Support Vector Machines and Application to Spam Filtering Alexander Zien Empirical Inference Department, Bernhard Schölkopf Max Planck Institute for Biological Cybernetics ECML 2006 Discovery
More informationSharpening through spatial filtering
Sharpening through spatial filtering Stefano Ferrari Università degli Studi di Milano stefano.ferrari@unimi.it Elaborazione delle immagini (Image processing I) academic year 2011 2012 Sharpening The term
More informationExtracting a Good Quality Frontal Face Images from Low Resolution Video Sequences
Extracting a Good Quality Frontal Face Images from Low Resolution Video Sequences Pritam P. Patil 1, Prof. M.V. Phatak 2 1 ME.Comp, 2 Asst.Professor, MIT, Pune Abstract The face is one of the important
More informationLeastSquares Intersection of Lines
LeastSquares Intersection of Lines Johannes Traa  UIUC 2013 This writeup derives the leastsquares solution for the intersection of lines. In the general case, a set of lines will not intersect at a
More informationIntroduction to Machine Learning. Speaker: Harry Chao Advisor: J.J. Ding Date: 1/27/2011
Introduction to Machine Learning Speaker: Harry Chao Advisor: J.J. Ding Date: 1/27/2011 1 Outline 1. What is machine learning? 2. The basic of machine learning 3. Principles and effects of machine learning
More informationHT2015: SC4 Statistical Data Mining and Machine Learning
HT2015: SC4 Statistical Data Mining and Machine Learning Dino Sejdinovic Department of Statistics Oxford http://www.stats.ox.ac.uk/~sejdinov/sdmml.html Bayesian Nonparametrics Parametric vs Nonparametric
More informationAdaptive Online Gradient Descent
Adaptive Online Gradient Descent Peter L Bartlett Division of Computer Science Department of Statistics UC Berkeley Berkeley, CA 94709 bartlett@csberkeleyedu Elad Hazan IBM Almaden Research Center 650
More informationVisualization by Linear Projections as Information Retrieval
Visualization by Linear Projections as Information Retrieval Jaakko Peltonen Helsinki University of Technology, Department of Information and Computer Science, P. O. Box 5400, FI0015 TKK, Finland jaakko.peltonen@tkk.fi
More informationIntroduction to Machine Learning
Introduction to Machine Learning Prof. Alexander Ihler Prof. Max Welling icamp Tutorial July 22 What is machine learning? The ability of a machine to improve its performance based on previous results:
More informationROBUST COLOR JOINT MULTIFRAME DEMOSAICING AND SUPER RESOLUTION ALGORITHM
ROBUST COLOR JOINT MULTIFRAME DEMOSAICING AND SUPER RESOLUTION ALGORITHM Theodor Heinze HassoPlattnerInstitute for Software Systems Engineering Prof.Dr.HelmertStr. 23, 14482 Potsdam, Germany theodor.heinze@hpi.unipotsdam.de
More informationNumerical Methods For Image Restoration
Numerical Methods For Image Restoration CIRAM Alessandro Lanza University of Bologna, Italy Faculty of Engineering CIRAM Outline 1. Image Restoration as an inverse problem 2. Image degradation models:
More informationAdmin stuff. 4 Image Pyramids. Spatial Domain. Projects. Fourier domain 2/26/2008. Fourier as a change of basis
Admin stuff 4 Image Pyramids Change of office hours on Wed 4 th April Mon 3 st March 9.3.3pm (right after class) Change of time/date t of last class Currently Mon 5 th May What about Thursday 8 th May?
More informationEECS 556 Image Processing W 09. Interpolation. Interpolation techniques B splines
EECS 556 Image Processing W 09 Interpolation Interpolation techniques B splines What is image processing? Image processing is the application of 2D signal processing methods to images Image representation
More information