Potential of face area data for predicting sharpness of natural images
Mikko Nuutinen a, Olli Orenius b, Timo Säämänen b, Pirkko Oittinen a
a Dept. of Media Technology, Aalto University School of Science and Technology, Espoo, Finland
b Dept. of Psychology, University of Helsinki, Helsinki, Finland

ABSTRACT

Face detection techniques are used for many different applications. For example, face detection is a basic component in many consumer still and video cameras. In this study, we compare the performance of face area data and freely selected local area data for predicting the sharpness of photographs. The local values were collected systematically from images, and for the analyses we selected only the values with the highest performance. The objective sharpness metric was based on the statistics of the wavelet coefficients for the selected areas. We used three image contents whose subjective sharpness values had been measured. The image contents were captured by 13 cameras, and the images were evaluated by 25 subjects. The quality of the cameras ranged from low-end mobile phone cameras to low-end compact cameras. The image contents simulated typical photos that consumers take with their mobile phones. The face area sizes on the images were approximately 0.4, 1.0 or 4.0 %. Based on the results, the face area data proved to be valuable for measuring the sharpness of the photographs if the face size was large enough. When the face area size was 1.0 or 4.0 %, the performance of the measured sharpness values was equal to or better than that of the sharpness values measured from the best local areas. When the face area was too small (0.4 %), the performance was low compared with the best local areas.

Keywords: sharpness, image quality, face area, digital cameras

1. INTRODUCTION

The reproduction of human faces is an important quality factor for photographs [1,2].
Face detection and recognition is a wide and active area of research [3-5] and a sub-field of image analysis and pattern recognition. Face detection methods can be used for numerous applications. For example, face detection has been used in many consumer still and video cameras. Face area data can be used for tuning exposure, focus or color [6-9]. Face area data can also be utilized for image enhancement processing in a camera or post-processing in a computer [10,11].

In this study, the value of face area data for image quality calculation is evaluated. We compared the performance of face area data and the best local area data for predicting the subjective sharpness of photographs.

The attributes used to characterize camera image quality have often been measured using specific test targets under laboratory conditions. Differences between the lighting conditions of a laboratory and natural scenes can result in different image processing settings. The level of sharpening and noise removal, for example, can change when the illumination level changes. Of course, different exposure times and color channel gains also have an effect on image quality. Problems related to lighting conditions can be avoided if quality is measured directly from natural photographs. However, difficulties arise from the unreliability of computational image quality methods when applied to images produced by digital cameras. For example, many no-reference sharpness metrics interpret graininess or noise as edges or some other image structure because they do not recognize image content or cannot find the appropriate local areas from which the metric should be calculated.

The reflectance and shape characteristics of human faces are fairly well known. The colors in faces fall within a specific area in the color space. In addition, the global statistics can be defined analytically. The idea behind this study is the analogy between test target data and face area data in photographs.
Because of their known properties, test target images can be used to characterize the function of cameras. In addition, the sampling patches are easy to align with the known shapes of the targets. The same approach is also applicable to the face areas in photographs: we can think of the face area in a photograph as a natural test target because its characteristics are known. In this study, we compare the objective sharpness values measured from face area data to values measured from the best local areas. The local values were collected systematically from images, and for the analyses we selected only the values with the highest performance. The objective sharpness metric was based on the statistics of the wavelet coefficients of the selected area. The face areas were selected manually. The performance of the face detection algorithms was
disregarded because it was outside the scope of the study; we wanted to evaluate the information content of face data for sharpness measurements, not the performance of face detector algorithms.

The first research question is: How does the performance of face area data for predicting sharpness compare to the performance of freely selected local area data? The second research question is: How does face area size affect performance?

This paper is organized as follows. After this introduction covering the background and motivation of the work, Section 2 presents the test images, the subjective data and the sharpness metric. Section 3 presents the results, and Section 4 offers conclusions and suggestions for future work.

2. METHODS

2.1 Test images

The test materials used in this study were prepared from the three views (image contents) shown in Figure 1. The images were captured by 13 cameras (Table 1). One camera was a digital still camera (DSC), and the other 12 were mobile device cameras. The pixel count of the cameras was between 3 and 12 Mpix. Eleven cameras were equipped with an LED or powerful xenon flash. Each view was captured several times, and only the best image was selected for each camera. The only allowable limiting factor of a shot was the performance of the camera: the image had to be in focus, and both the white balance and brightness had to be accurately adjusted. The views were also captured by a Reference camera. The images from the Reference camera were used as a high-quality reference for the observers in the subjective tests.

Table 1. Camera types used in the study.

Camera      Pixel count   Type     Flash type
Camera 1                  Mobile
Camera 2                  Mobile
Camera 3                  Mobile   Dual LED
Camera 4                  Mobile   LED
Camera 5                  Mobile   Dual LED
Camera 6                  Mobile   Dual LED
Camera 7                  Mobile   Dual LED
Camera 8                  Mobile   LED
Camera 9                  Mobile   LED
Camera 10                 Mobile   LED
Camera 11                 Mobile   Xenon
Camera 12                 Mobile   Xenon
Camera 13                 DSC      Xenon

Example images of the contents are shown in Figure 1. Content 1 simulates a bar or restaurant photo; its illuminance was very low (2 lux).
The exposure was based on the camera flash or LED, and for the analyses of Content 1 we therefore used only the images captured by cameras with a flash or LED; the images of Camera 1 and Camera 2 were excluded from the analyses. Content 2 simulates a living room photo, and its illuminance was 100 lux. Content 3 simulates a tourist photo; it was captured outdoors, and its illuminance was high (15 klux). The relevant variable for this study was the face size in relation to the image size. For Content 1, the face size was 4 %; for Content 2, 1 %; and for Content 3, 0.4 %.
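For intuition, these face-area percentages can be converted into pixel dimensions at the 1600 x 1200 size used in the subjective test. A short sketch (the square-face approximation is our own simplification, not from the paper):

```python
import math

def face_side_px(percent, width=1600, height=1200):
    """Approximate side length (in pixels) of a square region covering
    `percent` % of a width x height image."""
    area = width * height * percent / 100.0
    return round(math.sqrt(area))

# Face sizes used in the study:
# Content 1: 4 %, Content 2: 1 %, Content 3: 0.4 %
sides = {p: face_side_px(p) for p in (4, 1, 0.4)}
```

At 0.4 % the face spans only about 88 x 88 pixels, which is smaller than the 125 x 125 pixel local blocks used later for comparison.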
Figure 1. Image contents used in the study (Content 1, Content 2 and Content 3)

2.2 Subjective data

In this study, we utilized sharpness data from a large-scale subjective test set. For the subjective test, the images were scaled to a 1600 x 1200 pixel size. Black borders were added to the images to match the image file resolution to the display resolution (1920 x 1200). The test setup included two Eizo ColorEdge CG241W displays and one small display. The test image was shown on one display, and the reference image was shown on the other. The input of the observer was shown on the small display. The observers evaluated the overall quality value and the attribute values of an image; in this study, we utilized only the sharpness attribute data. The images from each content were shown one at a time, and the order of images and contents was randomized for each observer. The viewing distance was about 80 cm, and the ambient illuminance was 20 lux. University students were used as observers (n=25). They were all naïve regarding image quality.

Figure 2 shows the subjective sharpness values for the different contents sorted in ascending order. The 95 % confidence intervals (vertical lines) and a sharpness value of 50 (horizontal lines) on a scale of 0-100 were added to the figures. The content-specific scales were easier to evaluate with the aid of the added horizontal lines. Based on Figure 2, there are clear differences in the scales among the contents. Content 1 had low illuminance, and the sharpness scale was wide and increased smoothly. The high-end cameras with powerful xenon flashes had high sharpness values, and the low-end cameras with low-power LED flashes had low sharpness values. The distribution shapes for Content 2 and Content 3 differed. Content 2 had been captured under low indoor illuminance, and Content 3 had been captured under high outdoor illuminance.
For Content 2, there is a group of unsharp images without statistical differences between the values, and a group of two or three sharper images. For Content 3, there are a few groups of sharp images without statistical differences between the values, and a single unsharp image.

Figure 2. Subjective sharpness values on the vertical axis (scale 0-100) with 95 % confidence intervals, sorted in ascending order on the horizontal axis, for Content 1 with 11 cameras and for Content 2 and Content 3 with 13 cameras
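The per-image means and 95 % confidence intervals shown in Figure 2 can be computed from the raw observer ratings. A minimal sketch, assuming the common normal approximation (1.96 · s/√n); the paper does not state which interval formula was used:

```python
import math

def mean_ci95(ratings):
    """Mean rating and 95 % confidence half-width for one image,
    using the normal approximation 1.96 * s / sqrt(n)."""
    n = len(ratings)
    mean = sum(ratings) / n
    # Sample standard deviation of the n observer ratings.
    s = math.sqrt(sum((r - mean) ** 2 for r in ratings) / (n - 1))
    return mean, 1.96 * s / math.sqrt(n)
```

With n=25 observers per image, the half-width shrinks with the spread of the ratings, which is why the unsharp groups in Figure 2 overlap while the extremes separate.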
2.3 Sharpness metric

The objective sharpness metric, S, is calculated by Equations (1) and (2). Equation (1) calculates the standard deviation σ_k of the first-scale wavelet coefficient energy for the sub-band of direction k. Equation (2) combines the standard deviation values of the different directions into the overall sharpness. Wavelet decomposition was performed using the Matlab Wavelet toolbox and its Haar wavelets. The standard deviation was calculated within a segment of size M x N pixels, where μ_k is the mean wavelet coefficient energy, and k is the vertical, horizontal or diagonal (v, h, d) direction. A segment was a cropped face area or a freely selected pixel block.

\sigma_k = \sqrt{\frac{1}{MN-1}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(w_{ijk}-\mu_k\right)^2}    (1)

S = \frac{\sigma_v + \sigma_h + \sigma_d}{3}    (2)

When we measured the face area performance, the face areas were cropped manually from the images. In a pretest, we tested a face detection algorithm for face area detection and cropping. The performance of the algorithm depended on the image quality and image content: the algorithm found the face areas but also made false-positive detections. Because the goal of this study was to evaluate the face area data for predicting image sharpness, we used manual cropping, so the face area blocks were equal regardless of image quality or image content. Figure 3 shows the cropped face area images for Content 3.

Figure 3. Face areas cropped from the images of Content 3 for Cameras 1-13

Figure 4 shows the first-scale approximation coefficients and wavelet coefficients in the vertical, horizontal and diagonal directions for the face area of an image from Content 3. The wavelet coefficients have been scaled to [-50, 50] for visualization purposes. It can be seen from Figure 4 that the mouth, eyebrow and nose areas have high coefficient values in the vertical and horizontal directions. It can be expected that their energy contributes to the value of the sharpness metric.
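A pure-Python sketch of the metric (the paper used the Matlab Wavelet toolbox; treating each squared first-scale Haar detail coefficient as its "energy" is our assumption, since the paper does not define energy explicitly):

```python
import math

def haar_first_scale(block):
    """First-scale Haar detail coefficients of a 2-D list of gray values.
    Returns (vertical, horizontal, diagonal) coefficient lists computed
    from non-overlapping 2x2 neighbourhoods."""
    v, h, d = [], [], []
    for i in range(0, len(block) - 1, 2):
        for j in range(0, len(block[0]) - 1, 2):
            a, b = block[i][j], block[i][j + 1]
            c, e = block[i + 1][j], block[i + 1][j + 1]
            v.append((a + b - c - e) / 2.0)  # difference between rows
            h.append((a - b + c - e) / 2.0)  # difference between columns
            d.append((a - b - c + e) / 2.0)  # diagonal difference
    return v, h, d

def sharpness(block):
    """S per Equations (1) and (2): the mean over the three directions of
    the sample standard deviation of the coefficient energies.
    Energy = squared coefficient is an assumption on our part."""
    sigmas = []
    for coeffs in haar_first_scale(block):
        energies = [w * w for w in coeffs]
        mu = sum(energies) / len(energies)
        var = sum((w - mu) ** 2 for w in energies) / (len(energies) - 1)
        sigmas.append(math.sqrt(var))
    return sum(sigmas) / 3.0
```

A flat segment yields S = 0, while a segment whose edge energy varies across the segment yields S > 0.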
Figure 4. Wavelet decomposition for a Content 3 image

The local area blocks we compared to the face areas were selected by searching for the highest correlation values between Equation (2) and the subjective data. Equation (2) was applied to the corresponding square block areas of the images. The block size was constant (125 x 125 pixels), and the image size was 1600 x 1200 pixels in all cases. The candidate blocks included structure energy such as edges and textures. The candidate blocks for Content 1 are shown in Figure 5 as an example. Finally, the three best local areas (blocks) for each content were selected for the subsequent analyses. Figure 6 shows the three selected blocks and the face areas for the different contents. The selected blocks include both coarse textures and strong edges.

Figure 5. Candidate blocks for Content 1

Figure 6. Local block and face areas used for sharpness calculations (Content 1, Content 2 and Content 3)
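The block-selection step can be sketched as ranking candidate blocks by the Pearson correlation between their metric values (one per camera) and the subjective scores. The block identifiers and data layout below are illustrative, not from the paper:

```python
import math

def pearson(x, y):
    """Pearson linear correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def best_blocks(metric_per_block, subjective, k=3):
    """Return the k candidate block ids whose metric values (one per
    camera, same order as `subjective`) correlate best with the
    subjective sharpness scores."""
    return sorted(metric_per_block,
                  key=lambda b: pearson(metric_per_block[b], subjective),
                  reverse=True)[:k]
```

In the study, Equation (2) is evaluated on every 125 x 125 candidate block of every camera's image, and the three blocks whose values track the subjective scale best are kept.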
3. RESULTS

Table 2 shows the Pearson linear correlation coefficients with subjective sharpness for the face area and the three best local areas. The Pearson linear correlation coefficients are also shown for the global area; the global area metric was calculated using all the pixel values of an image. Based on the results, the face area performance for Content 1 and Content 2 was equal to or higher than the performance of the local or global areas. The face area performance for Content 3 was low compared to the best local area. The performance of the global area was high for Content 1, moderate for Content 2 and low for Content 3.

Table 2. Pearson linear correlation coefficients for the face areas, the three best local areas and the global area

                    Face area   Local area 1   Local area 2   Local area 3   Global
Content 1
Content 2
Content 3 (left)
Content 3 (right)

The performance of the face area for Content 1 and Content 2 was notably higher than for Content 3. The reason for this result could be the larger face area: larger face areas carry more information related to sharpness than smaller ones. The size of the face area was 4.0 % for Content 1 and 1.0 % for Content 2, but only 0.4 % for Content 3. However, based on the data shown in Figure 2, Content 3 was difficult for the subjective observers, lowering its usefulness for objective metric validation. The illuminance level of Content 3 was high enough for all cameras, and thus the quality differences among the captured images were low. Content 1 was easier to evaluate: its illuminance level was low, and thus the quality differences between the captured images were high. For Content 1, the quality of the images captured by the cameras equipped with a xenon flash was high, and the quality of the images captured by the cameras with low-power LEDs was low. In addition, Content 1 included only a single object, which further simplified the evaluation task.
Content 2 and Content 3 included numerous objects that could divert the observers' attention.

The performance of the global area was high for Content 1. As with the subjective test, the reason for this result could be the low illuminance level and/or the simple image composition. For Content 1, the global metric estimated only the reproduction of the person in the view; the peripheral energy did not affect the metric as much as it did for the other contents. With Content 2, for example, the view was complex, and there were many objects and textures that affected the global values but had no significant effect on subjective perception.

4. CONCLUSIONS

Based on the results, face area data are useful for measuring the sharpness of photographs if the face size is large enough. If the face area is too small, the performance can be low compared with the best local areas. It is concluded that face areas include information that no-reference or reduced-reference metrics can utilize. A metric could recognize the faces automatically or semi-automatically and use the data if the face area size is large enough. If faces cannot be found or their sizes are too small, the metric could employ traditional methods, such as edge analysis, for the calculations.

There are certain factors that should be taken into account when the reliability of the study is analyzed and further studies are proposed. For example, the persons in Content 1 and Content 2 wore eyeglasses. The frequency energy of eyeglasses can be a strong component of the sharpness metric, although it could be argued that eyeglass data belong to the face area data. The face area performance for Content 1 and Content 2 was high compared to Content 3; however, the face area sizes of Content 1 and Content 2 were also large compared to Content 3, so their face areas provided the metric with more information.
A comparison of the performances of the left and right face areas for Content 3 shows that both were low, although the right face area had eyeglasses. It is clear that the eyeglass factor needs to be considered in further studies. The different contents also had different illuminance levels. A useful constraint for further studies would be to restrict the measurements to an environment in which the only variable is the face area size. The validation measurements for the metric should be done under laboratory conditions, with the distance between the camera and the person as the only variable parameter.
In addition to the illuminance levels, the persons can change between the contents. It would be useful to measure how the face area data of different persons affect the results (e.g., how a person affects the scale of objective values, or what would be the most robust and person-independent statistical parameter for describing the face area data).

ACKNOWLEDGEMENTS

This work was partially financed by Nokia Mobile Solutions / Symbian Smartphones. The authors thank Fredrik Hollsten and Jussi Tarvainen for the test images.

REFERENCES

[1] Tong, Y., Konik, H., Cheikh, F. A., Tremeau, A., "Full Reference Image Quality Assessment Based on Saliency Map Analysis," J. Imaging Sci. Technol. 54(3), (2010).
[2] Menegaz, G., Zambon, R., "Towards a Semantic-Driven Metric for Image Quality," Proc. IEEE ICIP, vol. 3, 1176 (2005).
[3] Lang, L., Gu, W., "Study of Face Detection Algorithm for Real-time Face Detection System," Proc. ISECS, (2009).
[4] David, A., Panchanathan, S., "Wavelet-histogram method for face recognition," Journal of Electronic Imaging 9(2), (2000).
[5] Marszalec, E., Martinkauppi, B., Soriano, M., Pietikäinen, M., "Physics-based face database for color research," Journal of Electronic Imaging 9(1), (2000).
[6] Jin, E. W., Lin, S., Dharumalingam, D., "Face detection assisted auto exposure: supporting evidence from a psychophysical study," Proc. SPIE 7537, [75370K] (2010).
[7] Lajevardi, S. M., Hussain, Z. M., "Contourlet Structural Similarity for Facial Expression Recognition," Proc. IEEE ICASSP, (2010).
[8] Wang, Y.-K., Wang, C.-F., "Face Detection with Automatic White Balance for Digital Still Cameras," Proc. IEEE IIHMSP, (2008).
[9] Rahman, M. T., Kehtarnavaz, N., "Real-Time Face Priority Auto Focus for Digital and Cell-Phone Cameras," IEEE Transactions on Consumer Electronics 54(4), (2008).
[10] Delahunt, P. B., Zhang, X., Brainard, D. H., "Perceptual image quality: Effects of tone characteristics," Journal of Electronic Imaging 14(2), (2005).
[11] Ciuc, M., Capata, A., Florea, C., "Objective measures for quality assessment of automatic skin enhancement algorithms," Proc. SPIE 7529, [75290N] (2010).
More informationSachin Patel HOD I.T Department PCST, Indore, India. Parth Bhatt I.T Department, PCST, Indore, India. Ankit Shah CSE Department, KITE, Jaipur, India
Image Enhancement Using Various Interpolation Methods Parth Bhatt I.T Department, PCST, Indore, India Ankit Shah CSE Department, KITE, Jaipur, India Sachin Patel HOD I.T Department PCST, Indore, India
More information2015 - Photography 4-H Project Newsletter
2015 - Photography 4-H Project Newsletter Welcome to the 4-H Photography Project! This newsletter is your guide to the project. It contains rules, guidelines and suggestions. Read it carefully and keep
More informationUsers Manual Model #93711. English
Users Manual Model #93711 English Congratulations on your purchase of the Celestron NexImage 5 Solar System imaging camera. Your NexImage camera comes with the following: + NexImage 5 Camera + 1.25 nose
More informationA PHOTOGRAMMETRIC APPRAOCH FOR AUTOMATIC TRAFFIC ASSESSMENT USING CONVENTIONAL CCTV CAMERA
A PHOTOGRAMMETRIC APPRAOCH FOR AUTOMATIC TRAFFIC ASSESSMENT USING CONVENTIONAL CCTV CAMERA N. Zarrinpanjeh a, F. Dadrassjavan b, H. Fattahi c * a Islamic Azad University of Qazvin - nzarrin@qiau.ac.ir
More informationRequirement of Photograph for Indian Passport
Requirement of Photograph for Indian Passport Sample Photo Requirements The photograph should be in colour and of the size of 2 inch x 2 inch 51 mm x 51 mm. The photo print should be clear and with a continuous
More informationThe Visual Internet of Things System Based on Depth Camera
The Visual Internet of Things System Based on Depth Camera Xucong Zhang 1, Xiaoyun Wang and Yingmin Jia Abstract The Visual Internet of Things is an important part of information technology. It is proposed
More informationAn Active Head Tracking System for Distance Education and Videoconferencing Applications
An Active Head Tracking System for Distance Education and Videoconferencing Applications Sami Huttunen and Janne Heikkilä Machine Vision Group Infotech Oulu and Department of Electrical and Information
More informationCULTURAL HERITAGE USER GUIDE
Capture One CULTURAL HERITAGE USER GUIDE Capture One Cultural Heritage edition is a Raw work-flow application based on the Capture One DB solution and features exclusive new tools expressly designed to
More informationTechnical Tip Image Resolutions for Digital Cameras, Scanners, and Printing
518 442-3608 Technical Tip Image Resolutions for Digital Cameras, Scanners, and Printing One of the most confusion issues associated with digital cameras, scanners, and printing involves image resolution.
More informationTracking of Small Unmanned Aerial Vehicles
Tracking of Small Unmanned Aerial Vehicles Steven Krukowski Adrien Perkins Aeronautics and Astronautics Stanford University Stanford, CA 94305 Email: spk170@stanford.edu Aeronautics and Astronautics Stanford
More informationNational Performance Evaluation Facility for LADARs
National Performance Evaluation Facility for LADARs Kamel S. Saidi (presenter) Geraldine S. Cheok William C. Stone The National Institute of Standards and Technology Construction Metrology and Automation
More informationResolution for Color photography
Resolution for Color photography Paul M. Hubel a and Markus Bautsch b a Foveon, Inc., 282 San Tomas Expressway, Santa Clara, CA, USA 955; b Stiftung Warentest, Luetzowplatz -3, D-785 Berlin-Tiergarten,
More informationCircle Object Recognition Based on Monocular Vision for Home Security Robot
Journal of Applied Science and Engineering, Vol. 16, No. 3, pp. 261 268 (2013) DOI: 10.6180/jase.2013.16.3.05 Circle Object Recognition Based on Monocular Vision for Home Security Robot Shih-An Li, Ching-Chang
More informationROBUST VEHICLE TRACKING IN VIDEO IMAGES BEING TAKEN FROM A HELICOPTER
ROBUST VEHICLE TRACKING IN VIDEO IMAGES BEING TAKEN FROM A HELICOPTER Fatemeh Karimi Nejadasl, Ben G.H. Gorte, and Serge P. Hoogendoorn Institute of Earth Observation and Space System, Delft University
More informationMouse Control using a Web Camera based on Colour Detection
Mouse Control using a Web Camera based on Colour Detection Abhik Banerjee 1, Abhirup Ghosh 2, Koustuvmoni Bharadwaj 3, Hemanta Saikia 4 1, 2, 3, 4 Department of Electronics & Communication Engineering,
More informationVolume 2, Issue 9, September 2014 International Journal of Advance Research in Computer Science and Management Studies
Volume 2, Issue 9, September 2014 International Journal of Advance Research in Computer Science and Management Studies Research Article / Survey Paper / Case Study Available online at: www.ijarcsms.com
More informationDepartment of Mechanical Engineering, King s College London, University of London, Strand, London, WC2R 2LS, UK; e-mail: david.hann@kcl.ac.
INT. J. REMOTE SENSING, 2003, VOL. 24, NO. 9, 1949 1956 Technical note Classification of off-diagonal points in a co-occurrence matrix D. B. HANN, Department of Mechanical Engineering, King s College London,
More informationHANDS-FREE PC CONTROL CONTROLLING OF MOUSE CURSOR USING EYE MOVEMENT
International Journal of Scientific and Research Publications, Volume 2, Issue 4, April 2012 1 HANDS-FREE PC CONTROL CONTROLLING OF MOUSE CURSOR USING EYE MOVEMENT Akhil Gupta, Akash Rathi, Dr. Y. Radhika
More informationBildverarbeitung und Mustererkennung Image Processing and Pattern Recognition
Bildverarbeitung und Mustererkennung Image Processing and Pattern Recognition 1. Image Pre-Processing - Pixel Brightness Transformation - Geometric Transformation - Image Denoising 1 1. Image Pre-Processing
More informationThe Image Deblurring Problem
page 1 Chapter 1 The Image Deblurring Problem You cannot depend on your eyes when your imagination is out of focus. Mark Twain When we use a camera, we want the recorded image to be a faithful representation
More informationHow To Segmentate In Ctv Video
Time and Date OCR in CCTV Video Ginés García-Mateos 1, Andrés García-Meroño 1, Cristina Vicente-Chicote 3, Alberto Ruiz 1, and Pedro E. López-de-Teruel 2 1 Dept. de Informática y Sistemas 2 Dept. de Ingeniería
More informationOpen issues and research trends in Content-based Image Retrieval
Open issues and research trends in Content-based Image Retrieval Raimondo Schettini DISCo Universita di Milano Bicocca schettini@disco.unimib.it www.disco.unimib.it/schettini/ IEEE Signal Processing Society
More informationHow To Fix Out Of Focus And Blur Images With A Dynamic Template Matching Algorithm
IJSTE - International Journal of Science Technology & Engineering Volume 1 Issue 10 April 2015 ISSN (online): 2349-784X Image Estimation Algorithm for Out of Focus and Blur Images to Retrieve the Barcode
More informationSHOW MORE SELL MORE. Top tips for taking great photos
SHOW MORE SELL MORE Top tips for taking great photos TAKE BETTER PICTURES. SELL MORE STUFF. The more clear, crisp, quality pictures you show, the easier it is for buyers to find your listings and make
More informationVEHICLE TRACKING USING ACOUSTIC AND VIDEO SENSORS
VEHICLE TRACKING USING ACOUSTIC AND VIDEO SENSORS Aswin C Sankaranayanan, Qinfen Zheng, Rama Chellappa University of Maryland College Park, MD - 277 {aswch, qinfen, rama}@cfar.umd.edu Volkan Cevher, James
More informationDigital Camera Imaging Evaluation
Digital Camera Imaging Evaluation Presenter/Author J Mazzetta, Electro Optical Industries Coauthors Dennis Caudle, Electro Optical Industries Bob Wageneck, Electro Optical Industries Contact Information
More informationSony's "Beyond 4K" solutions bring museum visual exhibits into the next generation
MUSEUM Sony's "Beyond 4K" solutions bring museum visual exhibits into the next generation Overview With Beyond 4K solution, versatile high-resolution video meets the challenge of presenting works of truth
More informationExperiments with a Camera-Based Human-Computer Interface System
Experiments with a Camera-Based Human-Computer Interface System Robyn Cloud*, Margrit Betke**, and James Gips*** * Computer Science Department, Boston University, 111 Cummington Street, Boston, MA 02215,
More information3D Scanner using Line Laser. 1. Introduction. 2. Theory
. Introduction 3D Scanner using Line Laser Di Lu Electrical, Computer, and Systems Engineering Rensselaer Polytechnic Institute The goal of 3D reconstruction is to recover the 3D properties of a geometric
More informationThe Scientific Data Mining Process
Chapter 4 The Scientific Data Mining Process When I use a word, Humpty Dumpty said, in rather a scornful tone, it means just what I choose it to mean neither more nor less. Lewis Carroll [87, p. 214] In
More informationUsing visible SNR (vsnr) to compare image quality of pixel binning and digital resizing
Using visible SNR (vsnr) to compare image quality of pixel binning and digital resizing Joyce Farrell a, Mike Okincha b, Manu Parmar ac, and Brian Wandell ac a Dept. of Electrical Engineering, Stanford
More informationTips for better photos
A photograph can be a great tool for communicating the MDC message. Done well, photos grab your attention and convey lots of information in a brief glance. Now that there are more high-quality digital
More informationNo-Reference Metric for a Video Quality Control Loop
No-Reference Metric for a Video Quality Control Loop Jorge CAVIEDES Philips Research, 345 Scarborough Rd, Briarcliff Manor NY 10510, USA, jorge.caviedes@philips.com and Joel JUNG Laboratoires d Electronique
More informationDetection and Restoration of Vertical Non-linear Scratches in Digitized Film Sequences
Detection and Restoration of Vertical Non-linear Scratches in Digitized Film Sequences Byoung-moon You 1, Kyung-tack Jung 2, Sang-kook Kim 2, and Doo-sung Hwang 3 1 L&Y Vision Technologies, Inc., Daejeon,
More informationECE 533 Project Report Ashish Dhawan Aditi R. Ganesan
Handwritten Signature Verification ECE 533 Project Report by Ashish Dhawan Aditi R. Ganesan Contents 1. Abstract 3. 2. Introduction 4. 3. Approach 6. 4. Pre-processing 8. 5. Feature Extraction 9. 6. Verification
More informationDevelopment of a License Plate Number Recognition System Incorporating Low- Resolution Cameras
System Image Enforcement Cameras Vehicle Image Recognition Result Development of a License Plate Number Recognition System Incorporating Low- Resolution Cameras Road Side Equipment (Radio equipment) KENTA
More informationUnderstanding The Face Image Format Standards
Understanding The Face Image Format Standards Paul Griffin, Ph.D. Chief Technology Officer Identix April 2005 Topics The Face Image Standard The Record Format Frontal Face Images Face Images and Compression
More informationBLIND SOURCE SEPARATION OF SPEECH AND BACKGROUND MUSIC FOR IMPROVED SPEECH RECOGNITION
BLIND SOURCE SEPARATION OF SPEECH AND BACKGROUND MUSIC FOR IMPROVED SPEECH RECOGNITION P. Vanroose Katholieke Universiteit Leuven, div. ESAT/PSI Kasteelpark Arenberg 10, B 3001 Heverlee, Belgium Peter.Vanroose@esat.kuleuven.ac.be
More informationA Method of Caption Detection in News Video
3rd International Conference on Multimedia Technology(ICMT 3) A Method of Caption Detection in News Video He HUANG, Ping SHI Abstract. News video is one of the most important media for people to get information.
More informationJPEG compression of monochrome 2D-barcode images using DCT coefficient distributions
Edith Cowan University Research Online ECU Publications Pre. JPEG compression of monochrome D-barcode images using DCT coefficient distributions Keng Teong Tan Hong Kong Baptist University Douglas Chai
More informationPhotogrammetric Point Clouds
Photogrammetric Point Clouds Origins of digital point clouds: Basics have been around since the 1980s. Images had to be referenced to one another. The user had to specify either where the camera was in
More informationWhat is the Right Illumination Normalization for Face Recognition?
What is the Right Illumination Normalization for Face Recognition? Aishat Mahmoud Dan-ali Department of Computer Science and Engineering The American University in Cairo AUC Avenue, P.O. Box 74, New Cairo
More informationA Study on SURF Algorithm and Real-Time Tracking Objects Using Optical Flow
, pp.233-237 http://dx.doi.org/10.14257/astl.2014.51.53 A Study on SURF Algorithm and Real-Time Tracking Objects Using Optical Flow Giwoo Kim 1, Hye-Youn Lim 1 and Dae-Seong Kang 1, 1 Department of electronices
More informationDigital Photography Composition. Kent Messamore 9/8/2013
Digital Photography Composition Kent Messamore 9/8/2013 Photography Equipment versus Art Last week we focused on our Cameras Hopefully we have mastered the buttons and dials by now If not, it will come
More informationGet the benefits of mobile document capture with Motorola s Advanced Document Imaging
Tech Brief Get the benefits of mobile document capture with Motorola s Advanced Document Imaging Technology Executive summary While the world is migrating to a paperless society, there are still many types
More informationVideo Conferencing Display System Sizing and Location
Video Conferencing Display System Sizing and Location As video conferencing systems become more widely installed, there are often questions about what size monitors and how many are required. While fixed
More informationAdvantage of the CMOS Sensor
- TECHNICAL DOCUMENTATION Advantage of the CMOS Sensor Contents 1. Introduction...2 2. Comparing CCD & CMOS...2 3. CMOS Sensor Exmor...3 4. New Wide-D Technology Using High-speed Readout of the CMOS Sensor
More informationDigital Photography for Adults
Digital Photography for Adults Course Title: Digital Photography Age Group: Adults Tutor: Cost : AED 860 Zahra Jewanjee www.zjewanjee.com Tutor s Phone No. 055 9265710 Day / Date: Start time: End time:
More information