Potential of face area data for predicting sharpness of natural images


Mikko Nuutinen (a), Olli Orenius (b), Timo Säämänen (b), Pirkko Oittinen (a)

(a) Dept. of Media Technology, Aalto University School of Science and Technology, Espoo, Finland
(b) Dept. of Psychology, University of Helsinki, Helsinki, Finland

ABSTRACT

Face detection techniques are used for many different applications. For example, face detection is a basic component in many consumer still and video cameras. In this study, we compare the performance of face area data and freely selected local area data for predicting the sharpness of photographs. The local values were collected systematically from the images, and for the analyses we selected only the values with the highest performance. The objective sharpness metric was based on the statistics of the wavelet coefficients of the selected areas. We used three image contents whose subjective sharpness values had been measured. The image contents were captured by 13 cameras, and the images were evaluated by 25 subjects. The quality of the cameras ranged from low-end mobile phone cameras to low-end compact cameras. The image contents simulated typical photos that consumers take with their mobile phones. The face area sizes in the images were approximately 0.4, 1.0 or 4.0 % of the image area. Based on the results, the face area data proved to be valuable for measuring the sharpness of the photographs if the face size was large enough. When the face area size was 1.0 or 4.0 %, the performance of the measured sharpness values was equal to or better than that of the sharpness values measured from the best local areas. When the face area was too small (0.4 %), the performance was low compared with the best local areas.

Keywords: sharpness, image quality, face area, digital cameras

1. INTRODUCTION

The reproduction of human faces is an important quality factor for photographs [1,2].
Face detection and recognition is a wide and active area of research [3-5] and a sub-field of image analysis and pattern recognition research. Face detection methods can be used for numerous applications; for example, face detection has been used in many consumer still and video cameras. Face area data can be used for tuning exposure, focus or color [6-9]. Face area data can also be utilized for image enhancement processing in a camera or for post-processing on a computer [10,11].

In this study, the value of face area data for image quality calculation is evaluated. We compared the performance of face area data and the best local area data for predicting the subjective sharpness of photographs.

The attributes used to characterize camera image quality have often been measured using specific test targets under laboratory conditions. Differences between the lighting conditions of a laboratory and natural scenes can result in different image processing settings. The level of sharpening and noise removal, for example, can change when the illumination level changes. Different exposure times and color channel gains also have an effect on image quality. Problems related to lighting conditions can be avoided if quality is measured directly from natural photographs. However, difficulties arise from the unreliability of computational image quality methods when applied to images produced by digital cameras. For example, many no-reference sharpness metrics interpret graininess or noise as edges or some other image structure because they do not recognize image content or cannot find the appropriate local areas from which the metric should be calculated.

The reflectance and shape characteristics of human faces are fairly well known. The colors of faces fall within a specific area of the color space, and their global statistics can be defined analytically. The idea behind this study is the analogy between test target data and face area data in photographs.
Because of their known properties, test target images can be used to characterize the function of cameras. In addition, the sampling patches are easy to align with the known shapes of the targets. The same approach is also applicable to the face areas in photographs: we can think of the face area in a photograph as a natural test target because its characteristics are known.

In this study, we compare the objective sharpness values measured from face area data to values measured from the best local areas. The local values were collected systematically from the images, and for the analyses we selected only the values with the highest performance. The objective sharpness metric was based on the statistics of the wavelet coefficients of the selected area. The face areas were selected manually. The performance of face detection algorithms was disregarded because it was out of the scope of the study; we wanted to evaluate the information content of face data for sharpness measurements, not the performance of face detector algorithms.

The first research question is: How does the performance of face area data for predicting sharpness compare to the performance of freely selected local area data? The second research question is: How does face area size affect performance?

This paper is organized as follows. After this introduction covering the background and motivation of the work, Section 2 presents the test images, the subjective data and the sharpness metric. Section 3 presents the results, and Section 4 offers conclusions and suggestions for future work.

2. METHODS

2.1 Test images

The test materials used in this study were prepared from the three views (image contents) shown in Figure 1. The images were captured by 13 cameras (Table 1). One camera was a digital still camera (DSC), and the other 12 cameras were mobile device cameras. The pixel count of the cameras was between 3 and 12 Mpix. Eleven cameras were equipped with an LED or powerful xenon flash. Each view was captured several times, and only the best image was selected for each camera. The only allowable limiting factor of a shoot was related to the performance of the camera: the image had to be in focus, and both the white balance and brightness had to be accurately adjusted. The views were also captured by a Reference camera, whose images were used as a high-quality reference for observers in the subjective tests.

Table 1. Camera types used in the study.

Camera    Pixel count    Type      Flash type
1                        Mobile    -
2                        Mobile    -
3                        Mobile    Dual LED
4                        Mobile    LED
5                        Mobile    Dual LED
6                        Mobile    Dual LED
7                        Mobile    Dual LED
8                        Mobile    LED
9                        Mobile    LED
10                       Mobile    LED
11                       Mobile    Xenon
12                       Mobile    Xenon
13                       DSC       Xenon

Example images of the contents are shown in Figure 1. Content 1 simulates a bar or restaurant photo; its illuminance was very low (2 lux).
The exposure was based on the camera flash or LED, and for the analyses of Content 1 we used only the images captured by cameras with a flash or LED. Thus, the images of Camera 1 and Camera 2 were excluded from the analyses. Content 2 simulates a living room photo, and its illuminance was 100 lux. Content 3 simulates a tourist photo; it was captured outdoors, and its illuminance was high (15 klux). The relevant variable for this study was the face size in relation to the image size: for Content 1, the face size was 4 %; for Content 2, 1 %; and for Content 3, 0.4 %.

Figure 1. Image contents used in the study (Content 1, Content 2, Content 3)

2.2 Subjective data

In this study, we utilized sharpness data from a large-scale subjective test set. For the subjective test, the images were scaled to a size of 1600 x 1200 pixels. Black borders were added to the images to match the image file resolution to the display resolution (1920 x 1200). The test setup included two Eizo ColorEdge CG241W displays and one small display. The test image was shown on one display, and the reference image was shown on the other. The input of the observer was shown on the small display. The observers evaluated the overall quality value and the attribute values of an image; in this study, we utilized only the sharpness attribute data. The images from each content were shown one at a time. The order of images and contents was randomized for each observer. The viewing distance was about 80 cm, and the ambient illuminance was 20 lux. University students were used as observers (n=25). They were all naïve regarding image quality.

Figure 2 shows the subjective sharpness values for the different contents sorted in ascending order. The 95 % confidence intervals (vertical lines) and a sharpness value of 50 (horizontal lines) on a scale of 0-100 were added to the figures. The content-specific scales were easier to evaluate with the aid of the added horizontal lines. Based on Figure 2, there are clear differences in the scales among the contents. Content 1 had low illuminance, and its sharpness scale was wide and increased smoothly. The high-end cameras with powerful xenon flashes had high sharpness values, and the low-end cameras with low-power LED flashes had low sharpness values. The distribution shapes for Content 2 and Content 3 differed. Content 2 had been captured under low indoor illuminance, and Content 3 had been captured under high outdoor illuminance.
For Content 2, there is a group of unsharp images without statistically significant differences between their values, and a group of two or three sharper images. For Content 3, there are a few groups of sharp images without statistically significant differences between their values, and a single unsharp image.

Figure 2. Subjective sharpness values on the vertical axis (scale 0-100) with 95 % confidence intervals, sorted in ascending order on the horizontal axis, for Content 1 with 11 cameras and for Content 2 and Content 3 with 13 cameras

2.3 Sharpness metric

The objective sharpness metric, S, is calculated by Equations (1) and (2). Equation (1) calculates the standard deviation σ_k of the first-scale wavelet coefficient energy for the sub-band of direction k, where w_ijk is the wavelet coefficient energy, μ_k is the mean wavelet coefficient energy, and k is the vertical, horizontal or diagonal (v, h, d) direction. The standard deviation was calculated within a segment of size M x N pixels; a segment was a cropped face area or a freely selected pixel block. Wavelet decomposition was performed using the Matlab Wavelet Toolbox and its Haar wavelets. Equation (2) combines the standard deviation values of the different directions into the overall sharpness value:

\sigma_k = \sqrt{ \frac{1}{MN-1} \sum_{i=1}^{M} \sum_{j=1}^{N} \left( w_{ijk} - \mu_k \right)^2 }    (1)

S = \frac{\sigma_v + \sigma_h + \sigma_d}{3}    (2)

When we measured the face area performance, the face areas were cropped manually from the images. In a pretest, we tested a face detection algorithm for face area detection and cropping. The performance of the algorithm depended on the image quality and image content: the algorithm found the face areas but also made false-positive detections. Because the goal of this study was to evaluate the face area data for predicting image sharpness, manual cropping was used so that the face area blocks were equal regardless of image quality or image content. Figure 3 shows the cropped face area images for Content 3.

Figure 3. Face areas cropped from the images of Content 3 for Cameras 1-13

Figure 4 shows the first-scale approximation coefficients and the wavelet coefficients in the vertical, horizontal and diagonal directions for the face area of an image from Content 3. The wavelet coefficients have been scaled to [-50, 50] for visualization purposes. It can be seen from Figure 4 that the mouth, eyebrow and nose areas have high coefficient values in the vertical and horizontal directions. It can be expected that their energy contributes to the value of the sharpness metric.
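The metric defined by Equations (1) and (2) can be sketched in plain Python (the paper used the Matlab Wavelet Toolbox). This is a minimal sketch, not the authors' implementation: the function names are illustrative, the coefficient "energy" w_ijk is taken here as the raw first-scale Haar detail coefficient, and the sample standard deviation (MN-1 denominator) is assumed.

```python
import math

def haar_first_scale(img):
    """One level of a 2-D Haar wavelet transform.

    img: 2-D list of numbers with even height and width.
    Returns (approx, horiz, vert, diag) coefficient arrays,
    each half the input size in both dimensions.
    """
    rows, cols = len(img), len(img[0])
    a, h, v, d = [], [], [], []
    for i in range(0, rows, 2):
        ra, rh, rv, rd = [], [], [], []
        for j in range(0, cols, 2):
            p00, p01 = img[i][j], img[i][j + 1]
            p10, p11 = img[i + 1][j], img[i + 1][j + 1]
            ra.append((p00 + p01 + p10 + p11) / 2.0)  # approximation
            rh.append((p00 + p01 - p10 - p11) / 2.0)  # horizontal detail
            rv.append((p00 - p01 + p10 - p11) / 2.0)  # vertical detail
            rd.append((p00 - p01 - p10 + p11) / 2.0)  # diagonal detail
        a.append(ra); h.append(rh); v.append(rv); d.append(rd)
    return a, h, v, d

def subband_std(band):
    """Eq. (1): standard deviation of the coefficients in one sub-band."""
    flat = [w for row in band for w in row]
    mu = sum(flat) / len(flat)
    return math.sqrt(sum((w - mu) ** 2 for w in flat) / (len(flat) - 1))

def sharpness(img):
    """Eq. (2): overall sharpness S, the mean of the three detail-band deviations."""
    _, h, v, d = haar_first_scale(img)
    return (subband_std(v) + subband_std(h) + subband_std(d)) / 3.0
```

A flat segment yields S = 0, while a segment containing an edge yields S > 0, which matches the intuition that sharp structure raises the spread of the detail coefficients.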

Figure 4. Wavelet decomposition for a Content 3 image

The local area blocks that we compared to the face areas were selected by searching for the highest correlation between the values of Equation (2) and the subjective data. Equation (2) was applied to corresponding square block areas of the images. The block size was constant (125 x 125 pixels), and the image size was 1600 x 1200 pixels in all cases. The candidate blocks included structure energy such as edges and textures; the candidate blocks for Content 1 are shown in Figure 5 as an example. Finally, the three best local areas (blocks) for each content were selected for the subsequent analyses. Figure 6 shows the three selected blocks and the face areas for the different contents. The selected blocks include both coarse textures and strong edges.

Figure 5. Candidate blocks for Content 1

Figure 6. Local block and face areas used for the sharpness calculations (Content 1, Content 2, Content 3)

3. RESULTS

Table 2 shows the Pearson linear correlation coefficients of the face area and the three best local areas with the subjective sharpness values. The Pearson linear correlation coefficients are also shown for the global areas; the global area metric was calculated using all the pixel values of an image. Based on the results, the face area performance for Content 1 and Content 2 was equal to or higher than the performance of the local or global areas. The face area performance for Content 3 was low compared to the best local area. The performance of the global area was high for Content 1, moderate for Content 2 and low for Content 3.

Table 2. Pearson linear correlation coefficients for the face areas, the three best local areas and the global area

                     Face area    Local area 1    Local area 2    Local area 3    Global
Content 1
Content 2
Content 3 (left)
Content 3 (right)

The performance of the face area for Contents 1 and 2 was notably higher than for Content 3. The reason for this result could be the larger face area: larger face areas contain more information related to sharpness than smaller face areas. The size of the face area was 4.0 % for Content 1, 1.0 % for Content 2 and only 0.4 % for Content 3. However, based on the data shown in Figure 2, Content 3 was difficult for the subjective observers, lowering its usefulness for objective metric validation. The illuminance level of Content 3 was high enough for all cameras, and thus the quality differences among the captured images were low. Content 1 was easier to evaluate: its illuminance level was low, and thus the quality differences between the captured images were high. For Content 1, the quality of the images captured by the cameras equipped with a xenon flash was high, and the quality of the images captured by the cameras with low-power LEDs was low. In addition, Content 1 included only a single object, which further simplified the evaluation task.
Content 2 and Content 3 included numerous objects that could divert the observers' attention. The performance of the global area was high for Content 1. As with the subjective test, the reason for this result could be the low illuminance level and/or the simple image composition. For Content 1, the global metric estimated mainly the reproduction of the person in the view; the peripheral energy did not affect the metric as much as it did for the other contents. For Content 2, for example, the view was complex, and many objects and textures affected the global values but had no significant effect on subjective perception.

4. CONCLUSIONS

Based on the results, face area data are useful for measuring the sharpness of photographs if the face size is large enough. If the face area is too small, the performance can be low compared with the best local areas. It is concluded that face areas include information that no-reference or reduced-reference metrics can utilize. A metric could recognize the faces automatically or semi-automatically and use the data if the face area size is large enough. If faces cannot be found or their sizes are too small, the metric could employ traditional methods, such as edge analysis, for the calculations.

Certain factors should be taken into account when the reliability of the study is analyzed and further studies are proposed. For example, the persons in Content 1 and Content 2 wore eyeglasses. The frequency energy of the eyeglasses can be a strong component of the sharpness metric, although it could be argued that eyeglass data belong to the face area data. The face area performance of Content 1 and Content 2 was high compared to Content 3; however, the face area sizes of Content 1 and Content 2 were also large compared to Content 3, so their face areas provided the metric with more information.
A comparison of the performances of the left and right face areas for Content 3 shows that both were low, although the right face area had eyeglasses. It is clear that the eyeglass factor needs to be considered in further studies. The different contents also had different illuminance levels. A useful constraint for further studies would be to restrict the measurements to an environment in which the only variable is the face area size. The validation measurements for the metric should be done under laboratory conditions, with the distance between the camera and the person as the only variable parameter.

In addition to the illuminance levels varying between contents, the persons can change between the contents. It would be useful to measure how the face area data of different persons affect the results (e.g., how a person affects the scale of the objective values, or what would be the most robust and person-independent statistical parameter for describing the face area data).

ACKNOWLEDGEMENTS

This work was partially financed by Nokia Mobile Solutions / Symbian Smartphones. The authors thank Fredrik Hollsten and Jussi Tarvainen for the test images.

REFERENCES

[1] Tong, Y., Konik, H., Cheikh, F. A., Tremeau, A., "Full Reference Image Quality Assessment Based on Saliency Map Analysis," J. Imaging Sci. Technol. 54(3) (2010).
[2] Menegaz, G., Zambon, R., "Towards a Semantic-Driven Metric for Image Quality," Proc. IEEE ICIP, vol. 3, 1176 (2005).
[3] Lang, L., Gu, W., "Study of Face Detection Algorithm for Real-time Face Detection System," Proc. ISECS (2009).
[4] David, A., Panchanathan, S., "Wavelet-histogram method for face recognition," Journal of Electronic Imaging 9(2) (2000).
[5] Marszalec, E., Martinkauppi, B., Soriano, M., Pietikäinen, M., "Physics-based face database for color research," Journal of Electronic Imaging 9(1) (2000).
[6] Jin, E. W., Lin, S., Dharumalingam, D., "Face detection assisted auto exposure: supporting evidence from a psychophysical study," Proc. SPIE 7537, 75370K (2010).
[7] Lajevardi, S. M., Hussain, Z. M., "Contourlet Structural Similarity for Facial Expression Recognition," Proc. IEEE ICASSP (2010).
[8] Wang, Y.-K., Wang, C.-F., "Face Detection with Automatic White Balance for Digital Still Cameras," Proc. IEEE IIHMSP (2008).
[9] Rahman, M. T., Kehtarnavaz, N., "Real-Time Face Priority Auto Focus for Digital and Cell-Phone Cameras," IEEE Transactions on Consumer Electronics 54(4) (2008).
[10] Delahunt, P. B., Zhang, X., Brainard, D. H., "Perceptual image quality: Effects of tone characteristics," Journal of Electronic Imaging 14(2) (2005).
[11] Ciuc, M., Capata, A., Florea, C., "Objective measures for quality assessment of automatic skin enhancement algorithms," Proc. SPIE 7529, 75290N (2010).


Chromatic Improvement of Backgrounds Images Captured with Environmental Pollution Using Retinex Model Chromatic Improvement of Backgrounds Images Captured with Environmental Pollution Using Retinex Model Mario Dehesa, Alberto J. Rosales, Francisco J. Gallegos, Samuel Souverville, and Isabel V. Hernández

More information

The KTH-INDECS Database

The KTH-INDECS Database The KTH-INDECS Database Andrzej Pronobis, Barbara Caputo Computational Vision and Active Perception Laboratory (CVAP) Department of Numerical Analysis and Computer Science (NADA) KTH, SE-1 44 Stockholm,

More information

Navigation Aid And Label Reading With Voice Communication For Visually Impaired People

Navigation Aid And Label Reading With Voice Communication For Visually Impaired People Navigation Aid And Label Reading With Voice Communication For Visually Impaired People A.Manikandan 1, R.Madhuranthi 2 1 M.Kumarasamy College of Engineering, mani85a@gmail.com,karur,india 2 M.Kumarasamy

More information

ARTICLE Night lessons - Lighting for network cameras

ARTICLE Night lessons - Lighting for network cameras ARTICLE Night lessons - Lighting for network cameras A summary report from Axis and Raytec regional test nights Winter 2011 2012 - England, Scotland, Denmark Table of contents 1. Introduction 3 2. Lesson

More information

PIXEL-LEVEL IMAGE FUSION USING BROVEY TRANSFORME AND WAVELET TRANSFORM

PIXEL-LEVEL IMAGE FUSION USING BROVEY TRANSFORME AND WAVELET TRANSFORM PIXEL-LEVEL IMAGE FUSION USING BROVEY TRANSFORME AND WAVELET TRANSFORM Rohan Ashok Mandhare 1, Pragati Upadhyay 2,Sudha Gupta 3 ME Student, K.J.SOMIYA College of Engineering, Vidyavihar, Mumbai, Maharashtra,

More information

http://dx.doi.org/10.1117/12.906346

http://dx.doi.org/10.1117/12.906346 Stephanie Fullerton ; Keith Bennett ; Eiji Toda and Teruo Takahashi "Camera simulation engine enables efficient system optimization for super-resolution imaging", Proc. SPIE 8228, Single Molecule Spectroscopy

More information

Removal of Noise from MRI using Spectral Subtraction

Removal of Noise from MRI using Spectral Subtraction International Journal of Electronic and Electrical Engineering. ISSN 0974-2174, Volume 7, Number 3 (2014), pp. 293-298 International Research Publication House http://www.irphouse.com Removal of Noise

More information

Clustering & Visualization

Clustering & Visualization Chapter 5 Clustering & Visualization Clustering in high-dimensional databases is an important problem and there are a number of different clustering paradigms which are applicable to high-dimensional data.

More information

SHOW MORE SELL MORE. Top tips for taking great photos

SHOW MORE SELL MORE. Top tips for taking great photos SHOW MORE SELL MORE Top tips for taking great photos TAKE BETTER PICTURES. SELL MORE STUFF. The more clear, crisp, quality pictures you show, the easier it is for buyers to find your listings and make

More information

3 hours One paper 70 Marks. Areas of Learning Theory

3 hours One paper 70 Marks. Areas of Learning Theory GRAPHIC DESIGN CODE NO. 071 Class XII DESIGN OF THE QUESTION PAPER 3 hours One paper 70 Marks Section-wise Weightage of the Theory Areas of Learning Theory Section A (Reader) Section B Application of Design

More information

Admin stuff. 4 Image Pyramids. Spatial Domain. Projects. Fourier domain 2/26/2008. Fourier as a change of basis

Admin stuff. 4 Image Pyramids. Spatial Domain. Projects. Fourier domain 2/26/2008. Fourier as a change of basis Admin stuff 4 Image Pyramids Change of office hours on Wed 4 th April Mon 3 st March 9.3.3pm (right after class) Change of time/date t of last class Currently Mon 5 th May What about Thursday 8 th May?

More information

Open Access A Facial Expression Recognition Algorithm Based on Local Binary Pattern and Empirical Mode Decomposition

Open Access A Facial Expression Recognition Algorithm Based on Local Binary Pattern and Empirical Mode Decomposition Send Orders for Reprints to reprints@benthamscience.ae The Open Electrical & Electronic Engineering Journal, 2014, 8, 599-604 599 Open Access A Facial Expression Recognition Algorithm Based on Local Binary

More information

The Image Deblurring Problem

The Image Deblurring Problem page 1 Chapter 1 The Image Deblurring Problem You cannot depend on your eyes when your imagination is out of focus. Mark Twain When we use a camera, we want the recorded image to be a faithful representation

More information

Using Microsoft Picture Manager

Using Microsoft Picture Manager Using Microsoft Picture Manager Storing Your Photos It is suggested that a county store all photos for use in the County CMS program in the same folder for easy access. For the County CMS Web Project it

More information

Timeframe: 8 class periods

Timeframe: 8 class periods 1 st 6 weeks Weeks: 1-2 8 class periods Unit Name: Introduction to Photography 1) Introduction to the history of photography, early photographers, cameras, and photographic processes, including camera

More information

Application of Face Recognition to Person Matching in Trains

Application of Face Recognition to Person Matching in Trains Application of Face Recognition to Person Matching in Trains May 2008 Objective Matching of person Context : in trains Using face recognition and face detection algorithms With a video-surveillance camera

More information

A Novel Method to Improve Resolution of Satellite Images Using DWT and Interpolation

A Novel Method to Improve Resolution of Satellite Images Using DWT and Interpolation A Novel Method to Improve Resolution of Satellite Images Using DWT and Interpolation S.VENKATA RAMANA ¹, S. NARAYANA REDDY ² M.Tech student, Department of ECE, SVU college of Engineering, Tirupati, 517502,

More information

Sub-pixel mapping: A comparison of techniques

Sub-pixel mapping: A comparison of techniques Sub-pixel mapping: A comparison of techniques Koen C. Mertens, Lieven P.C. Verbeke & Robert R. De Wulf Laboratory of Forest Management and Spatial Information Techniques, Ghent University, 9000 Gent, Belgium

More information

Using visible SNR (vsnr) to compare image quality of pixel binning and digital resizing

Using visible SNR (vsnr) to compare image quality of pixel binning and digital resizing Using visible SNR (vsnr) to compare image quality of pixel binning and digital resizing Joyce Farrell a, Mike Okincha b, Manu Parmar ac, and Brian Wandell ac a Dept. of Electrical Engineering, Stanford

More information

A PHOTOGRAMMETRIC APPRAOCH FOR AUTOMATIC TRAFFIC ASSESSMENT USING CONVENTIONAL CCTV CAMERA

A PHOTOGRAMMETRIC APPRAOCH FOR AUTOMATIC TRAFFIC ASSESSMENT USING CONVENTIONAL CCTV CAMERA A PHOTOGRAMMETRIC APPRAOCH FOR AUTOMATIC TRAFFIC ASSESSMENT USING CONVENTIONAL CCTV CAMERA N. Zarrinpanjeh a, F. Dadrassjavan b, H. Fattahi c * a Islamic Azad University of Qazvin - nzarrin@qiau.ac.ir

More information

Detection and Restoration of Vertical Non-linear Scratches in Digitized Film Sequences

Detection and Restoration of Vertical Non-linear Scratches in Digitized Film Sequences Detection and Restoration of Vertical Non-linear Scratches in Digitized Film Sequences Byoung-moon You 1, Kyung-tack Jung 2, Sang-kook Kim 2, and Doo-sung Hwang 3 1 L&Y Vision Technologies, Inc., Daejeon,

More information

Requirement of Photograph for Indian Passport

Requirement of Photograph for Indian Passport Requirement of Photograph for Indian Passport Sample Photo Requirements The photograph should be in colour and of the size of 2 inch x 2 inch 51 mm x 51 mm. The photo print should be clear and with a continuous

More information

Video Conferencing Display System Sizing and Location

Video Conferencing Display System Sizing and Location Video Conferencing Display System Sizing and Location As video conferencing systems become more widely installed, there are often questions about what size monitors and how many are required. While fixed

More information

SOURCE SCANNER IDENTIFICATION FOR SCANNED DOCUMENTS. Nitin Khanna and Edward J. Delp

SOURCE SCANNER IDENTIFICATION FOR SCANNED DOCUMENTS. Nitin Khanna and Edward J. Delp SOURCE SCANNER IDENTIFICATION FOR SCANNED DOCUMENTS Nitin Khanna and Edward J. Delp Video and Image Processing Laboratory School of Electrical and Computer Engineering Purdue University West Lafayette,

More information

ISO meets Aperture and Shutter Speed How to sort out the exposure trifecta and capture the exposure you want.

ISO meets Aperture and Shutter Speed How to sort out the exposure trifecta and capture the exposure you want. Before and After the Click Training ISO meets Aperture and Shutter Speed How to sort out the exposure trifecta and capture the exposure you want. Warning: This tutorial requires decision making. Created

More information

Image Processing with. ImageJ. Biology. Imaging

Image Processing with. ImageJ. Biology. Imaging Image Processing with ImageJ 1. Spatial filters Outlines background correction image denoising edges detection 2. Fourier domain filtering correction of periodic artefacts 3. Binary operations masks morphological

More information

Tracking of Small Unmanned Aerial Vehicles

Tracking of Small Unmanned Aerial Vehicles Tracking of Small Unmanned Aerial Vehicles Steven Krukowski Adrien Perkins Aeronautics and Astronautics Stanford University Stanford, CA 94305 Email: spk170@stanford.edu Aeronautics and Astronautics Stanford

More information

Users Manual Model #93711. English

Users Manual Model #93711. English Users Manual Model #93711 English Congratulations on your purchase of the Celestron NexImage 5 Solar System imaging camera. Your NexImage camera comes with the following: + NexImage 5 Camera + 1.25 nose

More information

Sachin Patel HOD I.T Department PCST, Indore, India. Parth Bhatt I.T Department, PCST, Indore, India. Ankit Shah CSE Department, KITE, Jaipur, India

Sachin Patel HOD I.T Department PCST, Indore, India. Parth Bhatt I.T Department, PCST, Indore, India. Ankit Shah CSE Department, KITE, Jaipur, India Image Enhancement Using Various Interpolation Methods Parth Bhatt I.T Department, PCST, Indore, India Ankit Shah CSE Department, KITE, Jaipur, India Sachin Patel HOD I.T Department PCST, Indore, India

More information

CULTURAL HERITAGE USER GUIDE

CULTURAL HERITAGE USER GUIDE Capture One CULTURAL HERITAGE USER GUIDE Capture One Cultural Heritage edition is a Raw work-flow application based on the Capture One DB solution and features exclusive new tools expressly designed to

More information

Measuring Line Edge Roughness: Fluctuations in Uncertainty

Measuring Line Edge Roughness: Fluctuations in Uncertainty Tutor6.doc: Version 5/6/08 T h e L i t h o g r a p h y E x p e r t (August 008) Measuring Line Edge Roughness: Fluctuations in Uncertainty Line edge roughness () is the deviation of a feature edge (as

More information

Robust Panoramic Image Stitching

Robust Panoramic Image Stitching Robust Panoramic Image Stitching CS231A Final Report Harrison Chau Department of Aeronautics and Astronautics Stanford University Stanford, CA, USA hwchau@stanford.edu Robert Karol Department of Aeronautics

More information

Mean-Shift Tracking with Random Sampling

Mean-Shift Tracking with Random Sampling 1 Mean-Shift Tracking with Random Sampling Alex Po Leung, Shaogang Gong Department of Computer Science Queen Mary, University of London, London, E1 4NS Abstract In this work, boosting the efficiency of

More information

The Visual Internet of Things System Based on Depth Camera

The Visual Internet of Things System Based on Depth Camera The Visual Internet of Things System Based on Depth Camera Xucong Zhang 1, Xiaoyun Wang and Yingmin Jia Abstract The Visual Internet of Things is an important part of information technology. It is proposed

More information

Get the benefits of mobile document capture with Motorola s Advanced Document Imaging

Get the benefits of mobile document capture with Motorola s Advanced Document Imaging Tech Brief Get the benefits of mobile document capture with Motorola s Advanced Document Imaging Technology Executive summary While the world is migrating to a paperless society, there are still many types

More information

Movie 3. Camera Raw sharpening

Movie 3. Camera Raw sharpening Movie 3 Camera Raw sharpening 1 Amount slider 1 The Amount slider is like a volume control. As you increase the Amount the overall sharpening is increased. A default setting of 25% is applied to all raw

More information

IMPROVEMENT OF DIGITAL IMAGE RESOLUTION BY OVERSAMPLING

IMPROVEMENT OF DIGITAL IMAGE RESOLUTION BY OVERSAMPLING ABSTRACT: IMPROVEMENT OF DIGITAL IMAGE RESOLUTION BY OVERSAMPLING Hakan Wiman Department of Photogrammetry, Royal Institute of Technology S - 100 44 Stockholm, Sweden (e-mail hakanw@fmi.kth.se) ISPRS Commission

More information

Adobe Marketing Cloud Sharpening images in Scene7 Publishing System and on Image Server

Adobe Marketing Cloud Sharpening images in Scene7 Publishing System and on Image Server Adobe Marketing Cloud Sharpening images in Scene7 Publishing System and on Image Server Contents Contact and Legal Information...3 About image sharpening...4 Adding an image preset to save frequently used

More information

Technical Considerations Detecting Transparent Materials in Particle Analysis. Michael Horgan

Technical Considerations Detecting Transparent Materials in Particle Analysis. Michael Horgan Technical Considerations Detecting Transparent Materials in Particle Analysis Michael Horgan Who We Are Fluid Imaging Technology Manufacturers of the FlowCam series of particle analyzers FlowCam HQ location

More information

Passport photo guidelines

Passport photo guidelines Passport photo guidelines Finland introduced new requirements for passport photos on 21 August 2006. The new requirements are based on the standards set by the International Civil Aviation Organization,

More information

RESEARCH ON SPOKEN LANGUAGE PROCESSING Progress Report No. 29 (2008) Indiana University

RESEARCH ON SPOKEN LANGUAGE PROCESSING Progress Report No. 29 (2008) Indiana University RESEARCH ON SPOKEN LANGUAGE PROCESSING Progress Report No. 29 (2008) Indiana University A Software-Based System for Synchronizing and Preprocessing Eye Movement Data in Preparation for Analysis 1 Mohammad

More information

A System for Capturing High Resolution Images

A System for Capturing High Resolution Images A System for Capturing High Resolution Images G.Voyatzis, G.Angelopoulos, A.Bors and I.Pitas Department of Informatics University of Thessaloniki BOX 451, 54006 Thessaloniki GREECE e-mail: pitas@zeus.csd.auth.gr

More information

Technical Tip Image Resolutions for Digital Cameras, Scanners, and Printing

Technical Tip Image Resolutions for Digital Cameras, Scanners, and Printing 518 442-3608 Technical Tip Image Resolutions for Digital Cameras, Scanners, and Printing One of the most confusion issues associated with digital cameras, scanners, and printing involves image resolution.

More information

Vision based Vehicle Tracking using a high angle camera

Vision based Vehicle Tracking using a high angle camera Vision based Vehicle Tracking using a high angle camera Raúl Ignacio Ramos García Dule Shu gramos@clemson.edu dshu@clemson.edu Abstract A vehicle tracking and grouping algorithm is presented in this work

More information

DATA RATE AND DYNAMIC RANGE COMPRESSION OF MEDICAL IMAGES: WHICH ONE GOES FIRST? Shahrukh Athar, Hojatollah Yeganeh and Zhou Wang

DATA RATE AND DYNAMIC RANGE COMPRESSION OF MEDICAL IMAGES: WHICH ONE GOES FIRST? Shahrukh Athar, Hojatollah Yeganeh and Zhou Wang DATA RATE AND DYNAMIC RANGE COMPRESSION OF MEDICAL IMAGES: WHICH ONE GOES FIRST? Shahrukh Athar, Hojatollah Yeganeh and Zhou Wang Dept. of Electrical & Computer Engineering, University of Waterloo, Waterloo,

More information

Mouse Control using a Web Camera based on Colour Detection

Mouse Control using a Web Camera based on Colour Detection Mouse Control using a Web Camera based on Colour Detection Abhik Banerjee 1, Abhirup Ghosh 2, Koustuvmoni Bharadwaj 3, Hemanta Saikia 4 1, 2, 3, 4 Department of Electronics & Communication Engineering,

More information

Department of Mechanical Engineering, King s College London, University of London, Strand, London, WC2R 2LS, UK; e-mail: david.hann@kcl.ac.

Department of Mechanical Engineering, King s College London, University of London, Strand, London, WC2R 2LS, UK; e-mail: david.hann@kcl.ac. INT. J. REMOTE SENSING, 2003, VOL. 24, NO. 9, 1949 1956 Technical note Classification of off-diagonal points in a co-occurrence matrix D. B. HANN, Department of Mechanical Engineering, King s College London,

More information

Volume 2, Issue 9, September 2014 International Journal of Advance Research in Computer Science and Management Studies

Volume 2, Issue 9, September 2014 International Journal of Advance Research in Computer Science and Management Studies Volume 2, Issue 9, September 2014 International Journal of Advance Research in Computer Science and Management Studies Research Article / Survey Paper / Case Study Available online at: www.ijarcsms.com

More information

Bildverarbeitung und Mustererkennung Image Processing and Pattern Recognition

Bildverarbeitung und Mustererkennung Image Processing and Pattern Recognition Bildverarbeitung und Mustererkennung Image Processing and Pattern Recognition 1. Image Pre-Processing - Pixel Brightness Transformation - Geometric Transformation - Image Denoising 1 1. Image Pre-Processing

More information

HANDS-FREE PC CONTROL CONTROLLING OF MOUSE CURSOR USING EYE MOVEMENT

HANDS-FREE PC CONTROL CONTROLLING OF MOUSE CURSOR USING EYE MOVEMENT International Journal of Scientific and Research Publications, Volume 2, Issue 4, April 2012 1 HANDS-FREE PC CONTROL CONTROLLING OF MOUSE CURSOR USING EYE MOVEMENT Akhil Gupta, Akash Rathi, Dr. Y. Radhika

More information

Digital Camera Imaging Evaluation

Digital Camera Imaging Evaluation Digital Camera Imaging Evaluation Presenter/Author J Mazzetta, Electro Optical Industries Coauthors Dennis Caudle, Electro Optical Industries Bob Wageneck, Electro Optical Industries Contact Information

More information

VEHICLE TRACKING USING ACOUSTIC AND VIDEO SENSORS

VEHICLE TRACKING USING ACOUSTIC AND VIDEO SENSORS VEHICLE TRACKING USING ACOUSTIC AND VIDEO SENSORS Aswin C Sankaranayanan, Qinfen Zheng, Rama Chellappa University of Maryland College Park, MD - 277 {aswch, qinfen, rama}@cfar.umd.edu Volkan Cevher, James

More information

Practical Tour of Visual tracking. David Fleet and Allan Jepson January, 2006

Practical Tour of Visual tracking. David Fleet and Allan Jepson January, 2006 Practical Tour of Visual tracking David Fleet and Allan Jepson January, 2006 Designing a Visual Tracker: What is the state? pose and motion (position, velocity, acceleration, ) shape (size, deformation,

More information

Tips for better photos

Tips for better photos A photograph can be a great tool for communicating the MDC message. Done well, photos grab your attention and convey lots of information in a brief glance. Now that there are more high-quality digital

More information

CCTV Labs Test Chart

CCTV Labs Test Chart CCTV Labs Test Chart v.3.x Instructions for setup and usage Produced by CCTV Labs Pty.Ltd. 2006 ABN 26 088 387 179 66 Yaringa Road, Castle Hill, NSW 2154, AUSTRALIA Please handle Your Test Chart with care!

More information

Laser Gesture Recognition for Human Machine Interaction

Laser Gesture Recognition for Human Machine Interaction International Journal of Computer Sciences and Engineering Open Access Research Paper Volume-04, Issue-04 E-ISSN: 2347-2693 Laser Gesture Recognition for Human Machine Interaction Umang Keniya 1*, Sarthak

More information

BLIND SOURCE SEPARATION OF SPEECH AND BACKGROUND MUSIC FOR IMPROVED SPEECH RECOGNITION

BLIND SOURCE SEPARATION OF SPEECH AND BACKGROUND MUSIC FOR IMPROVED SPEECH RECOGNITION BLIND SOURCE SEPARATION OF SPEECH AND BACKGROUND MUSIC FOR IMPROVED SPEECH RECOGNITION P. Vanroose Katholieke Universiteit Leuven, div. ESAT/PSI Kasteelpark Arenberg 10, B 3001 Heverlee, Belgium Peter.Vanroose@esat.kuleuven.ac.be

More information

Open issues and research trends in Content-based Image Retrieval

Open issues and research trends in Content-based Image Retrieval Open issues and research trends in Content-based Image Retrieval Raimondo Schettini DISCo Universita di Milano Bicocca schettini@disco.unimib.it www.disco.unimib.it/schettini/ IEEE Signal Processing Society

More information

GVC3200 Video Conferencing System

GVC3200 Video Conferencing System GVC3200 Video Conferencing System Prepare below equipment before setup: Display Device (e.g., HDTV), with power adapters GVC3200, with power adapters Ethernet cable for GVC3200 to connect to uplink network

More information