DESIGN & DEVELOPMENT OF AUTONOMOUS SYSTEM TO BUILD 3D MODEL FOR UNDERWATER OBJECTS USING STEREO VISION TECHNIQUE
N. Satish Kumar 1, B L Mukundappa 2, Ramakanth Kumar P 1
1 Dept. of Information Science, R V College of Engineering, Bangalore, India
2 Associate Prof., Dept. of Computer Science, University College of Science, Tumkur, India

ABSTRACT
The objective of this paper is the design and development of a stereo vision system that builds 3D models of underwater objects. The developed algorithm first enhances the underwater image quality and then constructs the 3D model. Feature points are extracted from the enhanced images and feature-based matching is performed between each pair of stereo images. The epipolar geometry is computed to remove outliers among the matched points and to recover the geometric relation between the cameras. The stereo images are then rectified and densely matched, and 3D points are estimated by linear triangulation. After registration of the multi-view range images with the iterative closest point (ICP) algorithm, the 3D model is constructed.

KEYWORDS: Underwater image, ICP algorithm, 3D model

I. INTRODUCTION
Generating a complete 3D model of an object has been a topic of much interest in recent computer vision and computer graphics research, and many computer vision techniques have been investigated for this purpose. Underwater 3D model generation is still a challenge because of unconventional imaging conditions such as the refractive index of water, light illumination, and uneven backgrounds. Presently there are two major approaches. The first is based on merging multi-view range images into a 3D model [1-2]. The second is based on processing photographic images with a volumetric reconstruction technique, such as voxel coloring or shape-from-silhouettes [3]. Multi-view 3D modeling has been done with many active or passive ranging techniques. Laser range imaging and structured light are the most common active techniques; they project special light patterns onto the surface of a real object and measure the depth to the surface by simple triangulation [4, 7]. Although active methods are fast and accurate, they are more expensive. Relatively less research has been done using passive techniques such as stereo image analysis, mainly because of the inherent problems of stereo matching (e.g., mismatches and occlusion). Underwater images are of poor quality because they suffer from strong attenuation and scattering of light. To overcome these problems, this work first enhances the underwater images and then applies a passive method to build the 3D model. The enhancement step reduces the effects of scattering and attenuation and improves the contrast of the images. To remove mismatches between each pair of stereo images, the methodology computes the epipolar geometry, and it performs dense matching after rectification to obtain more features. Multi-view range images are obtained using stereo cameras and a turntable. The developed computer vision system uses two inexpensive still cameras to capture stereo images of an object; the cameras are calibrated by a projective calibration technique. Multi-view range images are obtained by changing the viewing direction to the object: a turntable stage rotates the object so that multiple range images can be acquired.
Multiple range images are then registered and integrated into a single 3D model. To register the range images automatically, we employ the Iterative Closest Point (ICP) algorithm, and the registered views are integrated into a single mesh model using a volumetric integration technique.
Error analysis on real objects shows the accuracy of our 3D model reconstruction. Section 2 presents the problems of imaging in underwater conditions and our solution. Section 3 presents the range image acquisition methodology, Section 4 presents the 3D modeling technique for merging multi-view range images, and Section 5 concludes the paper.

II. PROBLEMS IN UNDERWATER CONDITIONS & SOLUTION
To capture images in underwater conditions, two underwater cameras with lights enabled were mounted on a stand. Underwater imaging faces the major problem of light attenuation, which limits the visibility distance and degrades image quality through blurring and loss of structure in the regions of interest. The developed method therefore applies an efficient image enhancement algorithm, implemented in Matlab, comprising three main steps:
Homomorphic filtering: the homomorphic filter simultaneously increases the contrast and normalizes the brightness across the image.
Contrast limited adaptive histogram equalization (CLAHE): histogram equalization is used to enhance the contrast of the image.
Adaptive noise-removal filtering: a Wiener filter is applied to remove the noise produced by the equalization step.
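The three enhancement steps can be sketched as follows in Python with OpenCV and SciPy. The authors implemented their version in Matlab, so this is only an illustrative approximation of the pipeline described above; the filter parameters (high-pass cutoff, gains, CLAHE clip limit, Wiener window size) are assumptions, not values reported in the paper.

```python
import cv2
import numpy as np
from scipy.signal import wiener

def homomorphic_filter(gray, cutoff=30.0, gain_low=0.5, gain_high=1.5):
    # Work in the log domain so illumination (low frequency) and reflectance
    # (high frequency) separate additively, then apply a high-emphasis filter.
    log_img = np.log1p(gray.astype(np.float64))
    rows, cols = gray.shape
    u = np.fft.fftfreq(rows)[:, None]
    v = np.fft.fftfreq(cols)[None, :]
    radius2 = u ** 2 + v ** 2
    # Damp low frequencies (gain_low) and boost high frequencies (gain_high).
    H = gain_low + (gain_high - gain_low) * (1.0 - np.exp(-radius2 * cutoff ** 2))
    filtered = np.fft.ifft2(np.fft.fft2(log_img) * H).real
    out = np.expm1(filtered)
    return cv2.normalize(out, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def enhance_underwater(gray):
    # Step 1: homomorphic filtering (brightness normalisation + contrast).
    hf = homomorphic_filter(gray)
    # Step 2: CLAHE for local contrast enhancement.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    eq = clahe.apply(hf)
    # Step 3: Wiener filtering to suppress noise amplified by equalisation.
    den = wiener(eq.astype(np.float64), mysize=5)
    return np.clip(den, 0, 255).astype(np.uint8)
```

The same three stages can be applied channel-wise or on the luminance channel of a colour image; the sketch above operates on a single grayscale image for simplicity.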
III. RANGE IMAGE ACQUISITION AND CALIBRATION
We employ a projective camera model to calibrate our stereo camera (MINI MC-1). Calibration of the projective camera model can be considered as the estimation of a projective transformation matrix from the world coordinate system (WCS) to the camera coordinate system (CCS). We employ a turntable to take range images while the stereo cameras remain fixed. We set up an aquarium to take images in underwater conditions, mount the MC-mini underwater cameras on a stable stand, and place the model on the turntable. The lab setup for the experiment is shown in Fig. 2. Because our system relies on camera calibration, we use the Tsai stereo camera calibration model [7] and an 8x9 checkerboard (shown in Fig. 1) to calibrate our stereo cameras.

Fig. 1: Checkerboard for camera calibration

The calibration process yields the internal camera parameters (K1 and K2). These intrinsic parameters are needed for metric 3D reconstruction, so that the approximate dimensions of the object can be recovered.

Fig. 2: Lab setup for the experimentation

IV. 3-D MODELING METHODOLOGY
This section gives a complete overview of the developed system. The methodology is shown in Fig. 3.

Fig. 3: Developed 3-D modeling methodology

4.1 Extraction of 2D feature points and correspondence matching
The work deals with large-scale underwater scenes in which illumination changes frequently, so a set of stable features that remain useful for the later stage of estimating 3D points is needed. Therefore a feature-based approach, the Scale Invariant Feature Transform (SIFT) as provided by the OpenCV library, is used in this work; images are represented by a set of SIFT features as shown in Fig. 3. Although newer techniques can return faster or more efficient results, the developed method chooses SIFT because of its invariance to image translation and scaling and its partial invariance to illumination changes. The keypoints obtained from SIFT are compared between every consecutive pair of images, and the matching points are used to compute the epipolar geometry between the cameras; the epipolar geometry is then used to discard false matches. Feature-based approaches look for features that are robust under changes of viewpoint, illumination, and occlusion; depending on the method, the features used can be edge elements, corners, line segments, or gradients.

4.2 Computation of epipolar geometry
The epipolar geometry provides a constraint that reduces the complexity of correspondence matching: instead of searching a whole image or region for a matching element, we only have to search along a line. Even when matches have already been found by other methods, the epipolar geometry can be applied to verify the correct matches and remove outliers. In this work the epipolar geometry is used for two purposes: (a) to remove false matches from SIFT matching, and (b) to recover the geometric transformation between the two cameras through the computation of the fundamental matrix.

4.3 Fundamental matrix estimation
To estimate the fundamental matrix F, Random Sample Consensus (RANSAC) is used; the OpenCV library provides functions to estimate the fundamental matrix with either LMedS or RANSAC. Each correspondence between image points x = (u, v, 1)^T and x' = (u', v', 1)^T must satisfy the epipolar constraint x'^T F x = 0. Stacking this constraint for all matched points gives the linear system

Uf = 0                                                      (1)

where

f = (F11, F12, F13, F21, F22, F23, F31, F32, F33)^T         (2)

and each row of U has the form

(u'u, u'v, u', v'u, v'v, v', u, v, 1).                      (3)
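As a concrete illustration of Sections 4.1-4.3, the following Python/OpenCV sketch detects SIFT keypoints, matches them, and uses RANSAC-based fundamental-matrix estimation to keep only matches consistent with the epipolar geometry. The ratio-test threshold and RANSAC parameters are illustrative assumptions, not values reported in the paper.

```python
import cv2
import numpy as np

def match_and_filter(img_left, img_right):
    """Detect SIFT keypoints, match them, and keep only matches consistent
    with the epipolar geometry (fundamental matrix estimated by RANSAC)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_left, None)
    kp2, des2 = sift.detectAndCompute(img_right, None)

    # Nearest-neighbour matching with Lowe's ratio test to prune ambiguous matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    raw = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in raw if m.distance < 0.75 * n.distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Estimate F with RANSAC; the inlier mask discards matches that violate
    # the epipolar constraint x2^T F x1 = 0 beyond the pixel threshold.
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    inliers = mask.ravel() == 1
    return F, pts1[inliers], pts2[inliers]
```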
4.4 Rectification & dense matching
Both stereo pairs are rectified. Rectification transforms a stereo image pair so that the epipolar lines become horizontal, using the algorithm presented in [Isgrò, 1999]; this makes the subsequent dense matching easier. Since the developed system reconstructs the 3D structure of the object from multiple views, a larger number of feature points yields a more accurate 3D structure, so dense matching is employed to obtain more correspondences. The disparity in a rectified pair also carries depth information: far objects have near-zero disparity, whereas the closest objects have the maximum disparity. Figure 4 shows the corresponding matched features after outliers have been removed.

Fig. 4: Corresponding matches without outliers

After the matching image points have been found by dense matching, the next step is to compute the corresponding 3D object points. The method of finding the position of a third point from the known geometry of two other reference points is called triangulation. Since two matching points are the projected images of the same 3D object point, that 3D point is the intersection of the two optical rays passing through the two camera centres and the two matching image points. The matching points are converted into a metric representation using the intrinsic camera parameters obtained from calibration; by projecting the points into 3D space and intersecting the visual rays, the locations of the object points are estimated. After removing outliers, the final result is a 3D point cloud that can be interpolated to construct the 3D model of the object.

4.5 Outlier removal
Once the set of 3D points has been computed, the final step is to remove isolated points, i.e. points with fewer than two neighbours, where a point is considered a neighbour of another if it lies within a sphere of a given radius centred at that point. This final process effectively detects any remaining outliers, because outliers generally produce isolated 3D points, as shown in Fig. 5.

Fig. 5: Partial 3-D reconstruction from two images

We set a threshold to remove these 3D outlier points: if a point has no neighbour within that threshold, it is considered an outlier and is removed from the 3D point set; otherwise the 3D model would not be accurate. The remaining 3D points are stored as a partial reconstruction of the surface.
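A minimal sketch of the triangulation and isolated-point filtering steps described above is given below, assuming the intrinsics K1 and K2 and the relative pose (R, t) between the cameras are already available from calibration or recovered from the epipolar geometry. The neighbourhood radius and minimum neighbour count are illustrative, since the paper does not report the threshold it uses.

```python
import cv2
import numpy as np
from scipy.spatial import cKDTree

def triangulate_and_clean(K1, K2, R, t, pts1, pts2, radius=0.01, min_neighbors=2):
    """Triangulate matched image points into 3D and drop isolated points,
    i.e. points with fewer than `min_neighbors` neighbours inside a sphere
    of the given radius."""
    pts1 = np.asarray(pts1, dtype=np.float64)
    pts2 = np.asarray(pts2, dtype=np.float64)

    # Projection matrices: the first camera is taken as the world origin.
    P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K2 @ np.hstack([R, t.reshape(3, 1)])

    # Linear triangulation of each correspondence (homogeneous output, 4xN).
    X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    X = (X_h[:3] / X_h[3]).T            # Nx3 Euclidean points

    # Outlier removal: keep points with enough neighbours within `radius`
    # (query_ball_point includes the point itself, hence the -1).
    tree = cKDTree(X)
    counts = np.array([len(tree.query_ball_point(p, radius)) - 1 for p in X])
    return X[counts >= min_neighbors]
```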
In this manner the 3D points are computed for the remaining views of the object, and the Iterative Closest Point (ICP) algorithm registers them to a common coordinate system. The registered point clouds are then interpolated to construct the surface, and once the surface of the object is obtained, the 3D model can be texture mapped so that it looks like the actual object.

4.6 Integration of all the 3-D points
Using the above methodology, all the partial 3D structures of the object are obtained and integrated into a common coordinate system using the Iterative Closest Point (ICP) algorithm, giving a proper 3D point cloud covering all the views. The points of this cloud are then interpolated and a surface is fitted over them to obtain the 3D model of the object, as shown in Fig. 6.

Fig. 6: 3-D model with surface from 4 views
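The paper uses ICP to bring the partial reconstructions into a common coordinate system but does not give implementation details. The following is a minimal point-to-point ICP sketch (nearest-neighbour correspondences plus the SVD/Kabsch rigid-transform solution), which assumes the views overlap substantially and are roughly pre-aligned; a practical system would typically rely on a tested library implementation instead.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=30):
    """Minimal point-to-point ICP: iteratively pair each source point with its
    nearest target point and solve for the rigid transform by SVD (Kabsch)."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    for _ in range(iterations):
        # 1. Closest-point correspondences against the fixed target cloud.
        _, idx = tree.query(src)
        matched = target[idx]
        # 2. Best rigid transform between the centred point sets.
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:            # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        # 3. Apply the increment and accumulate the total transform.
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, src
```

Each partial point cloud would be registered against the growing common model in turn, after which the merged cloud can be interpolated into a surface as described above.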
4.7 Texture mapping
After the 3D model of the object has been obtained, the texture of the original object is mapped onto it so that the model looks the same as the real object. The result of the texture-mapped 3D model is shown in Fig. 7.

Fig. 7: 3-D model with texture mapping

V. CONCLUSION
The system consists of an inexpensive underwater stereo camera, a turntable, and a personal computer. The developed autonomous system for building 3D models of underwater objects is easy to use and robust to illumination changes, since it extracts SIFT features rather than relying on raw image intensity values. The images are enhanced, feature points are extracted and matched between the stereo pairs, and the final 3D reconstruction is optimized and improved in a post-processing stage. The geometric 3D reconstruction obtained with natural images collected during the experiments turned out to be very efficient and promising, and the estimated dimensions of the object are nearly accurate.

ACKNOWLEDGMENT
I owe my sincere gratitude to the Naval Research Board, New Delhi, for the support, guidance, and suggestions that helped us greatly in writing this paper.

REFERENCES
[1] Oscar Pizarro, Ryan Eustice and Hanumant Singh, "Large Area 3D Reconstructions from Underwater Surveys".
[2] Soon-Yong Park and Murali Subbarao, "A Multiview 3D Modeling System Based on Stereo Vision Techniques".
[3] Stéphane Bazeille, Isabelle Quidu, Luc Jaulin and Jean-Philippe Malkasse, "Automatic Underwater Image Pre-Processing".
[4] Rafael Garcia, Tudor Nicosevici and Xevi Cufí, "On the Way to Solve Lighting Problems in Underwater Imaging".
[5] S. M. Christie and F. Kvasnik, "Contrast Enhancement of Underwater Images with Coherent Optical Image Processors".
[6] Kashif Iqbal, Rosalina Abdul Salam, Azam Osman and Abdullah Zawawi Talib, "Underwater Image Enhancement Using an Integrated Colour Model".
[7] Roger Y. Tsai, "A Versatile Camera Calibration Technique for High-Accuracy 3D Machine Vision Metrology Using Off-the-Shelf TV Cameras and Lenses".
[8] Qurban Memon and Sohaib Khan, "Camera Calibration and Three-Dimensional World Reconstruction of Stereo Vision Using Neural Networks".
[9] Matthew Bryant, David Wettergreen, Samer Abdallah and Alexander Zelinsky, "Robust Camera Calibration for an Autonomous Underwater Vehicle".
[10] Rob Hess, Oregon State University.
[11] J.-Y. Bouguet, "Camera Calibration Toolbox for Matlab".
[12] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press, Cambridge, UK / New York.
[13] G. Chou, "Large Scale 3D Reconstruction: A Triangular Based Approach".
[14] D. G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints", 2004.

Authors
N. Satish Kumar is a research scholar in the CSE department, R. V. College of Engineering, Bangalore. He received his Master's degree (M.Tech) from VTU (R.V.C.E). His research areas are digital image processing and parallel programming.
Mukundappa B L received a B.Sc degree with Physics, Chemistry and Mathematics as major subjects from Mysore University, and M.Sc degrees in Chemistry and Computer Science. He works as Principal and Associate Professor at University Science College, Tumkur, and has 25 years of teaching experience.
Ramakanth Kumar P is Head of the ISE department, R. V. College of Engineering, Bangalore. He received his PhD from Mangalore University. His research areas are digital image processing, data mining, pattern matching, and natural language processing.