Development of an automated Red Light Violation Detection System (RLVDS) for Indian vehicles
Satadal Saha 1, Subhadip Basu 2*, Mita Nasipuri 2, Dipak Kumar Basu 2#

1 CSE Department, MCKV Institute of Engineering, Howrah, India
2 CSE Department, Jadavpur University, Kolkata, India
# AICTE Emeritus Fellow
* Corresponding author. [email protected]

Abstract

Integrated Traffic Management Systems (ITMS) are now implemented in different cities in India, primarily to address concerns of road safety and security. An automated Red Light Violation Detection System (RLVDS) is an integral part of the ITMS. In the present work we have designed and developed a complete system for automatically generating the list of all stop-line violating vehicle images from video snapshots of road-side surveillance cameras. The system first generates adaptive background images for each camera view, subtracts captured images from the corresponding background images and analyses potential occlusions over the stop-line during a red traffic signal. In round-the-clock operation in a real-life test environment, the developed system successfully tracked 92% of the images of vehicles violating the stop-line on a red traffic signal.

1. Introduction

Integrated automated traffic management systems are now being implemented across major cities in India. The primary objective of such systems is to track down vehicles that violate traffic regulations using surveillance cameras and intelligent image-analysis software. The present work on the development of an automated Red Light Violation Detection System (RLVDS) may be considered a key module of the overall Integrated Traffic Management System (ITMS). The objective of this research is to develop an automated system that localizes vehicles violating the red traffic signal and captures the images of such vehicles together with date, time and location information.

Nowadays, traffic at most road crossings is controlled by an automatic signaling system. In general, red, yellow and green lights are used to convey three types of signals for traffic control: a green light signals a stopped vehicle to start, a yellow light signals a moving vehicle to slow down, and a red light signals the vehicle to stop. At any road crossing, when the red light is shown to a lane, it conveys to the vehicles approaching the crossing that they must stop immediately. To make the procedure more systematic and convenient, a uniform thick white line, commonly known as the stop-line, is drawn across each lane before the crossing. Every vehicle approaching the crossing must stop before this line when the red signal is on; even if the front wheel of the vehicle only partially touches the stop-line, the vehicle is considered a red light violator. The stop-line is usually drawn in the plane of the road, perpendicular to the direction of traffic flow.

The task of detecting stop-line violating vehicles (on a red traffic signal) is currently done manually by the traffic police. In a busy schedule with thousands of vehicles passing through a crossing, it may not always be possible to manually compile a complete list of all violating vehicles. This has been the primary motivation behind the work presented in this paper.
We have installed multiple cameras by the roadside at a certain height to capture images automatically at regular intervals of time, processed all the images captured by the cameras and generated a list of all violating vehicles. A well-designed interface has been developed to obtain the full list of images, automatically updated at regular intervals.

Not much work has been done specifically on automatic localization of red light violating vehicles. Most existing automatic traffic monitoring systems [1-7] concentrate on localization of license plate regions and subsequent interpretation of the characters therein. Automatic License Plate Recognition (ALPR) techniques are predominantly designed for developed countries with standardized vehicle license plates and strictly monitored integrated traffic management systems. In the Indian scenario, however, with minimal traffic monitoring, these systems fail miserably. In general, most of the work on ALPR systems [1-4] applies edge-based features for localizing standardized license plate regions. Some of these works [2, 5, 6] capture the
image of a vehicle carefully placed in front of a camera, occupying the full view and yielding a clear image of the license plate. Unfortunately, in practical scenarios there may be multiple vehicles of different types in a single scene, along with partial occlusions of the vehicles and of the license plates by other objects, where the above methods do not work. In one of the earlier works [1], a rank filter is used for localization of license plate regions, giving unsatisfactory results for skewed license plates. An analysis of Swedish license plates is done in [2] using vertical edge detection followed by binarisation; it does not give good results for non-uniformly illuminated plates. An exhaustive study of plate recognition for different European countries is presented in [3]. In Greece the license plate uses a shining plate, and the bright white background is used as a characteristic of the license plate in [4]. Spanish license plates are recognized in [5] using the Sobel edge detection operator, together with the aspect ratio and the distance of the plate from the centre of the image as characteristics, but the method is constrained to single-line license plates. During the localization phase, the positions of the characters are used in [6], which assumes that no significant edge lies near the license plate and that the characters are disjoint.

In the course of this work we have collected a dataset for the experiment, as discussed in section 2. Section 3 describes the procedure of the experiment and includes subsections 3.1 to 3.3, explaining the methods for adaptive background image generation, background subtraction and identification of the stop-line, and stop-line violation detection, respectively. Section 4 presents and discusses the experimental results. Section 5 gives the overall conclusion of the present work.

2. Collection of dataset

The dataset for the current experiment was collected as part of a demonstration project on automated red light violation detection for a Government traffic monitoring authority of a major metropolitan city in India. Three surveillance cameras were installed at an important road crossing in Kolkata at a height of around ten metres from the road surface. Two of these were static cameras with manually adjustable zoom and focus, fixed at a given orientation with respect to the road surface. The third was an auto-focus pan-tilt-zoom (PTZ) camera, which was used to pan the entire road width at six preset pan angles automatically at an interval of 3 seconds. All the surveillance cameras were synchronized with the traffic signaling system such that a camera captures video snapshots only when the traffic signal turns red. All the cameras were focused on the stop-line to capture frontal images of vehicles violating the stop-line on a red traffic signal. The complete image dataset comprises more than 15,000 surveillance video snapshots, captured over several days and nights in an unconstrained environment with varying outdoor lighting conditions, pollution levels, wind turbulence and camera vibrations. 24-bit colour bitmaps were captured through CCD cameras with a frame rate of 25 fps and a resolution of 704 x 576 pixels. Not all of these video snapshots contain vehicle images with a clear view of the license plate regions. For the current experiment, we have identified 500 images with complete license plate regions appearing in different orientations in the image frame.
3. Present work

Proper installation of the cameras and interfacing with them is crucial for the successful and satisfactory completion of the rest of the work. Since a camera cannot be placed in the middle of a road at a very low level (as already mentioned, it is placed at the side of the road at a convenient height), the stop-line normally makes an angle with the projection plane of the camera. Our present work is based on scanning the image along the stop-line to detect whether there is any violation, so the value of this angle must be known in advance. As we have used three cameras (two fixed cameras and one PTZ camera) to collect images of vehicles coming through different lanes, the angles made by the stop-line are different for the three sets of images. For CAM1 and CAM2 the angle remains fixed, although it differs between the two firmly installed cameras, which allows us to correct the angle internally while processing the images captured with them. The problem is more complicated for the PTZ camera, which pans across the road to focus on its different lanes and captures images for each lane. In this case the same stop-line makes a different angle for each facing of the camera, so synchronization between the camera direction and the angle of correction is required: for each discrete pan angle of the camera (we have used six pan angles to cover the whole width of the road), the angle made by the stop-line in the image must be known in advance. A sketch of such a calibration table is given below.
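To make the synchronization above concrete, the following is a minimal sketch, not part of the original system, of how a per-camera, per-pan-preset calibration table could hold the stop-line skew angle and be used to generate the pixel coordinates of a line drawn parallel to the stop-line. All names, angles and coordinates here are hypothetical placeholders, not values measured in our installation.

```python
import math

# Hypothetical per-camera, per-pan-preset calibration: the stop-line skew angle
# (degrees) and a reference end point of the stop-line in the image.
# All values below are illustrative placeholders.
STOPLINE_CALIBRATION = {
    ("CAM1", 0): {"angle_deg": 12.0, "x0": 40, "y0": 430, "length": 600},
    ("CAM2", 0): {"angle_deg": -9.0, "x0": 55, "y0": 445, "length": 600},
    ("PTZ", 1):  {"angle_deg": 4.0,  "x0": 30, "y0": 450, "length": 640},
    ("PTZ", 2):  {"angle_deg": 7.5,  "x0": 30, "y0": 455, "length": 640},
    # ... one entry for each of the remaining PTZ pan presets
}

def stopline_pixels(camera_id, pan_preset, offset=0):
    """Pixel coordinates of one hypothetical line drawn parallel to the
    stop-line, shifted vertically by `offset` pixels from the stop-line."""
    cal = STOPLINE_CALIBRATION[(camera_id, pan_preset)]
    slope = math.tan(math.radians(cal["angle_deg"]))
    return [(cal["x0"] + dx, int(round(cal["y0"] + offset + slope * dx)))
            for dx in range(cal["length"])]

def stopline_scan_lines(camera_id, pan_preset, n_lines=5, gap=3):
    """Five parallel scan lines over the stop-line, three pixels apart,
    as used later for occlusion detection (see section 3.3 and Fig. 4)."""
    return [stopline_pixels(camera_id, pan_preset, offset=i * gap)
            for i in range(n_lines)]
```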
The scheme for detecting red light violating vehicles is based on a background subtraction technique. We define the term background image as an image without any movable object in it; an image consisting of the road, pedestrian walkway, walls, trees, etc. may therefore be called a background image. If the background image is known in advance, the intrusion of any other object into the scene may be detected by subtracting the non-background image from the background image. Fig. 1 depicts the flow of the work.

Fig. 1. Flow chart of the scheme.

3.1. Adaptive background image generation

One key factor in the process is that the background changes automatically from morning to evening and from evening to night, due to the variation of sunlight and street light respectively. So if an image captured at night is subtracted from a background image of the morning in order to detect an object, an obviously wrong prediction of the object will result. There must therefore be some mechanism by which the change in background is adapted over time. We have adopted an adaptive technique for gradual modification of the background image. Starting from five initial background images, if for any successive image the difference with the background image is found to be very small (less than a predefined threshold value), then that image is considered a background image and is stored in the background image database. Whenever the background image database is modified, the average background image is regenerated by averaging the last five background images. This average image is used as the background image until the next time the background is recalculated. In this way, as soon as any new image is accepted as a background image, the average background image is recalculated and used for subsequent processing; thus the averaging of the backgrounds is itself adaptive in nature. A sketch of this update scheme is given at the end of this subsection. Fig. 2(a) and 2(b) show the mean background images of the two static cameras, and Fig. 2(c-h) shows the mean background images of the PTZ camera for the six different pan angles.

Fig. 2(a-h). Mean background images of the different surveillance cameras.
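The following is a minimal sketch of the adaptive background scheme described above, assuming grayscale frames arrive as NumPy arrays of identical size. The update threshold BG_UPDATE_TH is an assumed placeholder, since the text only states that the difference must be very small.

```python
import collections
import numpy as np

# Assumed threshold on the mean absolute gray-level difference below which a
# new frame is treated as another background image (placeholder value).
BG_UPDATE_TH = 10.0

_recent_backgrounds = collections.deque(maxlen=5)   # last five background images

def mean_background():
    """The current mean background image, averaged over the stored backgrounds."""
    return np.mean(np.stack(list(_recent_backgrounds)), axis=0)

def update_background(frame):
    """If `frame` (grayscale, float) differs only slightly from the current mean
    background, store it as a new background image; return the updated mean."""
    frame = frame.astype(np.float32)
    if len(_recent_backgrounds) < 5:            # still collecting the initial five
        _recent_backgrounds.append(frame)
        return mean_background()
    diff = float(np.mean(np.abs(frame - mean_background())))
    if diff < BG_UPDATE_TH:                     # no movable object detected
        _recent_backgrounds.append(frame)       # the oldest background is dropped
    return mean_background()
```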
3.2. Background subtraction and identification of stop-line

The three cameras send images at a regular interval of 3 seconds. These images are temporarily stored on the hard disk and a dynamic list of them is also prepared and stored on the disk. A process has been written to automatically read the list of images from a particular file and to read the images sequentially. Each new image taken from the image list is subtracted from the background image to determine whether it is another background image or a non-background image. Fig. 3(a) shows a sample video snapshot captured by the first static camera and Fig. 3(b) the corresponding subtracted image, generated by subtracting it from the mean background image shown in Fig. 2(a). In the difference image the gray values of all the pixels are added and the sum is divided by the number of pixels to obtain the mean gray value over the image. If the difference image yields a high mean gray value (greater than a predefined threshold D_th), it is considered a non-background image, i.e. an object may have intruded into the scene; otherwise it is considered a background image. A sketch of this decision is given below.

Fig. 3. (a) A sample vehicle image captured by one of the static cameras. (b) Corresponding result of subtracting the vehicle image from the mean background image shown in Fig. 2(a).
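The non-background decision described above can be sketched as follows, assuming a grayscale frame and a mean background image of equal size. D_th is only a parameter name mirroring the text; its experimental value is reported in section 4.

```python
import numpy as np

def difference_image(frame, mean_bg):
    """Absolute gray-level difference between a frame and the mean background."""
    return np.abs(frame.astype(np.float32) - mean_bg.astype(np.float32))

def is_non_background(frame, mean_bg, d_th):
    """True when the mean gray value of the difference image exceeds d_th,
    i.e. when an object may have intruded into the scene."""
    diff = difference_image(frame, mean_bg)
    return float(diff.mean()) > d_th, diff

# Example use (d_th = 70 is the experimental value reported in section 4):
# intruded, diff = is_non_background(frame, mean_background(), d_th=70)
```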
3.3. Stop-line violation detection

Any non-background image, such as an image containing a vehicle, when differenced with the background image produces black pixels throughout the image except where movable objects are present in the former image. So in the difference image the white stop-line will also become black if it is not occluded by the vehicle. To detect this occlusion we hypothetically draw five lines on the stop-line, parallel to its longer edges, with a vertical gap of three pixels between them, as shown in Fig. 4. If all the pixels along these lines are black in the difference image, it is decided that there is no occlusion; if any pixel is found to be non-black, it is decided that an occlusion has occurred.

One problem regarding the generation of non-black pixels is noise in the difference image. Noise can generate randomly located non-black pixels in the image, some of which may lie on the hypothetical lines as well, but this should not be treated as an occlusion. So what matters is the total number of continuous non-black pixels along the lines, not the presence of isolated non-black pixels. A second problem is that, although we expect a continuous set of non-black pixels when a vehicle occludes the stop-line, such occlusion may also be caused by one or more pedestrians crossing the road laterally in front of the stop-line. To solve this problem we compute the longest run of non-background pixels along each of the hypothetical lines and calculate the mean of these runs. We assume that a few persons can occlude the stop-line only at discrete points, whereas a vehicle (particularly a four-wheeler) will occlude the stop-line continuously over a larger distance. Thus, if the mean longest occlusion exceeds a predefined threshold L_th, it is considered a true occlusion made by a vehicle and the vehicle is considered a stop-line violating, i.e. red light violating, vehicle. A sketch of this test is given below.

Fig. 4. A sample violating vehicle image with potential occlusions on the stop-line (hypothetically marked with a band of red lines).
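A minimal sketch of the occlusion test described above, assuming the difference image from section 3.2 and the scan-line coordinates from the calibration sketch in section 3. The non-black gray-level cut-off nonblack_th is an assumed placeholder, while l_th mirrors the threshold reported in section 4.

```python
import numpy as np

def longest_run(gray_values, nonblack_th):
    """Length of the longest run of consecutive non-black pixels along a line."""
    best = run = 0
    for v in gray_values:
        run = run + 1 if v > nonblack_th else 0
        best = max(best, run)
    return best

def stopline_violated(diff_image, scan_lines, l_th, nonblack_th=30):
    """Declare a stop-line violation when the mean of the per-line longest runs
    of non-black pixels exceeds l_th. `scan_lines` is a list of (x, y) pixel
    coordinate lists, one per hypothetical line; `nonblack_th` is an assumed
    cut-off separating noise from true foreground pixels."""
    runs = [longest_run([diff_image[y, x] for (x, y) in line], nonblack_th)
            for line in scan_lines]
    return float(np.mean(runs)) > l_th

# Example use with the calibration sketch from section 3 and the experimental
# threshold L_th = 140 reported in section 4:
# lines = stopline_scan_lines("CAM1", 0)
# violated = stopline_violated(diff, lines, l_th=140)
```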
4. Results and discussion

As discussed in section 2, more than 15,000 video snapshots of resolution 704 x 576 pixels were captured through multiple surveillance cameras. For the current experiment, 500 images were selected for testing the technique presented in this paper. The experimental thresholds for computation of the difference image (D_th) and for estimation of true occlusion over the stop-line (L_th) were chosen as 70 and 140 respectively. We have developed a complete application software for RLVDS for identification of vehicle images violating the traffic signal. Fig. 5(a) shows a screenshot of the setup form of RLVDS, used to set the image path, select the camera, estimate the skew angle and tune the different experimental thresholds. Fig. 5(b) shows the main application screen with potential images of the vehicles violating the stop-line. Fig. 5(c) shows a sample screenshot of the violation-slip generated by the police department.

Fig. 5. Screenshots of the application software. (a) Setup screen with a list of all vehicle images. (b) Main application screen with a list of potentially violating vehicle images. (c) A sample violation-slip generated by the police department.

For computation of the accuracy of the current technique, we have considered two different categories of error: firstly, inaccurate tracking of non-violating vehicle images (in a traffic signal) and secondly, non-detection of actual violating vehicles. The overall observed error rate in this experiment is 8%. All vehicle images that could be tracked perfectly during possible traffic-signal violations were identified as true positive cases; as observed from the experimentation on the collected dataset of 500 vehicle image samples, the true positive accuracy is 92%. The major reasons behind erroneous tracking of vehicle images are occlusion of the stop-line by pedestrians, vehicles speeding over the stop-line, erratic changes in outdoor lighting conditions and possible vibrations of the surveillance cameras.

5. Conclusion

In our present work we have developed a complete application for automatically generating the list of all stop-line violating vehicles from a full list of captured images. The technique successfully identifies such vehicle images in 92% of cases. All the images used here are real-life images, and the high success rate indicates that the algorithm can readily be deployed commercially on a large scale at different crossings of the city, to generate a complete and centralized list of stop-line violating vehicles throughout the day and night with minimal manual intervention. The current work can also be extended to localize potential license plate regions in the images of violating vehicles and subsequently interpret the characters appearing on the license plates using an Optical Character Recognition module.

6. Acknowledgement

The authors are thankful to the CMATER and SRUVM projects, C.S.E. Department, Jadavpur University, for providing the necessary infrastructural facilities during the progress of the work. One of the authors, Mr. S. Saha, is thankful to the authorities of MCKV Institute of Engineering for kindly permitting him to carry out the research work.

7. References

[1] O. Martinsky, Algorithmic and Mathematical Principles of Automatic Number Plate Recognition System, B.Sc. Thesis, BRNO University of Technology.
[2] E. Bergenudd, Low-Cost Real-Time License Plate Recognition for a Vehicle PC, Master's Degree Project, KTH Electrical Engineering, Sweden, December.
[3] C. Garcia-Osorio, J.-F. Diez-Pastor, J. J. Rodriguez and J. Maudes, License Plate Number Recognition: New Heuristics and a Comparative Study of Classifiers, cibrg.org/documents/garcia08icinco.pdf.
[4] J. R. Parker and P. Federl, An Approach to License Plate Recognition, Computer Science Technical Report.
[5] H. Kawasnicka and B. Wawrzyniak, License Plate Localization and Recognition in Camera Pictures, AI-METH 2002, Poland, November 2002.
[6] C. N. Anagnostopoulos, I. Anagnostopoulos, V. Loumos and E. Kayafas, A License Plate Recognition Algorithm for Intelligent Transport Applications.
[7]
