Chapter 1: Machine Vision Systems & Image Processing

1.0 Introduction

While other sensors, such as proximity, touch, and force sensors, play a significant role in the improvement of intelligent systems, vision is recognized as the most powerful sensory capability, and is sometimes called the universal sensor. A machine vision process may be divided into six principal areas:

(1) Sensing. Sensing is the process that yields a visual image.
(2) Preprocessing. Preprocessing deals with techniques such as noise reduction and enhancement of details.
(3) Segmentation. Segmentation is the process that partitions an image into objects of interest.
(4) Description. Description deals with the computation of features (e.g., size, shape) suitable for differentiating one type of object from another.
(5) Recognition. Recognition is the process that identifies these objects (e.g., wrench, bolt, engine block).
(6) Interpretation. Interpretation assigns meaning to an ensemble of recognized objects.

1.1 Sensing (Image Acquisition)

1.1.1 Principles:

To obtain an image, two components are required: (1) a lens to collect and direct the light; (2) a visual sensor (e.g., a CCD) to receive the incoming light.

Lens: Each design form below illustrates a different lens combination and the performance associated with it. Note that you are not limited to just these lens combinations.

Common Lens Formulas: The thin-lens equation is

1/f = 1/d_o + 1/d_i

where f is the focal length of the lens, d_o is the object distance measured from the lens, and d_i is the image distance measured from the lens.

Magnification is defined as M = d_i / d_o = H_i / H_o, where H_i is the image height/size and H_o is the object height/size.
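The thin-lens relations above can be sketched as a pair of small Python helpers (a minimal illustration; the function names and the 50 mm / 200 mm example values are assumptions, not figures from the text):

```python
def image_distance(f, d_o):
    """Solve the thin-lens equation 1/f = 1/d_o + 1/d_i for d_i.

    f and d_o are in the same length unit (e.g., mm).
    Note: when d_o == f the rays emerge parallel and no finite
    image distance exists (this would divide by zero).
    """
    return 1.0 / (1.0 / f - 1.0 / d_o)


def magnification(d_o, d_i):
    """M = d_i / d_o = H_i / H_o."""
    return d_i / d_o


# Example: a 50 mm lens focused on an object 200 mm away
d_i = image_distance(50.0, 200.0)   # ~66.67 mm
M = magnification(200.0, d_i)       # ~0.333, image is 1/3 object size
```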
CCD Camera:

Charge-Coupled Devices (CCDs) are the most common camera sensors used in machine vision applications. A CCD camera uses a small, rectangular piece of silicon rather than a piece of film to receive incoming light. This special silicon wafer is a solid-state electronic component that has been micro-manufactured and segmented into an array of individual light-sensitive cells called photosites. Each photosite is one element of the whole picture that is formed; thus it is called a picture element, or pixel.

There are several standard CCD sensor sizes: 1/4", 1/3", 1/2", 2/3", and 1" (see Figure 1). All of these standards maintain a 4:3 (horizontal:vertical) aspect ratio, e.g., a 1/4" CCD (320 x 240 pixels).

Fig. 1: Standard CCD Sensor Sizes

The size of the sensor's active area is important in determining the system's field of view. Given a fixed primary magnification (determined by the lens), larger sensors yield greater FOVs. Another issue is the ability of the lens to support certain CCD chip sizes. If the chip is too large for the lens design, the resulting image may appear to fade and degrade toward the edges because of vignetting (extinction of rays that pass through the outer edges of the lens).

CCDs' popularity can be linked to their characteristically small size and light weight. Additionally, CCDs have an impressive dynamic range and yield a highly linear relationship between incoming energy and outgoing signal, making them ideal for metrology. The CCD silicon chip is an analog component, meaning that the pixel "values" are collected by means of sampling. The signal processor and encoder convert this information into an analog signal, which can be transferred to a monitor. In digital cameras, digitizing occurs as the signal is collected from the chip. Once digitized, processing and image enhancements can be done with little loss to the signal.
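The linear relationship between incoming energy and outgoing signal described above can be modeled with a toy response function (an illustrative sketch only; the `gain` and `full_well` parameters and 8-bit output range are assumptions, not specifications from the text):

```python
def ccd_response(photon_flux, gain=1.0, full_well=255):
    """Toy model of a CCD photosite: output is linear in incoming
    energy until the photosite saturates (clips at full_well).

    photon_flux -- incoming light energy in arbitrary units
    gain        -- linear conversion factor (assumed)
    full_well   -- saturation level; 255 mimics 8-bit digitization
    """
    signal = gain * photon_flux
    return min(int(round(signal)), full_well)


# Linear region: doubling the light doubles the pixel value.
# Beyond saturation, the value clips (this clipping is what
# appears as "blooming"/hot spots in overexposed images).
```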
Many digital CCD cameras, such as the Duncan Tech cameras, enable characteristics to be digitally controlled through an RS-232 port.

1.1.2 Fundamental Parameters of Vision Systems:

Field of View (FOV): The viewable area of the object under inspection. In other words, this is the portion of the object that fills the camera's sensor.

Working Distance (WD): The distance from the front of the lens to the object under inspection.

Resolution: The minimum feature size of the object that can be distinguished by the vision system.

Depth of Field (DOF): The maximum object depth that can be maintained entirely in focus. DOF is also the amount of object movement (in and out of best focus) allowable while maintaining a desired amount of focus.

Sensor Size: The size of a camera sensor's active area, typically specified in the horizontal dimension. This parameter is important in determining the proper lens magnification required to obtain a desired field of view.

Primary Magnification (PMAG): The PMAG of the lens is defined as the ratio
between the sensor size and the FOV. Although sensor size and field of view are fundamental parameters, it is important to realize that PMAG is not. The following formula calculates primary magnification:

PMAG = Sensor Size (mm) / Field of View (mm)

Fig. 2: Illustration of fundamental parameters of an imaging system

Fig. 3: Illustration of primary magnification and the relationship between sensor size and FOV
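The PMAG formula, and its rearrangement to predict FOV from a given sensor, can be written directly in code (a minimal sketch; the 6.4 mm value in the example is a nominal horizontal dimension often quoted for a 1/2" sensor and is an assumption, not a figure from the text):

```python
def pmag(sensor_size_mm, fov_mm):
    """PMAG = Sensor Size (mm) / Field of View (mm)."""
    return sensor_size_mm / fov_mm


def fov(sensor_size_mm, pmag_value):
    """Rearranged: FOV (mm) = Sensor Size (mm) / PMAG."""
    return sensor_size_mm / pmag_value


# Example: a sensor with a 6.4 mm horizontal active area imaging
# a 64 mm wide field of view gives PMAG = 0.1.  At that fixed
# PMAG, a larger sensor yields a proportionally larger FOV.
```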
1.1.3 Image Quality:

An imaging system should produce sufficient image quality to allow one to extract the desired information about the object from the image. Note that what may be adequate image quality for one application may prove inadequate in another. A variety of factors contribute to overall image quality, including resolution, image contrast, depth of field, perspective errors, and geometric errors (distortion).

Resolution: Resolution is a measurement of the imaging system's ability to reproduce object detail. A low-resolution image is usually blurry and lacking in detail.

Contrast:

Fig. 4: Contrast can be illustrated by a square wave

Components that affect image quality:

(1) Lens aperture (f/#): impacts the amount of light incident on the camera. Illumination should be increased as the lens aperture is closed (i.e., at higher f/#).
(2) High-power lenses usually require more illumination, since the smaller areas viewed reflect less light back into the lens.
(3) The camera's minimum sensitivity is also important in determining the minimum amount of light required in the system.
(4) CCD camera settings such as gain and shutter speed affect the sensor's sensitivity.
(5) Fiber-optic illumination usually involves an illuminator and a light guide, each of which should be integrated to optimize lighting at the object.

Desired image quality can typically be achieved by improving a system's illumination rather than by investing in higher-resolution detectors, imaging lenses, and software.

1.1.4 Illumination:

Why correct illumination is critical to an imaging system: Illumination plays an important role in a machine vision system since it often affects the complexity of the vision algorithms. Arbitrary lighting of the environment is often not acceptable because it can result in low-contrast images, specular reflections (hot spots, blooming), shadows, and extraneous details. A well-designed lighting system illuminates a scene so that the complexity
of the resulting image is minimized, while the information required for object detection and extraction is enhanced.

Consequences of poor illumination:

Low contrast: increases the complexity of vision algorithms.
Specular reflections (hot spots or blooming): can hide important image information.
Shadowing: can hide important image information, cause false edge detection, and result in inaccurate measurements.

Types of Illumination: Table 1 summarizes the types of illumination.

Table 1: Four Basic Illumination Schemes (Type / Description)

Diffuse Lighting: For objects characterized by smooth, regular surfaces. Applied where surface characteristics are important.

Backlighting: Produces a black-and-white (binary) image. Ideally suited for applications in which silhouettes of objects are sufficient for recognition.

Structured Lighting: Consists of projecting points, stripes, or grids onto the work surface. By establishing a known light pattern on the workspace, disturbances of this pattern indicate the presence of an object.
Directional Lighting: Useful for inspection of object surfaces. Defects on the surface, such as pits and scratches, can be detected by using a highly directed light beam.
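The binary silhouette image that backlighting yields (Table 1) amounts to a simple intensity threshold: the backlit background is bright and the object blocks the light. A minimal sketch in pure Python (the threshold value and list-of-lists image representation are assumptions for illustration):

```python
def silhouette(image, threshold=128):
    """Binarize a grayscale image from a backlit scene.

    image     -- 2-D list of pixel intensities in [0, 255]
    threshold -- assumed cutoff; pixels darker than it are the
                 object (0), brighter pixels are background (255)
    """
    return [[0 if px < threshold else 255 for px in row]
            for row in image]


# A dark object against a bright backlight becomes a clean
# black-on-white silhouette suitable for shape recognition.
binary = silhouette([[0, 200],
                     [250, 10]])
```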