Optimal design and critical analysis of a high-resolution video plenoptic demonstrator


Journal of Electronic Imaging 21(1) (Jan-Mar 2012)

Valter Drazic, Jean-Jacques Sacré, Arno Schubert, Jérôme Bertrand, Etienne Blondé
Technicolor Research & Innovation, Video Processing and Perception Lab., av. de Belle-Fontaine, Cesson-Sévigné Cedex, France
valter.drazic@technicolor.com

Abstract. A plenoptic camera is a natural multiview acquisition device, also capable of measuring distances by correlating a set of images acquired under different parallaxes. Its single lens and single sensor architecture has two downsides: limited resolution and limited depth sensitivity. As a first step, and in order to circumvent those shortcomings, we investigated how the basic design parameters of a plenoptic camera optimize both the resolution of each view and its depth-measuring capability. In a second step, we built a prototype based on a very high resolution Red One movie camera with an external plenoptic adapter and a relay lens. The prototype delivered five video views. The main limitation of our prototype is view crosstalk due to optical aberrations, which reduces the depth accuracy performance. We simulated some limiting optical aberrations and predicted their impact on the performance of the camera. In addition, we developed adjustment protocols based on a simple pattern, and analysis programs that investigated the view mapping and the amount of parallax crosstalk on the sensor on a per-pixel basis. The results of these developments enabled us to adjust the lenslet array with submicrometer precision and to mark the pixels of the sensor where the views do not register properly. © 2012 SPIE and IS&T. [DOI: 10.1117/1.JEI.21.1.011007]

Paper 79SSP received Apr. 22, 2011; revised manuscript received Aug. 3, 2011; accepted for publication Aug. 26, 2011; published online Feb. 24, 2012.

1 Introduction

With the advent of 3D, there is an increasing need for creating natural content in the additional dimension, as well as for storing, transmitting, decoding, and displaying it. These objectives are often achieved by using two recorded images instead of a single one, and at other times by using one recorded image plus one calculated image encoding the depth of each pixel of the first. The most popular method for passive depth measurement, which is also the most popular means of creating 3D content, is binocular stereo, 1 where two separate cameras are used. Common convergence and focus, calibration, and bulkiness make such a system very demanding. Moreover, creating multiview content for autostereoscopic displays necessitates either view interpolation between the captured left and right images or the use of even more cameras. There are single sensor and single lens solutions 2-7 that avoid complex convergence, focus, and calibrations. Among single lens solutions, the plenoptic camera 4-6 can natively capture multiview content and show it on autostereoscopic displays without any need for complex correspondence calculations and view interpolations.

Figure 1 is a schematic of the plenoptic camera principle with five views. The object space is shown on the left side with sampled point objects $x_i$. There is a main lens whose aperture is subdivided into five virtual apertures. Each of the virtual apertures tunnels the light rays emitted by the objects $x_i$ onto a corresponding lens of a lenslet array. A sharp image of the objects $x_i$ forms on the lenslets at $x_i^0$.

Fig. 1 Five views of the plenoptic camera principle.
Each of the lenslets separates rays of light based on their direction, creating a focused image of the aperture of the main lens on the array of pixels below the lens. The five views are then rendered by collecting the images $x_i^0$ for the different perspectives $\theta_i$. As the distance of the objects $x_i$ to the main lens is much bigger than the total span $[x_{-n}, x_{+n}]$, the object space is seen from the main lens point of view under different nonoverlapping parallaxes $\theta_i$. Finally, the field lens ensures telecentricity, so that every lens of the array makes an image of the entrance pupil of the main lens on exactly the five pixels behind the lens. This is also achieved if the lenses of the array are aperture matched to the main lens. 6

Two drawbacks of all single lens solutions for recording 3D, whether by providing multiple views or one view and a depth map, are the loss of resolution and the loss of sensitivity in depth sectioning. The loss of resolution in the plenoptic case is obvious from Fig. 1: each rendered view has a spatial resolution that equals the sensor's divided by the total number of captured views. Although this is a drawback, the plenoptic camera remains a very interesting instrument for capturing content for an autostereoscopic display, since the display's resolution is also scaled down by the number of displayable views. To favor the best sensor fill factor and be sure that none of its pixels are unused, it is supposed here that the system has perfect aperture matching between the lenses of the array and the main lens. This assumption means that the lenses of the array have a rectangular shape and that the entrance pupil of the main lens matches them and is also rectangular.
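With such an ideal registration, rendering the views amounts to de-interleaving the sensor columns, one column per view under each lenticule (the same column interleaving is used for the raw-frame tests of Sec. 3.2). A minimal sketch in Python, assuming a demosaiced frame and perfect 1:1 registration:

```python
import numpy as np

def demultiplex_views(plenoptic: np.ndarray, n: int = 5) -> list:
    """Split an ideally registered plenoptic frame into its n views.

    Assumes a vertical lenticular array imaged 1:1 onto the sensor, so that
    view v occupies every n-th pixel column (columns v, v + n, v + 2n, ...).
    """
    h, w = plenoptic.shape[:2]
    usable = w - (w % n)                 # drop any incomplete trailing lenslet
    return [plenoptic[:, v:usable:n] for v in range(n)]

# Example: a 1080 x 2048 frame yields five views of 1080 x 409 pixels.
views = demultiplex_views(np.zeros((1080, 2048)), n=5)
print([v.shape for v in views])
```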

It is the main purpose of this paper to report on the experience of using a high-resolution video camera combined with very few pixels per lenslet to obtain relatively high resolution multiview streams.

The second drawback, a supposedly bad depth resolution, has recently been addressed. 8 Previously, the opinion was that depth maps calculated from the multi-images of a plenoptic camera would lack precision because of the small baseline, which equals the size of the entrance pupil of the main lens for the extreme views, but we were not able to find quantifying figures. We will first recall the main results of Drazic, 8 which will be used to design a high-resolution video camera with the best possible compromise between a good spatial and a good depth resolution.

2 Optimization of the Camera for Best Depth Discrimination

In this section, we calculate the plenoptic system's depth discrimination, which is the smallest distance between two on-axis points that gives rise to a disparity of one pixel between the two views recorded through the upper and lower virtual apertures of the main lens. For simplicity, let us assume at this stage that the baseline for the most angularly separated views equals the size $A$ of the entrance pupil of the main lens. Let us further assume that we have a point object at the distance $d_0$ from the entrance pupil of the main lens. The lenslet array is at the distance $d$ from the exit pupil of the main lens. The lens has a focal length $f$. Figure 2(a), the top drawing, depicts this. For this situation, if we render the views, we will get only the central pixel ON in each of them. Let us consider Fig. 2(b), the bottom drawing: here the object has been moved by a distance $z$ off its initial position, and now there is a fan of rays spanning over the lenslet array by $\Delta u$.

Fig. 2 Ray binning at the lenslets as a function of the defocus.

From the lens maker equations (1) and (2), and from similar triangles in Eq. (3),

\[ \frac{1}{d_0} + \frac{1}{d} = \frac{1}{f}, \tag{1} \]

\[ \frac{1}{d_0 + z} + \frac{1}{d_x} = \frac{1}{f}, \tag{2} \]

\[ \frac{\Delta u}{d - d_x} = \frac{A}{d_x}, \tag{3} \]

where $d_x$ is the image distance of the displaced object, the unknown distance $d_0 + z$ is linked to the span $\Delta u$ by

\[ d_0 + z = \frac{d_0}{1 + \frac{\Delta u}{A}\left(1 - \frac{d_0}{f}\right)}. \tag{4} \]

As long as the span $\Delta u$ is smaller than the size of one lens of the array, only the central pixel receives light in each rendered view, and there is no disparity between any of the recorded views. It is only when the spot $\Delta u$ spills over onto a second lenslet that a second pixel in the rendered views receives light. Moreover, that second pixel registers in the views by shifting inwardly or outwardly with respect to the central pixel, depending on the sign and amplitude of $z$. Hence, the disparity reaches a measurable 1 pixel in the two views that are the most separated angularly (views $\theta_1$ and $\theta_5$ in Fig. 1, for instance) as soon as the spot $\Delta u$ gets just over the dimension of one lenslet: if $n$ is the number of pixels per lenslet and $p$ the pixel pitch, the span is then $\Delta u = np$. The span $\Delta u$ is a continuous quantity, but if the disparity is not calculated with subpixel accuracy, the distance $d_0 + z$ is not evaluated continuously but in steps, and so is $\Delta u$. The distance $\Delta z$ by which the point object must be moved from its initial position $d_0$ to generate a disparity of 1 pixel is the depth discrimination:

\[ d_0 + \Delta z = \frac{d_0}{1 + \frac{np}{A}\left(1 - \frac{d_0}{f}\right)}. \tag{5} \]
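Equation (5) is easy to evaluate numerically; the sketch below implements it directly (the parameter values in the example are illustrative assumptions, not the prototype's):

```python
def depth_discrimination(d0_mm: float, f_mm: float, A_mm: float,
                         n: int, p_mm: float) -> float:
    """One-pixel depth discrimination dz from Eq. (5); all lengths in mm."""
    du = n * p_mm                     # span of one lenslet, du = n * p
    return d0_mm / (1.0 + (du / A_mm) * (1.0 - d0_mm / f_mm)) - d0_mm

# Illustrative values (not the prototype's): d0 = 3 m, f = 50 mm, A = 25 mm
# (i.e. k = 2), and 7 pixels of 5-um pitch per lenslet.
print(f"dz = {depth_discrimination(3000.0, 50.0, 25.0, 7, 0.005):.0f} mm")
```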
Rearranging Eq. (5) in order to emphasize the relative depth discrimination leads to

\[ \frac{\Delta z}{d_0} = \frac{\frac{np}{A}\left(\frac{d_0}{f} - 1\right)}{1 - \frac{np}{A}\left(\frac{d_0}{f} - 1\right)}. \tag{6} \]

The quantity $np$ is measured in microns, whereas $A$ is measured in millimeters, so the ratio $np/A$ is small. Equation (6) can therefore be Taylor developed, and by stopping at the linear term we get a very simple expression for the depth discrimination:

\[ \Delta z \approx \frac{np\, d_0}{A}\left(\frac{d_0}{f} - 1\right). \tag{7} \]

Although it can be approximated, the aperture size of the entrance pupil $A$ is known only to the lens maker. A more straightforward parameter to use is the lens f-number, $k$. Another important lens characteristic is the focal length $f$, but with varying sensor sizes, the field angle of view is a more practical measure than the focal length.

We can eliminate $A$ and $f$ from Eq. (7) by introducing the field of view $\alpha$, the f-number $k$, and the size of the sensor $D$. Those parameters, as defined in Fig. 3, are related for small $\alpha$ by the expressions

\[ f \approx \frac{D}{\alpha} \quad \text{and} \quad A = \frac{f}{k} \approx \frac{D}{\alpha k}. \tag{8} \]

Fig. 3 Field angle relationship to the sensor's size and focal length.

Our final relative depth discrimination formula is then

\[ \frac{\Delta z}{d_0} \approx \frac{\alpha^2 d_0 L k}{D^2}, \tag{9} \]

where $L = np$ is the size of a lens from the array; $k$ is the main lens f-number; $\alpha$ is the field of view, in radians; $D$ is the size of the sensor, in millimeters; and $d_0$ is the distance of the object, also in millimeters. Good depth discrimination is achieved for small results of Eq. (9). As this equation is a parametric description of the depth-sectioning capability of a plenoptic camera, it is easy to investigate each parameter separately and optimize all the critical components of the system for the best possible depth discrimination.

2.1 Focusing Distance d_0

To get small values for Eq. (9), it is best to use the camera at small focusing distances $d_0$. Depth discrimination gets worse at larger distances, which is the case for any depth-measuring camera system.

2.2 f-Number of Main Lens k

Equation (9) also teaches that depth discrimination is better for small values of $k$. Small values of $k$ mean big lens apertures. This is physically intuitive, since opening up the lens aperture provides the camera with a larger baseline for a more precise triangulation. So, wherever possible, main lenses with $k = 2.0$ or 2.8 are the preferred choice. By stopping the lens down to 5.6, the relative depth discrimination worsens by a factor of 2 compared with an f-number of 2.8. Our analysis is based on first-order geometrical optics, but we will report in Sec. 4 how higher-order aberrations of the optics impact the depth resolution. It is known that high-aperture lenses also have more aberrations, hence $k = 2.8$ is a good compromise that maximizes the relative depth resolution and is still a reasonable system design value for lenses with limited aberrations.

2.3 Pixel Pitch p

In Eq. (9), $L = np$, where $p$ is the pixel pitch. To get small values for Eq. (9), $L$ should be a small value itself, and hence the depth discrimination is better for sensors with a small pixel pitch.

2.4 Lenslet Size L and Elemental Image Resolution n

The elemental image is the image of the lens entrance pupil made by a lenslet onto the sensor. There are as many elemental images on the sensor as there are lenslets in the array. The resolution of the elemental image, which is also the number of views, is $n$, since the analysis is restricted to a 1D case. In Fig. 1, the elemental image resolution is $n = 5$. In order to get better depth performance, $L$ needs to be small, and this means that the elemental resolution, or the number of pixels per lenslet, needs to be the smallest possible. Realizations of plenoptic cameras have used widely varying numbers of pixels. In their original paper, Adelson and Wang 4 reported using 5 × 5 pixels, and we have seen reports 6 of up to 14 × 14 pixels. The number of pixels per lenslet has been kept small in order to render views with sufficient resolution, as the number of views trades off against the resolution of each view. Equation (9) teaches that it is also in the interest of the camera's depth discrimination performance to keep that value small.

We will see in the next subsection that a plenoptic camera with good depth discrimination requires a big sensor area. Usually, a big sensor area implies a camera with one sensor, as opposed to those with three sensors and a color splitter. Typically, big sensors meant for producing color images out of a single device are provided with a Bayer color filter array. The color filters are arranged in quad pixels, two of them being green, one blue, and one red. If the number of pixels $n$ per lenslet is even, then by rendering the views we would get some views with just red and green pixels, while adjacent views would have just blue and green pixels, which is improper for rendering the missing colors. So the number $n$ must be odd.

There are other considerations to take into account; one of them is depicted in Fig. 4, which shows how the lenslet projects images of the pixels that are below one lenslet onto the exit pupil of the main lens. In this case, we have illustrated an elemental image with 3 horizontal and 5 vertical pixels. In order for the lens baseline $A$ to be optimally used, the images of the pixels projected onto the main lens need to fill the lens aperture to get the biggest possible value of $A$. A value of 5 pixels seems to be the best compromise for depth discrimination. However, autostereoscopic displays that are on the market today do not need vertical views, and a value of 3 pixels suits at the same time the need to fill the lens aperture and the current technology need for just horizontal views. In practice, the pixels at the edge of an elemental image are unusable due to light spillage from adjacent elemental images. Therefore, a value of 7 pixels seems to be the best compromise, leading to 3 fully usable views. This value is optimal regardless of future evolutions of sensor technology, as the parameter $L$ is an independent quantity in Eq. (9). As pixel pitch gets smaller, so also does $L$, and our recommended 7 pixel lenslet size will suit the purpose of achieving the best possible depth resolution.

Fig. 4 Sampling of the main lens aperture.

2.5 The Field Angle α and Sensor Size D

These two parameters are the most important because they influence depth discrimination in a quadratic manner. If the field angle is divided by a factor of 2, by replacing, for instance, a regular 50-mm lens on a 35-mm camera with a 100-mm lens, depth discrimination gets better by a factor of 4. Similarly, if the sensor dimension $D$ is doubled, depth discrimination is four times better. For an in-depth analysis, we set some of the parameters of Eq. (9) according to the previous subsections' discussions. The main lens f-number $k$ is 2.0, and the number of pixels per lenslet is chosen as 7, which leads to 3 usable views. We will calculate the depth discrimination for field angles of $\alpha = 45$ deg and 22.5 deg and for $d_0 = 3$ and 5 m, where $\alpha = 45$ deg is a standard configuration, as it is the field of view of a full-frame single lens reflex camera with a 50-mm lens. We will consider sensors of various sizes and pixel pitches, as listed in Table 1. The value used for $D$ in Eq. (9) is the diagonal of the sensor.

Table 1 Various big-area sensor characteristics: width (mm), height (mm), pixel pitch (µm), and diagonal (mm) for the Phantom 65, Arri D-21, Red One, Scarlet FF35, and Epic 645.

Figure 5 shows the relative depth discriminations $\Delta z / d_0$, whose values are shown as contours. The first graph shows the performance of a system for $\alpha = 45$ deg at $d_0 = 3$ m. As an example, the Scarlet Full Frame (FF) 35, which is a 35-mm sensor, would distinguish two objects separated by 10% of the distance; hence its depth resolution is 30 cm at a 3 m focusing distance. The second graph is calculated for the same parameters but with $d_0 = 5$ m.

Fig. 5 Relative depth discrimination, in % of $d_0$, shown as contours versus sensor size (mm) and pixel size (µm), for field angles of 45 and 22.5 deg, $d_0 = 3.0$ and 5.0 m, and $k = 2.0$; the Phantom 65, Arri D-21, Epic 645, Scarlet FF35, and Red One sensors are marked on the graphs.
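As a parametric description, Eq. (9) lends itself to quick what-if studies of the kind plotted in Fig. 5. A small sketch (the diagonals and pixel pitches below are assumed placeholder values, not the entries of Table 1):

```python
import numpy as np

def relative_depth_discrimination(alpha_rad, d0_mm, L_mm, k, D_mm):
    """Eq. (9): dz/d0 ~ alpha^2 * d0 * L * k / D^2."""
    return alpha_rad**2 * d0_mm * L_mm * k / D_mm**2

n, k = 7, 2.0   # pixels per lenslet and main-lens f-number, as in Sec. 2.5
for name, D_mm, pitch_um in [("33-mm diagonal", 33.0, 5.4),
                             ("54-mm diagonal", 54.0, 6.0)]:
    for alpha_deg in (45.0, 22.5):
        for d0_m in (3.0, 5.0):
            L_mm = n * pitch_um * 1e-3
            r = relative_depth_discrimination(
                np.deg2rad(alpha_deg), d0_m * 1000.0, L_mm, k, D_mm)
            print(f"{name:15s} a={alpha_deg:4.1f} deg d0={d0_m:.0f} m "
                  f"dz/d0={100.0 * r:5.1f}%")
```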

In the following two graphs, the field angle has been reduced to half its value; the Scarlet FF35 sensor then has a depth resolution of 2.5% at $d_0 = 3$ m, which is four times better than at $\alpha = 45$ deg. At $d_0 = 5$ m, it still has a depth resolution of 4%, which means 20 cm. A camera designed with this sensor, a main lens with an f-number of 2.0, and a lenslet array where each lenslet has the size of 7 pixels can distinguish two persons at 5 m separated in depth by the thickness of one body.

2.6 Conclusion on the Depth Resolution of a Plenoptic Camera

At this stage of the analysis, optical aberrations have been ignored; Eq. (9) is an upper performance criterion for the plenoptic camera used as a depth-measuring device. It quantifies the best precision that can be achieved by this system. It also gives a direction for the design of the whole system, including lenslet size, sensor type, and main lens type. We can also infer that when equipped with a big-area sensor, like the Epic 645, plenoptic cameras are effective, because it is possible to discriminate two depth levels separated by less than 2 cm at a 5-m distance and a field angle of 45 deg, which is quite precise. The lens aperture is hidden in Eq. (9) because it is embedded in the field angle and the sensor's size, but we can conclude that small apertures can be balanced by big sensor sizes and high pixel resolution for accurate depth mapping.

3 Design and Setup of a High-Definition Plenoptic Movie Camera

Our purpose is to build a high-definition plenoptic movie camera with good depth resolution, so we will use the previous analyses to dimension the system. From Sec. 2, the sensor shall be the biggest possible, with a small pixel pitch. The Red One, with an Advanced Photo System type C-sized sensor and a 4k spatial resolution, is a good candidate. In order to optimize depth sensitivity, we used a commercial main lens with an aperture of 2.8 and a horizontal field angle of 22.5 deg. To further have the best compromise between spatial and depth resolution, the lenslet width has the dimension of five pixels. This means that this system is able to stream five views with different parallaxes, each view rendered with full red/green/blue pixels. We used lenticular lenslets because vertical views are not needed by current autostereoscopic displays. Further, we wanted the system to be camera agnostic and built like an add-on to be put in front of any appropriate video camera. Of course the lenslets have to be changed, but at least the camera sensor does not need to be customized by adding a lenslet array on top of it, which would require a redesign of the sensor and customized fabrication.

3.1 Plenoptic Add-on to a Movie Camera

Figure 6 shows the general schematic of an add-on that can be used on any digital camera to transform it into a plenoptic system. This type of camera-agnostic system was used in the original work of Adelson and Wang. 4 The add-on consists of a main lens, which is in front of a telecentric lens, and a lenslet array. The plenoptic image is a real image formed just at the focal plane of the lenslets. The video camera is equipped with a high-quality macro relay lens, which makes a 1:1 image of the plenoptic aerial image onto the sensor. As shown in Fig. 6, each individual lenslet spreads the aperture of the main lens onto five virtual pixels, forming a real plenoptic image in the focal plane of the lenslets. This image is then relayed onto the sensor.

Fig. 6 Schematic of the optical setup of a plenoptic add-on to a movie camera.
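The aperture-matching condition mentioned above (each lenslet images the main-lens pupil onto exactly the five pixels behind it) ties the lenslet focal length to the pixel pitch and the main-lens f-number. A first-order sketch, where the 5.4-µm pitch is an assumption consistent with the 27-µm lenticule quoted in the next paragraphs:

```python
# First-order aperture matching: the lenslet f-number must equal the main
# lens' image-side f-number k so the pupil image spans exactly n pixels.
k = 2.8                      # main lens f-number (Sec. 3)
n, p_um = 5, 5.4             # views per lenslet and assumed pixel pitch (um)
L_um = n * p_um              # lenticule pitch -> 27 um
f_lenslet_um = k * L_um      # lenslet focal length for aperture matching
print(f"lenticule pitch {L_um:.0f} um, "
      f"lenslet focal length ~ {f_lenslet_um:.0f} um")
```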
The add-on mechanics developed especially for this demonstrator have the submicron precision adjustment required by the very small pixel pitch of the sensor. We have also developed specific adjustment procedures, explained in the following sections, to achieve the required accuracy. Figure 7 shows a photograph of the realization with the different elements of the plenoptic add-on. The main lens has an f-number of 2.8, and the captured field angle is 22.5 deg. The lenslet is a lenticular array, each lenticule having a pitch of 27 µm, the width of five pixels of the sensor. The most difficult part was to find a macro relay lens with an aperture of 2.8, because at that magnification every commercial photo lens has an effective aperture with an f-number of 4 or 5.6, and this is not in line with the best possible depth resolution.

Fig. 7 Photograph of the demonstrator with the different added parts.
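This behavior follows from the standard bellows-factor relation for a lens focused at magnification m; a one-line sketch of the relation (a thin-lens idealization, not a statement about any particular lens):

```python
def effective_f_number(k_nominal: float, magnification: float) -> float:
    """Bellows factor: k_eff = k_nominal * (1 + m) at magnification m."""
    return k_nominal * (1.0 + magnification)

print(effective_f_number(2.8, 1.0))   # a nominal f/2.8 behaves like f/5.6 at 1:1
```

Mounting two lenses head to tail and focusing them at infinity sidesteps this penalty, since each lens then works at its infinity conjugate; this is what motivates the choice described next.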

We decided to use two 50-mm f/1.8 optics mounted head to tail and focused at infinity. This also has the advantage of controlling the magnification very precisely by slightly defocusing one of the lenses.

3.2 Raw Frame Extraction

The only way to get access to individual pixel values was to record the shots onto a flash card. The video container stores JPEG (Joint Photographic Experts Group) 2000 compressed frames with a compression ratio of 9 (Redcode 36 format) or 12 (Redcode 28 format). The video stream could be played back in VirtualDub thanks to a plug-in provided by Red. Each individual pixel value could then be accessed by setting the low-pass 6 × 6 filter kernel coefficients of the plug-in to zero and assuming that the provided plug-in has interpolated missing red/green/blue values by a nearest-neighbor demosaicing algorithm. A JPEG 2000 compression ratio of 9 or 12 does not disturb the reconstruction of each view after a compression and decompression of a plenoptic frame. This was tested on synthetically generated plenoptic pictures, where we interleaved every fifth column of five still-picture shots taken at five different parallaxes.

3.3 Geometrical Adjustment by the Power-per-View Split Procedure

As the lenslets are cylindrical, Fig. 8 depicts the fact that when they are centered with respect to the pupil of the main lens, the central view number 3 receives more light power than views 2 and 4, which receive an equal amount of light power and in turn receive more power than views 1 and 5. The best, and a very simple, pattern that we can use for adjusting the whole system is a white object. A white object has no texture, hence it does not matter at what distance it is placed from the main lens. Figure 9 is the picture that is then recorded on the sensor. We take only the green pixels of the color filter array into account, because each column and each line of the sensor contains green pixels. As the plenoptic image is then separated into five views, each view has a power balance that should be as depicted in Fig. 10. When each of the five views is uniformly illuminated and the power balance is symmetrical with the indicated values, the system is perfectly aligned. Figure 11 shows an example of the lenslets being tilted with respect to the columns formed by the sensor's pixels. Each view is not uniformly illuminated, but the power balance is equal in each view.

Fig. 8 Correlation of the cylindrical lenslets with the aperture of the main lens.
Fig. 9 Plenoptic image of a perfectly aligned plenoptic system.
Fig. 10 Power balance per view in a perfectly aligned system.
Fig. 11 Effect of a slight lenslet tilt on each view and on the power split.

3.4 Magnification Adjustment

A 1:1 magnification is achieved when all lenslets are imaged on exactly five columns of pixels on the sensor. They do not need to be aligned with the columns. Once the magnification factor has been set, they may need a horizontal shift in order to have the optical center of each lenslet facing exactly one column of pixels. We used the columns of pixels of the sensor as a sampling grid for analyzing the 1D spatial frequency of the image of the lenslet array. When a white object is used as an input pattern for the camera, the lenslet array imposes a periodic pattern on the input light bundle. This pattern is imaged on the sensor by the relay lens. The relay lens focusing is adjusted until a magnification of 1 is achieved.

A magnification of 1 means that the periodicity of the pattern within the light bundle modulated by the lenslet array is exactly five pixels. The analysis has to be conducted on the green pixels only, because the red and blue Bayer color patterns miss every second column. The analysis tool used a fast Fourier transform (FFT) of the sum of two consecutive rows in order to get a green sample at each pixel location. A very high peak appears at the normalized frequency of the spatial period of the lenslet array relative to the spatial period of the pixels of the sensor. Hence, a peak at the normalized frequency of 1/5 (pixels) = 0.2 is adjusted for. The magnification adjustment is possible by focusing one of the head-to-tail mounted lenses in the relay. At the best possible adjustment, the lenslet array imaged onto the sensor had a period of five pixels within the measurement precision, which means a magnification factor of G ≈ 1. The system cannot be adjusted better than that, because of the total number of pixels per line, which is 1992 in this adjustment, padded to 2048 for the FFT, and a precision of 1 pixel for the determination of the normalized peak frequency. A 2k-wide captured image implies 2048/5 ≈ 410 lenslets, and hence the achieved absolute global pixel misalignment is 1 pixel over the whole 2k range, which is a small value, in particular as the registration shift is slowly drifting across the sensor area; its influence on the views is not significant. This value of 1 pixel shift is calculated globally by the FFT. Locally, the shift could be bigger at some places in the field, because of magnification distortion, lens aberration, lateral chroma, or other influences, so that local mis-registrations have to be analyzed more deeply, as in the next section.
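The periodicity check can be prototyped in a few lines (a toy version; the 2048-point padding follows the text, while the synthetic test row simply stands in for a fused green row of a real white-field shot):

```python
import numpy as np

def lenslet_frequency(green_row: np.ndarray, pad: int = 2048) -> float:
    """Normalized spatial frequency (cycles/pixel) of the lenslet pattern in
    one fused green row; 0.2 corresponds to a 5-pixel period, i.e. G = 1."""
    row = green_row - green_row.mean()            # suppress the DC component
    spectrum = np.abs(np.fft.rfft(row, n=pad))
    return int(np.argmax(spectrum)) / pad

# Synthetic check: a 5-pixel-periodic pattern over 1992 samples.
x = np.arange(1992)
print(lenslet_frequency(100.0 + 20.0 * np.cos(2 * np.pi * x / 5.0)))   # ~0.2
```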
4 Local View Crosstalk

The add-on is composed of three main parts, as shown in Fig. 12. The main lens images the scene on a microlens array. Then, a relay lens is used to conjugate the detector (here a charge-coupled device with a Bayer pattern filter) with the microlens focal plane. The field lens makes the object space telecentric. As a consequence, the main lens pupil is divided into five slices, each slice creating a new point of view of the scene. On the microlens focal plane, we get five interleaved images corresponding to the five views of the scene. These three elements are the core of the plenoptic effect. All of them are linked by several conditions. The main lens must have its image plane on the microlenses while at the same time having its pupil in the focal plane of the field lens, so that the object space of the microlenses becomes telecentric. The last condition links the microlens focal length with the main lens aperture: the main lens pupil diameter must be calculated so that its image on the microlens focal plane covers exactly five pixels of the detector. The key fact is that if we get five interleaved images of five views on the microlens focal plane, those views shall be conjugated onto the sensor exactly as they are. If this is not the case, view crosstalk will result.

Fig. 12 View mapping on the focal plane of the lenslet array.

4.1 Problem of View Overlap

The relay lens is one of the most important parts of the system. We have seen that the five different views generated by the microlenses on their focal plane are interleaved like this: 1 2 3 4 5 1 2 3 4 5 ... . Theoretically, the relay lens should be perfect to avoid superposition of the views; in particular, its point spread function (PSF) should be smaller than a pixel. If it is not, adjacent views will overlap. The effective views acquired by the sensor will then be a superposition of successive views. The extreme views 1 and 5 have very different parallaxes, and a superposition here will blend together two images with noncontinuous disparity variations. Figure 13 shows an example where view number 5 gets mixed with elements of view number 1. The red circles show the artifacts produced by the mixed views. Extreme views are in general not usable, so that our prototype designed for five views is in practice a three-view system. Here we should have gone for a system with seven views, of which five would have been usable.

Fig. 13 (Left) Schematic of the view overlaps. (Right) Real results on a shot with a superposition of elements of view 1 onto the image of view 5.
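The effect of a too-broad relay PSF can be reproduced with a one-dimensional toy model: blur an interleaved row and see how the view levels mix. The three-tap kernel below is a hypothetical stand-in for the measured three-pixel on-axis PSF:

```python
import numpy as np

def blur_row(interleaved: np.ndarray, psf: np.ndarray) -> np.ndarray:
    """Convolve one sensor row with a 1-D relay-lens PSF (normalized)."""
    psf = psf / psf.sum()
    return np.convolve(interleaved, psf, mode="same")

# Five constant view levels interleaved 1 2 3 4 5 1 2 3 4 5 ...
row = np.tile(np.array([10.0, 20.0, 30.0, 40.0, 50.0]), 40)
blurred = blur_row(row, psf=np.array([1.0, 2.0, 1.0]))   # ~3-pixel PSF
print(row[5:10], blurred[5:10])   # extreme views 1 and 5 are polluted most
```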

4.2 PSF and Vignetting Tradeoff for the Relay Lens

The relay lens consists of two head-to-tail mounted lenses focused at infinity. We could both model and measure the PSF of the system; both are in accordance. The lens is modeled as a variation of a double-Gauss lens. It can be seen from the raytracing in Fig. 14 that the three top fields are vignetted because of the telecentric nature of the setup. The conclusion is that the lenses shall be used at their maximum aperture of 1.8, where vignetting has been measured to be flat across the field. On the other hand, at full aperture the PSF is three pixels on-axis and up to eight pixels off-axis. The consequence is that in the center of the field one view will be a mix of three adjacent views, and at the edge of the field, since we cannot stop down the apertures because of the vignetting, the eight-pixel-broad PSF means that disparity will be lost between views.

Fig. 14 Model used to simulate the PSF and vignetting.

4.3 Accuracy of the Magnifying Power

In this section, we calculate the accuracy needed to achieve a 1:1 magnification all over the field and compare it with the one measured in Sec. 3.4. Considering the total number of pixels on the sensor, we calculate the accuracy of the magnifying power required so that there is less than 1 pixel of drift from the plenoptic to the relayed image. The original width of the sensor is 4520 pixels. When adding 1 pixel to the width, the new width is 4521 pixels, which corresponds to a magnifying power equal to 1.00022. When removing 1 pixel from the width, the new width is 4519 pixels, which corresponds to a magnifying power of 0.99978. In conclusion, the magnifying power has to be 1 ± 0.00022. From Sec. 3.4, the measured accuracy of G is six times greater than the needed one. The practical issues will be shown later on.

4.4 Coma, Spherical Aberration, Astigmatism

All these aberrations have to be minimized to avoid the problem of mixed views. Their only consequence is to spread the image spot, depending on the position in the field. For example, coma increases with the field. As a consequence, the different views are separable on-axis, while they are all blended off-axis. The result of inter-view disparity calculations is then a disparity map that has valid values at its center and zero values at its borders, whatever the difference of parallax. Other aberrations are not field dependent, and they only blend some views together. The maximum acceptable PSF is two pixels in diameter; in that case, the first and last views are unusable. In our practical case, we have an on-axis PSF of three pixels. As a consequence, views 1 and 5 are not usable, and views 2 through 4 show depth-related disparities that can be shown on 3D displays but cannot be used for depth mapping.

4.5 Distortion

Distortion leads to wrong results. To understand the problem, we have to consider one vertical stripe of rays that corresponds to one view (Fig. 15). Normally it should reach one column of pixels. But if the relay lens has some distortion, it will reach various columns, depending on the position of each ray. The result of that aberration will be a distortion of each view, leading to wrong view assignment.
One particular effect of the distortion will be strong color ringing on flat white images, which will be very visible in each view. To evaluate the maximum acceptable distortion, we have to calculate the distortion corresponding to one pixel. The distortion is maximal at the corners, which is where we study that aberration. The maximal allowable distortion corresponds to one pixel (5.4 µm). In our case the sensor is 24.4 mm × 13.7 mm. Applying the well-known radial dependency of distortion to those sizes, we can conclude that the maximal acceptable distortion is 0.022% for a 28-mm field. For the sake of comparison with real-world numbers, the distortion of one 50-mm f/1.8 lens used in the relay has been measured to be 0.2%.

Fig. 15 Effect of distortion: bad view registration and color ringing.
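Both constraints reduce to one-pixel budgets over the sensor width; a small numeric sketch (the 4520-pixel width is our assumption, chosen to be consistent with the quoted 24.4-mm width at a 5.4-µm pitch):

```python
width_px, pitch_mm = 4520, 5.4e-3           # assumed active sensor width
width_mm = width_px * pitch_mm              # -> 24.4 mm, as quoted in Sec. 4.5
mag_tol = 1.0 / width_px                    # one-pixel drift over the width
print(f"magnifying power: 1 +/- {mag_tol:.5f}")           # Sec. 4.3: 0.00022
max_dist_pct = pitch_mm / width_mm * 100.0  # one pixel as a % of the field
print(f"max acceptable distortion: {max_dist_pct:.3f} %")  # Sec. 4.5: 0.022 %
```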

4.6 Chromatic Aberrations

Lateral color can be considered a variation of the magnifying power with the wavelength. So it leads us back to Sec. 4.3 concerning the accuracy of the magnifying power.

4.7 Field Curvature

This is again an aberration that has to be eliminated, particularly since the distance between the microlenses and their focal plane is only about 50 µm. If the field curvature of the relay lens is about that dimension, problems can appear, like focusing on the microlenses at the center of the sensor and on their focal plane at the borders. The result will be an image that has areas that are plenoptic and areas that are not. Experimentally, the double 50-mm relay lens seems to have some field curvature, because it is quite hard to focus the whole field simultaneously. This may be due to the fact that the field curvatures of the two lenses add. In practice, this will also result in views that have disparities at the center of the field and lose them at the borders. Thus, 3D will flatten to 2D across the field.

4.8 Influence of the Main Lens

The aberrations of the main lens can produce errors of depth registration. But the problems here are different from the ones due to the relay lens. The relay lens blurs the image on the charge-coupled device, so that the calculated disparity cannot be related to actual depth, while the main lens can make some points look closer or farther away than they really are. The main lens aberrations disturb the depth/disparity relation when they spread a point over one entire microlens; this means that the PSF should be at most 25 µm in diameter (the diameter of a microlens). Larger aberrations would lead to bad depth registration. The aberration constraints on the main lens are thus less critical for the global working of the plenoptic camera.

4.9 Possible Problems Induced by the Field Lens

The field lens has been designed for the green wavelength. This leads again to different view registration for blue and red, and to color ringing on the views. The field lens should be made achromatic.

5 Plenopticity Check

In this section, we report a procedure to check, on a recorded picture, the plenoptic quality of the camera. Quantifying numbers indicate how big the area on the sensor is that receives the view it is supposed to register. This can be taken as a plenopticity test for the camera, where each pixel is marked as good or bad. Good means it works in conformity with the plenoptic principle, and bad means it does not, i.e., it does not record the view it is supposed to record. This check will not quantify the amount of superposition of views of the kind described in the previous subsections of Sec. 4, but it will show areas where, in spite of the crosstalk, the main contributor is still the view that is supposed to be registered there.

Once the plenoptic system is well adjusted, we can repeat the procedure from Sec. 3.3 and record a plenoptic image of a white field like the one shown in Fig. 9. In this image, we consider only the green pixels and fuse two successive rows in order to get continuous green pixels over one row. We can now search for local maxima over each row, and as soon as a local maximum is found, we mark that pixel of the plenoptic image by, for instance, setting its level to 255 and setting the levels of nonmaxima pixels to 0. Once this is done for each row, we can de-multiplex the plenoptic image into the five views; the result can be seen in Fig. 16.
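Before looking at the results, the marking step just described can be sketched as follows (a toy version operating on the fused green image; the 255/0 levels follow the text, and the central-column test assumes an idealized registration):

```python
import numpy as np

def plenopticity_mask(green: np.ndarray) -> np.ndarray:
    """Mark per-row local maxima of a white-field plenoptic image (level 255),
    setting all nonmaxima pixels to 0, as described above."""
    mask = np.zeros(green.shape, dtype=np.uint8)
    is_max = (green[:, 1:-1] > green[:, :-2]) & (green[:, 1:-1] >= green[:, 2:])
    mask[:, 1:-1][is_max] = 255
    return mask

def fraction_in_central_view(mask: np.ndarray, n: int = 5) -> float:
    """Fraction of maxima that fall on the central view, assuming lenslet
    centers sit on columns n // 2 modulo n."""
    cols = np.nonzero(mask)[1]
    return float(np.mean(cols % n == n // 2)) if cols.size else 0.0
```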
According to Fig. 8, when the system is perfectly adjusted, there is only one view, the central view number 3, that should receive the maximum amount of light. In an ideal plenoptic system, all the maxima shall map into that view. This would mean that the system is perfectly adjusted, the lenslets each have exactly the size of five pixels, and there is no mis-registration due to distortion or field curvature. In our prototype and in this example, where a 2k shot has been recorded, roughly 60% of the field maps correctly. We can also see on which pixels the mapping is correct, these being all the white pixels of the central view in Fig. 16. Further, 14.4% of the field registers in a bad location shifted by one pixel into view number 2, and 5.8% misses by one pixel into view number 4. The rest misses by two pixels. As a consequence, only the white marked pixels of view 3 can be considered for a disparity calculation.

If we do the same for a 4k shot, the situation changes dramatically. We can see the result in Fig. 17. Only 25.7% of the pixels in view number 3 effectively receive the light passing through the central part of the main lens. The mapping of the views of a 4k image is so bad that displaying two of them on a stereoscopic display shows very shallow 3D effects, limited to the central zone of the image and vanishing outwardly. This check works only for evaluating how well the relay registers the central view, because it is based on the difference of energy in each channel.

Fig. 16 Plenopticity check on a 2k image.
Fig. 17 Plenopticity check on a 4k image.

6 Prototype Evaluation and Conclusion

In light of all the previous analyses, we can say that we ran into some of the inherent difficulties of plenoptic design. The most important one was the wish to design a camera-agnostic add-on. We used the geometrical optics rules from Sec. 2 to build the best possible camera in terms of image resolution and 3D effect. When we introduced wave theory to analyze the system, it proved to put an extremely high constraint on the relay lens, which must be telecentric and have an aperture of 1.8 to collect the telecentric rays issued from the main lens, field lens, and lenslet array triplet. Its distortion should be limited to almost zero, its field curvature flat, its PSF smaller than one pixel, and its magnification ratio equal to 1 within the tolerance of Sec. 4.3. It is questionable whether the add-on is easier to implement than customizing the sensor directly by replacing one of its windows with a lenslet array. The conclusion is that it is very difficult, to say the least, to have the sensor and the lenslet array separated by a piece of optics.

The 4k video images played back on a 3D-capable display by just feeding two views to the system proved to be nonusable, because only one-fourth of the area of each view got proper view registration. For 2k-shot video, there is a nice 3D effect on the screen, because 60% of the pixels received good view registration. The 3D depth, on the other hand, was not big, because of all the crosstalk (explained in Sec. 4) responsible for mixing one or two views on top of a third one, diminishing the disparity to values that are correlated not to a depth within the scene but to an optical aberration of the relay lens.

Figure 18 shows two views from one frame of a real scene capture. The shooting distance was 5 m and the lens aperture 2.8, and we limited the recording to a 2k plenoptic image. Figure 18 shows views 2 and 4. Focusing is set in the middle of the scene, so that we have a positive disparity in the foreground and a negative one in the background. From the analysis in Sec. 2, we should have a total disparity of 30 pixels; due to crosstalk between the views, the total estimated disparity is 12 pixels, and it cannot be related to a depth using Eq. (9), since crosstalk is not taken into account there. The disparity map is calculated by a hierarchical block matching and is postprocessed by a recursive cross-lateral filter; it is shown at the bottom of Fig. 18, where bright values represent positive disparities and dark ones negative disparities.

Fig. 18 (Top) Views 2 and 4 of a 2k real scene shot. (Bottom) A calculated disparity map.
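The hierarchical matcher and the recursive cross-lateral filter are beyond a short listing, but a single-scale sum-of-absolute-differences block matcher conveys the principle of the disparity estimation (a simplified stand-in, not the implementation used for Fig. 18):

```python
import numpy as np

def block_match(left: np.ndarray, right: np.ndarray, y: int, x: int,
                b: int = 8, max_d: int = 15) -> int:
    """Signed horizontal disparity of the b x b block at (y, x) in `left`,
    found by minimizing the sum of absolute differences in `right`."""
    ref = left[y:y + b, x:x + b].astype(np.float64)
    best_sad, best_d = np.inf, 0
    for d in range(-max_d, max_d + 1):
        if 0 <= x + d and x + d + b <= right.shape[1]:
            cand = right[y:y + b, x + d:x + d + b].astype(np.float64)
            sad = np.abs(ref - cand).sum()
            if sad < best_sad:
                best_sad, best_d = sad, d
    return best_d   # positive in the foreground, negative in the background
```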
Plenoptic technology has a high potential for video-plus-depth recording in real time to feed multiview displays. We showed that the realization of an external plenoptic adapter requires very high optical and mechanical precision to record good-quality multiple views. Putting the microlenses directly on the camera sensor avoids some of the constraints, but local view crosstalk remains a challenge if disparity has to be correlated with depth in high-resolution captures.

Acknowledgments

This work has received research funding from the European Union 6th Framework Program under the IST OSIRIS contract.

References

1. P. Fua, "A parallel stereo algorithm that produces dense depth maps and preserves image features," Mach. Vision Appl. 6(1), 35-49 (1993).
2. H. P. Moreton and B. E. Loucks, "Optical system for single camera stereo video," U.S. Patent No. 5,835,133 (Nov. 1998).
3. Z. Perisic, "Apparatus for three dimensional photography," U.S. Patent No. 6,72,00 B2 (April 2004).
4. E. H. Adelson and J. Y. A. Wang, "Single lens stereo with a plenoptic camera," IEEE Trans. Pattern Anal. Mach. Intell. 14(2), 99-106 (1992).
5. E. H. Adelson, "Optical ranging apparatus," U.S. Patent No. 5,076,687 (31 Dec. 1991).
6. R. Ng et al., "Light field photography with a hand-held plenoptic camera," Stanford Tech. Rep. CTSR 2005-02 (2005).
7. A. Subramanian et al., "Segmentation and range sensing using a moving-aperture lens," in Proc. IEEE Int. Conf. on Computer Vision, Vol. 2 (2001).
8. V. Drazic, "Optimal depth resolution in plenoptic imaging," in Proc. IEEE Int. Conf. on Multimedia and Expo (ICME 2010), Singapore, 19-23 July 2010.

Valter Drazic is a senior scientist at Technicolor Research & Innovation France. He received an engineering degree in optics from the Ecole Nationale Supérieure de Physique de Strasbourg (ENSPS) in 1988 and a PhD from the University of Karlsruhe, Germany, in 1993. He joined Thomson/Technicolor in 1993 as an R&D engineer to work on the development of projection TV. He later joined Thomson Indianapolis to participate in the development of the first commercialized LCOS rear-projection TV. From 2007, he was in charge of the 3D Natural Content Acquisition System within the European funded OSIRIS project and developed a multiview single lens camera. His main research areas cover projection technologies, nonimaging optical systems, optical design, flat displays, HDR, color management, 3D, and disparity map calculation.

Arno Schubert started his career as a professional photographer before graduating as an optical engineer from the Cologne University of Applied Sciences in 1990. He joined Thomson/Technicolor in 1991 and was involved in different programs as research engineer and project leader, covering projection technologies, flat displays, HDR, color management, and 3D, including the European funded research project OSIRIS with the study on single lens multiview acquisition. Today he is based in Rennes, France, leading a research group on 3D scene production tools.


More information

An Iterative Image Registration Technique with an Application to Stereo Vision

An Iterative Image Registration Technique with an Application to Stereo Vision An Iterative Image Registration Technique with an Application to Stereo Vision Bruce D. Lucas Takeo Kanade Computer Science Department Carnegie-Mellon University Pittsburgh, Pennsylvania 15213 Abstract

More information

A System for Capturing High Resolution Images

A System for Capturing High Resolution Images A System for Capturing High Resolution Images G.Voyatzis, G.Angelopoulos, A.Bors and I.Pitas Department of Informatics University of Thessaloniki BOX 451, 54006 Thessaloniki GREECE e-mail: pitas@zeus.csd.auth.gr

More information

Master Anamorphic T1.9/35 mm

Master Anamorphic T1.9/35 mm T1.9/35 mm backgrounds and a smooth, cinematic look, the 35 Close Focus (2) 0.75 m / 2 6 Magnification Ratio (3) H: 1:32.3, V: 1:16.1 Weight (kg) 2.6 Weight (lbs) 5.7 Entrance Pupil (7) (mm) -179 Entrance

More information

A Prototype For Eye-Gaze Corrected

A Prototype For Eye-Gaze Corrected A Prototype For Eye-Gaze Corrected Video Chat on Graphics Hardware Maarten Dumont, Steven Maesen, Sammy Rogmans and Philippe Bekaert Introduction Traditional webcam video chat: No eye contact. No extensive

More information

MassArt Studio Foundation: Visual Language Digital Media Cookbook, Fall 2013

MassArt Studio Foundation: Visual Language Digital Media Cookbook, Fall 2013 INPUT OUTPUT 08 / IMAGE QUALITY & VIEWING In this section we will cover common image file formats you are likely to come across and examine image quality in terms of resolution and bit depth. We will cover

More information

LEICA TRI-ELMAR-M 28 35 50 mm f/4 ASPH. 1

LEICA TRI-ELMAR-M 28 35 50 mm f/4 ASPH. 1 LEICA TRI-ELMAR-M 28 35 5 mm f/4 ASPH. 1 This lens combines three of the focal lengths that are most popular among Leica M photographers. The appropriate bright-line frame appears in the viewfinder when

More information

Care and Use of the Compound Microscope

Care and Use of the Compound Microscope Revised Fall 2011 Care and Use of the Compound Microscope Objectives After completing this lab students should be able to 1. properly clean and carry a compound and dissecting microscope. 2. focus a specimen

More information

Application Report: Running µshape TM on a VF-20 Interferometer

Application Report: Running µshape TM on a VF-20 Interferometer : Running µshape TM on a VF-20 Interferometer General This report describes how a fiber interferometer from Arden Photonics Ltd was used together with the µshape TM Generic software package. The VF-20

More information

White paper. H.264 video compression standard. New possibilities within video surveillance.

White paper. H.264 video compression standard. New possibilities within video surveillance. White paper H.264 video compression standard. New possibilities within video surveillance. Table of contents 1. Introduction 3 2. Development of H.264 3 3. How video compression works 4 4. H.264 profiles

More information

Introduction to Lensometry Gregory L. Stephens, O.D., Ph.D. College of Optometry, University of Houston 2010

Introduction to Lensometry Gregory L. Stephens, O.D., Ph.D. College of Optometry, University of Houston 2010 Introduction to Lensometry Gregory L. Stephens, O.D., Ph.D. College of Optometry, University of Houston 2010 I. Introduction The focimeter, lensmeter, or Lensometer is the standard instrument used to measure

More information

Security camera resolution measurements: Horizontal TV lines versus modulation transfer function measurements

Security camera resolution measurements: Horizontal TV lines versus modulation transfer function measurements SANDIA REPORT SAND01-077 Unlimited Release Printed January 01 Security camera resolution measurements: Horizontal TV lines versus modulation transfer function measurements Gabriel C. Birch, John C. Griffin

More information

2) A convex lens is known as a diverging lens and a concave lens is known as a converging lens. Answer: FALSE Diff: 1 Var: 1 Page Ref: Sec.

2) A convex lens is known as a diverging lens and a concave lens is known as a converging lens. Answer: FALSE Diff: 1 Var: 1 Page Ref: Sec. Physics for Scientists and Engineers, 4e (Giancoli) Chapter 33 Lenses and Optical Instruments 33.1 Conceptual Questions 1) State how to draw the three rays for finding the image position due to a thin

More information

Reflection and Refraction

Reflection and Refraction Equipment Reflection and Refraction Acrylic block set, plane-concave-convex universal mirror, cork board, cork board stand, pins, flashlight, protractor, ruler, mirror worksheet, rectangular block worksheet,

More information

5.3 Cell Phone Camera

5.3 Cell Phone Camera 164 Chapter 5 5.3 Cell Phone Camera The next design example we discuss is a cell phone camera. These systems have become quite popular, to the point that it is often more difficult to purchase a cell phone

More information

WHITE PAPER. Methods for Measuring Flat Panel Display Defects and Mura as Correlated to Human Visual Perception

WHITE PAPER. Methods for Measuring Flat Panel Display Defects and Mura as Correlated to Human Visual Perception Methods for Measuring Flat Panel Display Defects and Mura as Correlated to Human Visual Perception Methods for Measuring Flat Panel Display Defects and Mura as Correlated to Human Visual Perception Abstract

More information

Digital Image Requirements for New Online US Visa Application

Digital Image Requirements for New Online US Visa Application Digital Image Requirements for New Online US Visa Application As part of the electronic submission of your DS-160 application, you will be asked to provide an electronic copy of your photo. The photo must

More information

www.photonics.com May 2013 Color Sense Trilinear Cameras Bring Speed, Quality

www.photonics.com May 2013 Color Sense Trilinear Cameras Bring Speed, Quality www.photonics.com May 2013 Color Sense Trilinear Cameras Bring Speed, Quality Tech Feature Trilinear Cameras Offer High-Speed Color Imaging Solutions With high color quality and speed and low cost trilinear

More information

Experiment 3 Lenses and Images

Experiment 3 Lenses and Images Experiment 3 Lenses and Images Who shall teach thee, unless it be thine own eyes? Euripides (480?-406? BC) OBJECTIVES To examine the nature and location of images formed by es. THEORY Lenses are frequently

More information

Shutter Speed in Digital Photography

Shutter Speed in Digital Photography Shutter Speed in Digital Photography [Notes from Alan Aldrich as presented to the Hawkesbury Camera Club in April 2014] Light is a form of energy and as such behaves as formulated in the general power

More information

The Limits of Human Vision

The Limits of Human Vision The Limits of Human Vision Michael F. Deering Sun Microsystems ABSTRACT A model of the perception s of the human visual system is presented, resulting in an estimate of approximately 15 million variable

More information

Wii Remote Calibration Using the Sensor Bar

Wii Remote Calibration Using the Sensor Bar Wii Remote Calibration Using the Sensor Bar Alparslan Yildiz Abdullah Akay Yusuf Sinan Akgul GIT Vision Lab - http://vision.gyte.edu.tr Gebze Institute of Technology Kocaeli, Turkey {yildiz, akay, akgul}@bilmuh.gyte.edu.tr

More information

Build Panoramas on Android Phones

Build Panoramas on Android Phones Build Panoramas on Android Phones Tao Chu, Bowen Meng, Zixuan Wang Stanford University, Stanford CA Abstract The purpose of this work is to implement panorama stitching from a sequence of photos taken

More information

3D Scanner using Line Laser. 1. Introduction. 2. Theory

3D Scanner using Line Laser. 1. Introduction. 2. Theory . Introduction 3D Scanner using Line Laser Di Lu Electrical, Computer, and Systems Engineering Rensselaer Polytechnic Institute The goal of 3D reconstruction is to recover the 3D properties of a geometric

More information

Spatial location in 360 of reference points over an object by using stereo vision

Spatial location in 360 of reference points over an object by using stereo vision EDUCATION Revista Mexicana de Física E 59 (2013) 23 27 JANUARY JUNE 2013 Spatial location in 360 of reference points over an object by using stereo vision V. H. Flores a, A. Martínez a, J. A. Rayas a,

More information

Synthetic Sensing: Proximity / Distance Sensors

Synthetic Sensing: Proximity / Distance Sensors Synthetic Sensing: Proximity / Distance Sensors MediaRobotics Lab, February 2010 Proximity detection is dependent on the object of interest. One size does not fit all For non-contact distance measurement,

More information

A technical overview of the Fuel3D system.

A technical overview of the Fuel3D system. A technical overview of the Fuel3D system. Contents Introduction 3 How does Fuel3D actually work? 4 Photometric imaging for high-resolution surface detail 4 Optical localization to track movement during

More information

ROBUST COLOR JOINT MULTI-FRAME DEMOSAICING AND SUPER- RESOLUTION ALGORITHM

ROBUST COLOR JOINT MULTI-FRAME DEMOSAICING AND SUPER- RESOLUTION ALGORITHM ROBUST COLOR JOINT MULTI-FRAME DEMOSAICING AND SUPER- RESOLUTION ALGORITHM Theodor Heinze Hasso-Plattner-Institute for Software Systems Engineering Prof.-Dr.-Helmert-Str. 2-3, 14482 Potsdam, Germany theodor.heinze@hpi.uni-potsdam.de

More information

Theory and Methods of Lightfield Photography SIGGRAPH 2009

Theory and Methods of Lightfield Photography SIGGRAPH 2009 Theory and Methods of Lightfield Photography SIGGRAPH 2009 Todor Georgiev Adobe Systems tgeorgie@adobe.com Andrew Lumsdaine Indiana University lums@cs.indiana.edu 1 Web Page http://www.tgeorgiev.net/asia2009/

More information

SPINDLE ERROR MOVEMENTS MEASUREMENT ALGORITHM AND A NEW METHOD OF RESULTS ANALYSIS 1. INTRODUCTION

SPINDLE ERROR MOVEMENTS MEASUREMENT ALGORITHM AND A NEW METHOD OF RESULTS ANALYSIS 1. INTRODUCTION Journal of Machine Engineering, Vol. 15, No.1, 2015 machine tool accuracy, metrology, spindle error motions Krzysztof JEMIELNIAK 1* Jaroslaw CHRZANOWSKI 1 SPINDLE ERROR MOVEMENTS MEASUREMENT ALGORITHM

More information

High contrast ratio and compact-sized prism for DLP projection system

High contrast ratio and compact-sized prism for DLP projection system High contrast ratio and compact-sized prism for DLP projection system Yung-Chih Huang and Jui-Wen Pan 2,3,4,* Institute of Lighting and Energy Photonics, National Chiao Tung University, Tainan City 750,

More information

Modelling, Extraction and Description of Intrinsic Cues of High Resolution Satellite Images: Independent Component Analysis based approaches

Modelling, Extraction and Description of Intrinsic Cues of High Resolution Satellite Images: Independent Component Analysis based approaches Modelling, Extraction and Description of Intrinsic Cues of High Resolution Satellite Images: Independent Component Analysis based approaches PhD Thesis by Payam Birjandi Director: Prof. Mihai Datcu Problematic

More information

Data Sheet. definiti 3D Stereo Theaters + definiti 3D Stereo Projection for Full Dome. S7a1801

Data Sheet. definiti 3D Stereo Theaters + definiti 3D Stereo Projection for Full Dome. S7a1801 S7a1801 OVERVIEW In definiti 3D theaters, the audience wears special lightweight glasses to see the world projected onto the giant dome screen with real depth perception called 3D stereo. The effect allows

More information

Choosing a digital camera for your microscope John C. Russ, Materials Science and Engineering Dept., North Carolina State Univ.

Choosing a digital camera for your microscope John C. Russ, Materials Science and Engineering Dept., North Carolina State Univ. Choosing a digital camera for your microscope John C. Russ, Materials Science and Engineering Dept., North Carolina State Univ., Raleigh, NC One vital step is to choose a transfer lens matched to your

More information

How an electronic shutter works in a CMOS camera. First, let s review how shutters work in film cameras.

How an electronic shutter works in a CMOS camera. First, let s review how shutters work in film cameras. How an electronic shutter works in a CMOS camera I have been asked many times how an electronic shutter works in a CMOS camera and how it affects the camera s performance. Here s a description of the way

More information

Imaging techniques with refractive beam shaping optics

Imaging techniques with refractive beam shaping optics Imaging techniques with refractive beam shaping optics Alexander Laskin, Vadim Laskin AdlOptica GmbH, Rudower Chaussee 29, 12489 Berlin, Germany ABSTRACT Applying of the refractive beam shapers in real

More information

Microlenses immersed in nematic liquid crystal with electrically. controllable focal length

Microlenses immersed in nematic liquid crystal with electrically. controllable focal length Microlenses immersed in nematic liquid crystal with electrically controllable focal length L.G.Commander, S.E. Day, C.H. Chia and D.R.Selviah Dept of Electronic and Electrical Engineering, University College

More information

What is Visualization? Information Visualization An Overview. Information Visualization. Definitions

What is Visualization? Information Visualization An Overview. Information Visualization. Definitions What is Visualization? Information Visualization An Overview Jonathan I. Maletic, Ph.D. Computer Science Kent State University Visualize/Visualization: To form a mental image or vision of [some

More information

A Game of Numbers (Understanding Directivity Specifications)

A Game of Numbers (Understanding Directivity Specifications) A Game of Numbers (Understanding Directivity Specifications) José (Joe) Brusi, Brusi Acoustical Consulting Loudspeaker directivity is expressed in many different ways on specification sheets and marketing

More information

The Olympus stereology system. The Computer Assisted Stereological Toolbox

The Olympus stereology system. The Computer Assisted Stereological Toolbox The Olympus stereology system The Computer Assisted Stereological Toolbox CAST is a Computer Assisted Stereological Toolbox for PCs running Microsoft Windows TM. CAST is an interactive, user-friendly,

More information

WAVELENGTH OF LIGHT - DIFFRACTION GRATING

WAVELENGTH OF LIGHT - DIFFRACTION GRATING PURPOSE In this experiment we will use the diffraction grating and the spectrometer to measure wavelengths in the mercury spectrum. THEORY A diffraction grating is essentially a series of parallel equidistant

More information

REPRESENTATION, CODING AND INTERACTIVE RENDERING OF HIGH- RESOLUTION PANORAMIC IMAGES AND VIDEO USING MPEG-4

REPRESENTATION, CODING AND INTERACTIVE RENDERING OF HIGH- RESOLUTION PANORAMIC IMAGES AND VIDEO USING MPEG-4 REPRESENTATION, CODING AND INTERACTIVE RENDERING OF HIGH- RESOLUTION PANORAMIC IMAGES AND VIDEO USING MPEG-4 S. Heymann, A. Smolic, K. Mueller, Y. Guo, J. Rurainsky, P. Eisert, T. Wiegand Fraunhofer Institute

More information

Integrated sensors for robotic laser welding

Integrated sensors for robotic laser welding Proceedings of the Third International WLT-Conference on Lasers in Manufacturing 2005,Munich, June 2005 Integrated sensors for robotic laser welding D. Iakovou *, R.G.K.M Aarts, J. Meijer University of

More information

Kapitel 12. 3D Television Based on a Stereoscopic View Synthesis Approach

Kapitel 12. 3D Television Based on a Stereoscopic View Synthesis Approach Kapitel 12 3D Television Based on a Stereoscopic View Synthesis Approach DIBR (Depth-Image-Based Rendering) approach 3D content generation DIBR from non-video-rate depth stream Autostereoscopic displays

More information

How To Fix Out Of Focus And Blur Images With A Dynamic Template Matching Algorithm

How To Fix Out Of Focus And Blur Images With A Dynamic Template Matching Algorithm IJSTE - International Journal of Science Technology & Engineering Volume 1 Issue 10 April 2015 ISSN (online): 2349-784X Image Estimation Algorithm for Out of Focus and Blur Images to Retrieve the Barcode

More information

Optical Modeling of the RIT-Yale Tip-Tilt Speckle Imager Plus

Optical Modeling of the RIT-Yale Tip-Tilt Speckle Imager Plus SIMG-503 Senior Research Optical Modeling of the RIT-Yale Tip-Tilt Speckle Imager Plus Thesis Kevin R. Beaulieu Advisor: Dr. Elliott Horch Chester F. Carlson Center for Imaging Science Rochester Institute

More information

Face detection is a process of localizing and extracting the face region from the

Face detection is a process of localizing and extracting the face region from the Chapter 4 FACE NORMALIZATION 4.1 INTRODUCTION Face detection is a process of localizing and extracting the face region from the background. The detected face varies in rotation, brightness, size, etc.

More information

9/16 Optics 1 /11 GEOMETRIC OPTICS

9/16 Optics 1 /11 GEOMETRIC OPTICS 9/6 Optics / GEOMETRIC OPTICS PURPOSE: To review the basics of geometric optics and to observe the function of some simple and compound optical devices. APPARATUS: Optical bench, lenses, mirror, target

More information

Diffraction of a Circular Aperture

Diffraction of a Circular Aperture Diffraction of a Circular Aperture Diffraction can be understood by considering the wave nature of light. Huygen's principle, illustrated in the image below, states that each point on a propagating wavefront

More information

Fraunhofer Diffraction

Fraunhofer Diffraction Physics 334 Spring 1 Purpose Fraunhofer Diffraction The experiment will test the theory of Fraunhofer diffraction at a single slit by comparing a careful measurement of the angular dependence of intensity

More information

6 Space Perception and Binocular Vision

6 Space Perception and Binocular Vision Space Perception and Binocular Vision Space Perception and Binocular Vision space perception monocular cues to 3D space binocular vision and stereopsis combining depth cues monocular/pictorial cues cues

More information

Integration of a passive micro-mechanical infrared sensor package with a commercial smartphone camera system

Integration of a passive micro-mechanical infrared sensor package with a commercial smartphone camera system 1 Integration of a passive micro-mechanical infrared sensor package with a commercial smartphone camera system Nathan Eigenfeld Abstract This report presents an integration plan for a passive micro-mechanical

More information

Arrayoptik für ultraflache Datensichtbrillen

Arrayoptik für ultraflache Datensichtbrillen Arrayoptik für ultraflache Datensichtbrillen Peter Schreiber, Fraunhofer Institut für Angewandte Optik und Feinmechanik IOF, Jena 1. Introduction 2. Conventional near-to-eye (N2E) projection optics - first

More information

The Image Deblurring Problem

The Image Deblurring Problem page 1 Chapter 1 The Image Deblurring Problem You cannot depend on your eyes when your imagination is out of focus. Mark Twain When we use a camera, we want the recorded image to be a faithful representation

More information

P R E A M B L E. Facilitated workshop problems for class discussion (1.5 hours)

P R E A M B L E. Facilitated workshop problems for class discussion (1.5 hours) INSURANCE SCAM OPTICS - LABORATORY INVESTIGATION P R E A M B L E The original form of the problem is an Experimental Group Research Project, undertaken by students organised into small groups working as

More information

Camera Resolution Explained

Camera Resolution Explained Camera Resolution Explained FEBRUARY 17, 2015 BY NASIM MANSUROV Although the megapixel race has been going on since digital cameras had been invented, the last few years in particular have seen a huge

More information

Digital Photography Composition. Kent Messamore 9/8/2013

Digital Photography Composition. Kent Messamore 9/8/2013 Digital Photography Composition Kent Messamore 9/8/2013 Photography Equipment versus Art Last week we focused on our Cameras Hopefully we have mastered the buttons and dials by now If not, it will come

More information