Stereo 3D Monitoring & Correction


Advanced Measurement Technology

Stereo 3D Monitoring & Correction

Roger Fawcett, Managing Director, OmniTek

It has long been known that it is possible to give the impression of a 3D scene on a 2D screen by mimicking binocular vision. The 3D effect principally comes from the brain interpreting horizontal disparities between the positions of items of interest in the images seen by the right and left eyes as indicating that the items are at different distances from the viewer. However, while left and right images are easy to capture, producing images that give a good 3D effect for the cinema is not an easy matter, and it is especially difficult for TV. Projecting 3D images on a 2D screen inevitably involves compromises, but care is needed to ensure that no unnatural effects are generated. Getting it wrong does not just mean a headache for the stereographer but potentially for the viewer too!

Stereographers have learnt that keeping all the action reasonably close to the screen plane gives reasonably reliable results and, to this end, broadcasters specify a limited depth budget. It is also understood that it is important to ensure colour matching between the left and right images. Such rules of thumb represent a step in the right direction, but they still leave a lot of room for error. The achievement of really good results requires an understanding of the 3D geometry perceived by the viewer. For this reason, the 3D Toolset for OmniTek's OTM/OTR family of waveform analysers doesn't only give the disparity and colour displays needed to ensure that Stereo 3D (S3D) video meets broadcasters' requirements. It also automatically calculates the XYZ coordinates of items within the images as perceived by the viewer and presents this information in the form of Depth Plans showing the scene as viewed both from above and from one side. OmniTek also offer a real-time 3D geometry and colour processor known as the 3D Wizard to help correct issues with the images.
This white paper looks at the basics of 3D stereoscopy and disparity monitoring, then goes on to look at how OmniTek's OTM and OTR systems extract viewer's-world coordinates and generate depth plans. It also looks at different aspects of 3D image production such as camera alignment and zoom, at how these affect the quality of the 3D effect that is achieved, and at what can be done to improve the results. It then describes a range of new features offered in V2.4 of the OTM/OTR software to enhance the analysis of Stereo 3D video. To finish, there is a brief look at issues of colour matching and a glossary of terms used around Stereo 3D video.

24 Jan. 12 WHITE PAPER

Contents

1. An Introduction to Stereoscopy
2. Monitoring Binocular Disparity
   Seeing the Disparity
   Limitations of Disparity-based Quality Control
3. Understanding how S3D video is Perceived
4. Issues
   Parallel Cameras vs Toe-in
   Horizontal Image Translation (HIT)
   The Effect of Viewing Screen Size
   Viewing Position and Camera Zoom
   Interaxial Separation of the Cameras
   Edge Effects and Floating Windows
   Adding Graphics
   Mirror Rigs
5. Colour Issues
6. New Features in V2.4
   i. Real World Projection
   ii. Rig Alignment Support
   iii. Chroma Sabres
   iv. Support for Side-by-Side, Top+Bottom and Mirror Formats
   v. Additional 3D Status Information
7. Conclusion
8. Glossary
   General Terms
   Capture Issues
   Monitoring
   Post Production
   Viewing 3D

Page 2 of 32

1. An Introduction to Stereoscopy

The following cues are used by the brain to construct a 3D image:

A. Visual Cues, e.g. object image size compared to expected size, perspective, occlusions etc.
B. Eyeball Focus: the focus of the eyes on sharp detail is directly correlated with depth
C. Binocular Disparity, i.e. the horizontal displacement of objects between the left and right eye images
D. Observer Motion Disparity, i.e. the relative motion of objects in the scene as the viewer's head moves

True 3D requires all these cues to be reproduced, which would be very difficult to achieve: it would essentially have to involve some form of holographic process. Stereoscopy, or Stereoscopic 3D as it is also known, is a way of producing a 3D effect using 2D image technology which can be highly realistic but doesn't replicate the original scene. Where the real world offers all four types of cue, Stereoscopic 3D (S3D) video only offers types A and C, while conventional 2D video only offers type A.

When viewing S3D (and 2D) video, the eyeball is focussed on the screen rather than on the item of interest, so the brain will generally receive different information about the depth at which the eye is focussed and the depth indicated by the binocular disparity. The only time this is not the case is when the item of interest is in the screen plane. Keeping objects close to the screen plane helps to minimize the difference between these cues. Similarly, moving your head doesn't have the expected result: instead the 3D scene appears to shear as you move your head and it seems like you are dragging the content around with you. Again, keeping the content close to the screen plane helps to minimize this effect, but ultimately there is nothing we can do other than keep still in the cinema.

This paper focuses on understanding binocular disparity and how it is possible both to monitor the S3D effect and to correct issues in the result.
It will be seen that the ways in which the binocular disparity generated by S3D video deviates from reality can be both good and bad, and that it is important to be able to monitor these effects and to understand how the different results come about.

2. Monitoring Binocular Disparity

The geometry that is the basis of S3D video is shown in Figure 1.

Figure 1: The basic relationships (i = inter-ocular distance of the eyes; S = disparity on screen; W = screen width; d = viewer distance; Z = object distance from screen; L and R mark the viewer's left and right eyes)

In the real world, the star shown on the right of the diagram appears to the left eye to lie along the line joining the star to the left eye, and appears to the right eye to lie along the line joining the star to the right eye. It therefore lies at the point of intersection of these lines. What happens in S3D video is that the left-eye image puts the star at the point marked by the cyan star on the viewing screen, while the right-eye image puts it at the point marked by the red star. The brain then deduces that the star must actually be at the intersection of the lines joining the left eye to the cyan star and the right eye to the red star.

The horizontal distance between the cyan star and the red star on the viewing screen is known as the horizontal disparity between the position of this star in the left-eye image and that in the right-eye image (represented here by S).

The horizontal disparity S has some useful properties. Firstly, for a viewer at a certain distance from the screen (d) and with eyes a distance i apart (the so-called inter-ocular distance), S is the same for all objects the same distance Z behind the screen. Secondly, S is positive for objects behind the viewing screen (left image to the left of the right image), negative for objects such as the circle which are in front of the viewing screen, and gets bigger the further from the screen the object is.

Note: Horizontal disparity is sometimes referred to as parallax, from the way that the position of an object appears to move depending on which eye is used to view it.

For objects at a certain distance from the viewer, the disparity is zero.
Such objects are said to be at the convergence point and appear to be located on the screen itself. The disparity S therefore gives a good guide to how far an object will appear in front of or behind the viewing screen.

If the disparity S is equal to the inter-ocular distance i, the lines joining the eyes to the representations of the object on the viewing screen are parallel. Parallel lines only meet at infinity. An object for which the disparity is equal to the inter-ocular distance will therefore appear to be at infinity, while any feature for which the disparity is greater than this is totally unrealistic and unnatural for the brain to resolve into a single item.
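The relationship between on-screen disparity and perceived depth described above can be sketched as a small function. This is an illustrative sketch based on the geometry of Figure 1, not OmniTek code; the function name and the 0.065m inter-ocular value used in the checks are our own assumptions.

```python
def perceived_depth(S, d, i):
    """Perceived depth Z behind the screen for an on-screen disparity S,
    viewer distance d and inter-ocular distance i (all in metres).

    Derived by similar triangles: S / i = Z / (Z + d).
    Positive Z is behind the screen, negative Z is in front,
    and S == i places the object at infinity.
    """
    if S >= i:
        return float("inf")  # at infinity (unnatural to resolve if S > i)
    return S * d / (i - S)

# Zero disparity puts the object in the screen plane ...
assert perceived_depth(0.0, d=2.0, i=0.065) == 0.0
# ... while negative disparity (left image right of the right image)
# brings the object out in front of the screen
assert perceived_depth(-0.02, d=2.0, i=0.065) < 0
```

Note how the denominator (i - S) makes depth grow non-linearly with disparity: perceived depth diverges as S approaches i, matching the infinity condition described above.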

Seeing the Disparity

The disparities between left- and right-eye images of the same scene are typically only a small percentage of the picture width and can be readily seen in a picture difference image such as that shown in Figure 2 below, which was produced using OmniTek's 3D Toolset. In this image, the picture appears smooth around the convergence point but the differences are more marked for objects that will be perceived nearer to the viewer or further away.

Figure 2: Difference image
Figure 3: 3D Depth Map

The OmniTek 3D Tools can automatically determine the disparities in live video sequences and display these as a 3D Depth Map such as that shown in Figure 3, with the different points coloured according to how far the items represented will appear to be from the viewer. (The nearest items are coloured red; the furthest items are coloured violet; a colour ramp is used between.)

The Difference image and the 3D Depth Map are also very useful for spotting where an S3D video has dropped into 2D, because the Difference image will then become completely smooth while the 3D Depth Map will become covered in dots of the same colour, representing everything with zero disparity and apparently in the plane of the screen.

Disparity is measured using several different units. It can be measured in pixels (determined directly from the video source) or in metres on the viewing screen. However, 3D disparity guidelines more commonly define a Depth Budget in terms of a percentage of the screen width. For example, this is an extract from the SKY 3D guidelines:

Main subject point should nominally be within an overall depth budget of 3% within the limits below. Positive disparity or image separation at distant points (into the screen) should not exceed +2% for the majority of shots. Negative disparity or image separation at near points (out of the screen) should be used with care and should not exceed -1% for the majority of shots.
The SKY guidelines also say that: These are guidelines that aim to deliver managed and comfortable stereoscopic viewing. As such these limits can be exceeded for specific editorial needs (such as Prime Vision, Graphic Content or Short Term visual impact), managed appropriately and in line with 3D production practice. Such instances should be constrained to 4% Positive (Into Screen) and 2.5% Negative (Out of Screen). And that: To enable the 3D program to retain the highest quality throughout, a minimum of 75% must be native 3D footage.
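A depth-budget check of the kind these guidelines describe is straightforward to express in code. The sketch below converts a pixel disparity into the percentage-of-width unit the guidelines use and flags out-of-budget values; the function name is ours, and the default limits are the SKY figures quoted above (+2% into the screen, -1% out of the screen).

```python
def check_depth_budget(disparity_px, image_width_px,
                       max_positive_pct=2.0, max_negative_pct=-1.0):
    """Return (percentage, in_budget) for one measured disparity.

    Disparity is expressed as a percentage of the picture width, the
    unit used by broadcaster depth budgets. The limits are parameters
    so the looser editorial limits (+4% / -2.5%) can be checked with
    the same function.
    """
    pct = 100.0 * disparity_px / image_width_px
    return pct, max_negative_pct <= pct <= max_positive_pct

# 19 pixels of positive disparity on a 1920-pixel-wide image is about 1%
pct, ok = check_depth_budget(19, 1920)
assert ok and 0.9 < pct < 1.1
# 58 pixels (about 3%) exceeds the +2% limit
pct, ok = check_depth_budget(58, 1920)
assert not ok
```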

The degree to which any scene meets the specified depth budget is typically shown through a 3D Depth Histogram such as that shown in Figure 4 below. This plots the number of pixels at a particular disparity (y-axis) against the disparity amount (x-axis), on a graticule that not only shows these disparities but also marks the extent of the depth budget that has been set, and hence how many pixels are out of the guideline range. Moreover, the disparities can be displayed, and the depth budget set, in pixels, as a percentage of the screen width, or as a physical distance, as desired.

Figure 4: Example Depth Histogram
Figure 5: Out-of-Budget 3D Depth Map

The OmniTek 3D Toolset also offers another way of seeing this in the special Error form of the 3D Depth Map shown in Figure 5, which just shows the objects with more than the maximum positive disparity budget in violet and the objects with less than the minimum negative disparity in red.

Limitations of Disparity-based Quality Control

The analysis tools discussed in the previous section enable QC monitoring of live and post-production material to ensure conformance to most of the existing 3D guidelines. However, S3D is a new art form and while these guides are a step in the right direction, they ignore a number of issues which can affect the 3D viewing experience. Among the issues they don't consider are:

- Is the depth perception close to the reality of the real world? If not, is it wrong for good reasons? The guidelines give no consideration to the actual depth perception of the viewer.
- What happens when a zoom lens is used?
- What is the effect of altering the camera interaxial distance?
- What is the true effect of horizontal image translation (HIT) and camera toe-in?
- Are there geometric distortions between the left and right image that could cause eye strain when trying to view the material?
- What is the effect of viewing this material on different screen sizes?
- What is the effect of viewer distance from the screen?

Being able to understand, analyse and correct for these issues permits the production of higher-quality 3D material, producing a pleasing 3D effect without giving the viewer a headache. For this, further tools are required.

3. Understanding how S3D video is Perceived

The key to understanding how an S3D video will be perceived is accurate knowledge of the 3D geometry represented by the horizontal disparities between the left-eye and right-eye images. The basic geometry is illustrated in Figures 6 and 7. Using some relatively straightforward geometry, we can calculate the coordinates of the star in the viewer's world.

Figure 6: The relation between screen coordinates and the coordinates in the viewer's world
Figure 7: The basic geometry in the X-Z plane

The perceived depth Z can be determined from Figure 7. By similar triangles,

    S / i = Z / (Z + d)

which gives:

    Z = S·d / (i − S)

Once Z has been calculated, Figure 6 can be used to calculate X and Y from the screen coordinates x and y, again by using similar triangles:

    X = x·(d + Z) / d   and   Y = y·(d + Z) / d

Supplied with details of the size of the screen, the viewer's distance from the screen, and their inter-ocular distance (all entered through the OTM/OTR Configuration window), the OmniTek 3D Toolset uses the above equations to determine the X, Y and Z coordinates in the viewer's world for each pixel in the image for which the disparity can be determined. This information is then used to generate a set of 3D Depth Plan displays: one showing the view of the scene from above (using the calculated X and Z values); another showing the scene as viewed from one side (using the calculated Y and Z values); and, in V2.4 and later, a third showing a Real World Projection (see page 24) which can be moved around to show the scene from all angles. The various depth displays that are created are illustrated in Figure 8.

Figure 8: Set of views from the OTR 1001 showing (top left) 3D Depth Plan ("helicopter view"); (top right) 3D Depth Map; (bottom left) Real World Projection; (bottom right) 3D Depth Map Histogram.

Shown alongside the Depth Plans are white lines marking the screen plane, while purple and red lines mark the maximum and minimum limits of the depth budget. The Depth Map Histogram has markers showing the extent of the depth budget and the infinity point. Another feature of these views is that clicking the cursor on any of the displays identifies the same point on all the displays. In addition, the perceived XYZ world coordinates of that point can be automatically displayed. In the example shown above, the perceived depth of the furthest part of the flower is actually more than 1m behind the viewing screen. This is clearly not set to match the reality of the original scene, but may provide a better compromise for the overall viewing experience.
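Putting the two similar-triangle steps together, the full screen-to-world conversion can be sketched as follows. This is an illustrative sketch of the equations in Figures 6 and 7, not the Toolset's implementation; the function name and argument conventions (screen positions in metres with the origin at screen centre) are our assumptions, and the averaging of the two x positions follows the aside about the two screen x coordinates later in this section.

```python
def perceived_xyz(x_left, x_right, y, d, i):
    """Perceived world coordinates of a feature, given its screen-plane
    positions (metres, origin at screen centre) in the left and right
    images, viewer distance d and inter-ocular distance i.

    Returns None for features at or beyond infinity (S >= i).
    """
    S = x_right - x_left             # horizontal disparity on screen
    x = 0.5 * (x_left + x_right)     # screen x, averaged over both images
    if S >= i:
        return None                  # at or beyond infinity
    Z = S * d / (i - S)              # depth behind the screen
    scale = (d + Z) / d              # similar triangles, screen to world
    return x * scale, y * scale, Z

X, Y, Z = perceived_xyz(-0.01, 0.01, 0.10, d=2.4, i=0.065)
assert Z > 0          # positive disparity: behind the screen
assert Y > 0.10       # features behind the screen scale up from screen size
```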

The depth calculations used to create the Depth Plans are also put to use in the version of the 3D Depth Map Histogram shown in Figure 9, which has perceived Z depth as its horizontal axis.

Figure 9: 3D Depth Map Histogram plotted against perceived depth

This type of geometry extraction provides 3D stereographers and convergence pullers with the ability to make informed decisions about the viewer's experience.

Aside: The observant reader will have noticed that there are in fact two screen x coordinates: one for the left image and one for the right image. The OmniTek 3D Toolset uses the average of these two. By doing this, the viewer's world coordinate X is centred around a point midway between the eyes which, assuming the viewer is sitting centrally to the viewing screen, will also be the centre of the screen.

4. Issues

Precise 3D stereoscopy would exactly replicate the light-ray geometry the scene would have provided if the viewer replaced the camera. For precise 3D stereoscopy to be achieved, the following would have to happen:

1. The images would have to be captured by a pair of parallel cameras, positioned the viewer's inter-ocular distance apart.
2. The resulting images would have to be given a horizontal offset equivalent to the viewer's inter-ocular distance when the images are displayed on the viewing screen.
3. The ratio of the focal length of the camera to the image sensor width must match the ratio of viewer distance from the screen to viewing screen width.
4. The viewer should be sitting on the optical axis (typically a perpendicular line from the middle of the viewing screen).

In practice, however, stereoscopic productions don't generally meet the above requirements. This may be for production reasons, artistic reasons, or simply in order to achieve a working compromise between the available 3D cues. For example, the director may call for the cameras to be toed-in in order to frame a scene correctly, or for zoom to be used at a particular point. Indeed, it is often only possible to achieve pleasing, easy-to-watch material by deviating from the above requirements.

However, this is an area in which misconceptions abound as to when and how to break the above rules. To get the required result in a way that does not either induce headaches or spoil the 3D effect requires an understanding both of the theory behind the above rules and of what you are actually doing when you deviate from these rules. In what follows we try to give an intuitive explanation of the issues, together with the key results. Readers looking for an in-depth understanding could refer to articles such as the paper by Andrew Woods and others on Image Distortions in Stereoscopic Video Systems from the Proceedings of the SPIE (available online).

As you will see, displays provided by the OmniTek 3D Toolset, in particular the new 3D Meters display added in V2.4 (see page 25), allow both the effects of the rig setup and subsequent image manipulation to be monitored, while our 3D Wizard may be used for rig corrections and live 3D geometry adjustments.

Parallel Cameras vs Toe-in

There is much debate as to whether cameras should be used parallel or toed-in, as a certain amount of downstream processing is needed whichever strategy is used. Shooting parallel then applying the correctly calculated Horizontal Image Translation (HIT) produces the exact 3D effect. If you want to convergence-pull to move items closer to or further away from the screen plane, this can be done by altering the HIT, though the geometry will then no longer be exact.
However, some producers of 3D prefer to convergence-pull using toe-in. But while toe-in certainly can be used to alter the convergence point, it produces image distortions that themselves require correction to avoid giving the viewer a headache. The basic problem with toe-in is that the image planes for the left- and right-eye images are not parallel, so ideally the images should be displayed on a pair of non-parallel screens. In order to display the images on a single flat screen, both images have to be mapped onto this screen. The effect of this mapping is to produce the so-called keystone effect, which is not only a distortion of the geometry but also introduces totally unnatural vertical disparities at both the top and the bottom of the objects in the scene, the extent of which varies across the image.

For S3D to be correct, the same point in the left and right image should always be vertically aligned, because that matches what our brains are used to in the real world. The unnatural positioning of the left and right images produced by the toe-in keystone distortion gives our brain a stereo-matching problem that it wouldn't encounter in the real world. This is likely to cause a headache, particularly if the toe-in is at all significant.

The way to see how these issues come about is to consider what happens when a toe-in camera rig is pointed at a rectangular grid. Because of the toe-in, the left and right images will produce opposite keystone distortions, as illustrated at the top of Figure 10.

Figure 10: Vertical disparity and depth plane curvature produced by S3D video shot using toe-in: a) original scene content; b) capture via toe-in rig (left and right images); c) projection onto the viewing screen, showing the vertical disparity in each image; d) overhead view of the 3D perception, showing depth plane curvature relative to the screen plane.

Another issue with toe-in can be understood by looking at the vertical lines in Figure 10. Notice how the horizontal disparity varies from left to right. In the centre of the image, the vertical lines are aligned, making the lines appear in the screen plane. However, towards the left and right sides, the disparity increases. The result of this is that parts of the scene that should be at the same depth curve outwards towards the edges of the image. This unnatural effect is known as Depth Plane Curvature.

These two effects mean that toe-in images can cause eyestrain, particularly at larger toe-in angles. The perceived depths calculated by the OmniTek 3D Toolset show very clearly where the S3D video being analysed suffers from depth plane curvature. This is shown by the images in Figure 11.

Figure 11: Views from the OTR 1001 showing the effect of camera toe angle on a football stadium scene. The 3D Depth Plan (top left) shows the depth plane curvature caused by slight toe-out in the rig. The stadium, visible as the line of magenta dots in the 3D Depth Map (bottom right), appears to curve around the viewer, whereas in reality it should be straight.

It is, however, relatively straightforward to correct both for the keystoning effect and for the resulting depth plane curvature using a downstream correction box such as OmniTek's 3D Wizard, as is illustrated in Figure 12. In addition to perspective geometry correction, the 3D Wizard can automatically apply a zoom to the resulting images so that no black regions are left at the edges by the geometry correction.

Figure 12: The 3D Wizard applies perspective geometry correction to the images, then can optionally enlarge and crop the resulting images to fill the screen.

The images shown in Figure 13 show a 3D Depth Plan before and after correction using the 3D Wizard.

Figure 13: In the image on the right, the 3D Wizard has applied a 0.22 degree toe-in correction. This corrects the depth plane curvature.

Horizontal Image Translation (HIT)

The raw images captured by the cameras of a parallel 3D rig are not immediately suitable as S3D. First a horizontal offset needs to be introduced between the left-eye and right-eye images. The application of this offset is referred to as Horizontal Image Translation or HIT. It is important to note that the amount of HIT that needs to be applied to create precise S3D geometry varies with the viewing screen dimensions.

The need for HIT can be appreciated by considering a distant object at infinity. The light from such an object will hit the image sensors of the two cameras at the same point, i.e. there will be no horizontal disparity. These rays are represented by the dashed lines in Figure 14, which hit the left and right image sensors at the same position.

Figure 14: For regular cameras in a parallel rig (image sensor behind a lens of focal length f), a point at infinity will map to the same point on each image sensor, giving zero disparity.

In order to make the object appear at infinity when the images are viewed, there needs to be a horizontal offset between the images on the viewing screen equal to the viewer's inter-ocular distance. If this condition is observed, objects that appeared to be at infinity in the real world will also appear to be at infinity in the S3D scene. In fact it can be shown that applying this offset to the whole image will cause all objects to appear at the correct depth.
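Since precise geometry requires the on-screen offset to equal the viewer's inter-ocular distance, the HIT needed for a parallel rig can be worked out directly, and it depends on the target screen width just as the text notes. This is an illustrative sketch, not 3D Wizard functionality; the function name and the 0.065m default inter-ocular distance are our assumptions.

```python
def hit_pixels(image_width_px, screen_width_m, interocular_m=0.065):
    """Horizontal Image Translation, in pixels, that places real-world
    infinity at an on-screen separation of one inter-ocular distance.

    The same source image needs a different HIT for each display size,
    which is why the correct HIT cannot be baked in at capture time.
    """
    return image_width_px * interocular_m / screen_width_m

# On a 2.4m-wide screen, a 1920-pixel-wide image needs a 52-pixel offset
assert round(hit_pixels(1920, 2.4)) == 52
# On a smaller 1.2m screen, the same image needs double the offset
assert round(hit_pixels(1920, 1.2)) == 104
```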

However, this is the rule that is most often deviated from, for a variety of reasons. In particular, whilst attempting to recreate the precise geometry may seem a laudable aim, it is important to remember that our eyeballs focus on the viewing screen regardless of the object depth. This is very different from the real world, where there is a direct correlation between the depth cue from eyeball focus and that from binocular disparity. This mismatch confuses our brain and makes it more difficult to fuse the left and right images, especially where there is a wide range of disparities. This is one of the reasons that most guidelines for 3D recommend keeping the key focus point of any scene on or near the screen plane, with the entire scene within a specified disparity range (typically specified as a percentage of the screen width). Limiting the disparity range also places less strain on our eyes when fusing the 3D content in our brains. This is where the SKY guideline of -1% to +3% comes from.

HIT is also used to convergence-pull. However, it is important to appreciate that changing the horizontal offset between the left-eye and right-eye images doesn't just move the objects in the scene back and forth, it also changes the 3D geometry. In some cases this can be used to good effect: as we shall see below, it can be used to correct the geometry where the video is being shown on a different size of viewing screen. In other cases, it can introduce unwanted distortions.

OmniTek's 3D Tools allow the geometric effect of HIT to be readily monitored. The 3D Wizard provides accurate real-time control of HIT and also supports image enlargement to ensure that the black borders resulting from HIT are removed if required.

Figure 15: Horizontal Image Translation (HIT) applied by the 3D Wizard, with image enlargement to remove the black edges of the images.

The Effect of Viewing Screen Size

While it is easy to understand a depth budget expressed in terms of percentages of the screen width, there is a fundamental problem in that the depths the limits correspond to are very different depending on the size of screen on which the video is displayed.

For example, a disparity of 3% on an 80cm-wide screen corresponds to a disparity of 2.4cm, which to someone sitting the screen width in front of the screen will look like a depth of about 47cm. The same percentage disparity on a screen twice as wide, for someone sitting the screen width in front of the screen, will be perceived as a depth of about 4.5m, which may not seem unreasonable. However, the same percentage on a screen a mere three times as wide corresponds to a disparity of 7.2cm, which is more than the inter-ocular distance and hence effectively beyond infinity! (Moreover, you would need to make yourself somewhat bug-eyed to resolve such a large disparity.)

Equally disturbing is the effect on the range of depths corresponding to a depth budget of -1% to +3%. On the 80cm screen, this corresponds to about 9cm in front of the screen to 47cm behind, i.e. a total depth range of 56cm, which is well within the capability of the eye to see. On the 1.6m screen, it corresponds to about 32cm in front of the screen to 4.5m behind, or a total depth range of about 5m, starting just 1.3m in front of the eyes. This could easily induce a headache.

This effect can be seen by comparing Figure 16 and Figure 17. Figure 16 shows the original footage from a wedding displayed on a 2.4m screen with a viewer 2.4m from the screen. This is the correct geometry. Notice how parallel lines in the scene (such as the balustrades) map to parallel lines in the 3D Depth Plan. Also notice that the points in the distance are just in front of the infinity point shown in the Depth Histogram.
Moreover, the sizes of objects and their distances from the viewing screen, as indicated on the 3D Depth Plans, all appear to be correct. For example, the castles are around 11m from the screen, which is believable.

Figure 17 shows the depth perception of exactly the same material when viewed on a smaller 1.47m display. The 3D Depth Plan gives instant feedback that something is wrong with the geometry. Points at infinity now appear only about 2.5m behind the screen. Lines that are parallel in the real world are no longer parallel.

This result is typical for small screens. Stereographers often apply too little horizontal offset. This has the effect of bringing all the objects closer to the viewer and hence compressing the depth effect. If the S3D footage for a 3D movie is displayed on a small home TV, this is the exact effect that will be created: depth compression. It turns out that this may be more relaxing to watch, but it is important to be aware that the 3D illusion has been diluted.

The points at infinity can readily be returned to something nearer infinity by using a downstream correction box such as OmniTek's 3D Wizard to apply HIT. This is illustrated in Figure 18. By applying a HIT of 1.67% of the screen width, the image now creates the correct S3D perception on the smaller 1.47m screen. However, while it is possible to correct these images to create precise 3D geometry, this may not turn out to be the most pleasing thing to do. What is likely to be preferable is some sort of compromise between precise 3D and ease of viewing.
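The screen-size figures quoted in this section can be reproduced directly from the disparity-to-depth formula. This is an illustrative sketch using our own function name, a viewer assumed to sit one screen-width from the screen (as in the text's examples), and an assumed 0.065m inter-ocular distance.

```python
def depth_for_pct(pct, screen_w, i=0.065):
    """Perceived depth behind the screen for a disparity given as a
    percentage of screen width, with the viewer seated one screen
    width away (d = screen_w). Uses Z = S*d/(i - S)."""
    S = pct / 100.0 * screen_w
    if S >= i:
        return float("inf")   # at or beyond infinity
    return S * screen_w / (i - S)

# The same 3% disparity gives wildly different depths per screen size:
assert abs(depth_for_pct(3, 0.8) - 0.47) < 0.01   # ~47cm on an 80cm screen
assert abs(depth_for_pct(3, 1.6) - 4.5) < 0.05    # ~4.5m on a 1.6m screen
assert depth_for_pct(3, 2.4) == float("inf")      # beyond infinity on 2.4m
```

This is exactly why a depth budget expressed as a percentage cannot guarantee a comfortable result on every display.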

Figure 16: Wedding Scene depth perception as viewed on a 2.4m screen from 2.4m away. This is the correct geometry.

Figure 17: Wedding Scene depth perception as viewed on a 1.47m screen from 1.47m away. This shows depth plane compression.

Figure 18: Wedding Scene depth perception on a 1.47m screen viewed from 1.47m away after applying a HIT of 1.67% of the screen width. This restores the correct geometry.

Viewing Position and Camera Zoom

The S3D effect captured by a stereo rig will only be precise if the ratio of the focal length of the camera to the image sensor width matches the ratio of viewer distance from the screen to viewing screen width, and the viewer is sitting central to the viewing screen. To appreciate why this is the case, compare the light rays hitting the image sensor with those leaving the viewing screen and hitting the viewer's eyes, as illustrated in Figure 19.

Figure 19: Camera geometry (left: sensor width w, focal length f, angle of view α); Viewer geometry (right: screen width W, viewing distance d, angle of view α)

The angle at which light rays approach the viewer's eyes needs to match the angle seen by the camera. In other words, the angle of view (α) needs to be the same for both the camera and the viewer. This is achieved if:

    f / w = d / W

If this ratio is maintained, the viewing screen will recreate the light rays captured by the S3D rig, but if the viewer sits at a different distance there is Depth Distortion.

The Depth Distortion associated with an incorrect viewing depth is revealed by the 3D Depth Plan display. Figure 16 showed the Depth Plan for a viewing distance of 2.4m, which corresponds to the precise S3D geometry. Compare this with Figure 20, which shows what happens when the viewer moves back to 5m away from the viewing screen. For example, the yellow crosshair has been placed on the left-hand castle in both images. The OmniTek 3D Toolset automatically calculates the perceived 3D coordinates, which are shown in the top right of the image. In Figure 16, the Z depth is 11, i.e. the castle appears approximately 11m behind the viewing screen. In Figure 20, the Z depth of this point has moved back to approximately 24m. As can be seen, moving back deepens the depth effect. Moving forward correspondingly compresses the depth effect.

Figure 20: Wedding Scene depth perception on a 2.4m screen viewed from 5m away, resulting in depth plane expansion. The crosshair now records the left-hand castle to be 24m from the viewing screen.

Notice how the infinity point is still correct and the parallel lines remain parallel, but the depth has been expanded in a non-linear way. This Depth Plane Compression and Expansion is almost impossible to avoid because it is impossible to dictate the exact viewing position relative to the screen. Fortunately, this effect is unlikely to cause a headache as the resulting geometry, while wrong, is still natural.
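The matching condition f/w = d/W can be rearranged to find where a viewer would have to sit for a given lens, which makes the zoom problem discussed next concrete. This is an illustrative sketch; the function name and the example lens/sensor values are our own.

```python
def matched_viewing_distance(focal_length_mm, sensor_width_mm, screen_width_m):
    """Viewing distance at which the viewer's angle of view matches the
    camera's, from f / w = d / W, i.e. d = W * f / w."""
    return screen_width_m * focal_length_mm / sensor_width_mm

# A 35mm lens on a 24mm-wide sensor matches a 2.4m screen viewed from 3.5m
assert abs(matched_viewing_distance(35, 24, 2.4) - 3.5) < 0.01
# A 200mm telephoto on the same sensor would need the viewer 20m away,
# which is why telephoto footage appears depth-compressed in practice
assert matched_viewing_distance(200, 24, 2.4) == 20.0
```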

A similar argument applies to the use of telephoto lenses. A telephoto lens has a long focal length, so the precise 3D geometry can only be recreated by viewing the content from a correspondingly long distance. As this is not possible in practice, the depth perception is compressed, making objects look like flat cut-outs. However, long lenses are often essential. Clearly it is not practical to move the viewer every time a different level of zoom is used but, in general, the result is acceptable, as depth plane compression is not a particularly objectionable artefact at moderate zooms. Being aware of the effect, however, allows you to make the best choices about where zoomed footage is used.

The other aspect of viewer positioning to consider is the effect of the viewer not being central to the viewing screen. In this case, the scene appears to shear in the direction in which the viewer has moved. Figure 21 illustrates this effect.

Figure 21: Viewing S3D material away from the centre of the screen causes depth plane shear: as the viewer moves left, the image shears to the right, turning a natural perspective into an unnatural one.

There is no easy solution to this other than to choose the best seat, in the middle of the cinema or on the sofa at home.

Interaxial Separation of the Cameras

Setting the camera interaxial separation to distances other than the inter-ocular distance of the viewer can produce results that have the correct geometry and are totally realistic, except that the entire 3D scene is either miniaturized or enlarged. For example, images shot with the camera interaxial separation increased to around 13cm (i.e. double the inter-ocular distance) are miniaturized by the same ratio when shown on the viewing screen: a 1m ruler will appear to be only 0.5m high (assuming all other criteria for precise geometry are maintained). This effect may be highly desirable when shooting large-scale subjects such as a football stadium, creating a mini-stadium effect, rather like being a giant in Toy Town. Similarly, reducing the camera interaxial separation gives a better 3D effect for very small objects. For example, if two small cameras can be brought close together, or a mirror rig used, near a grasshopper, the grasshopper can be made to look like a giant. The results appear realistic because all three dimensions scale in proportion. These miniaturization and enlargement effects can all be analyzed using the displays of the OmniTek 3D Toolset.
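The scaling described above is simply the ratio of the viewer's inter-ocular distance to the camera interaxial separation. A small sketch (the function name is illustrative; the usual 6.5cm inter-ocular distance is assumed):

```python
def apparent_size(true_size_m, interaxial_m, inter_ocular_m=0.065):
    """With precise geometry otherwise maintained, the whole scene scales
    by inter-ocular / interaxial: a wider rig (hyperstereo) miniaturizes,
    a narrower rig (hypostereo) enlarges."""
    return true_size_m * inter_ocular_m / interaxial_m

# 1 m ruler shot at 13 cm interaxial (double the inter-ocular distance)
print(round(apparent_size(1.0, 0.13), 3))    # 0.5 - the ruler looks 0.5 m high
# 5 cm grasshopper shot at 1 cm interaxial
print(round(apparent_size(0.05, 0.01), 3))   # 0.325 - a 32.5 cm giant
```

Because all three dimensions scale by the same factor, the result looks realistic, just at the wrong size.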

Edge Effects and Floating Windows

For S3D material, the viewing screen is rather like a window, with the 3D content behind or in front of it. Where the S3D content is behind the viewing screen, the effect is quite natural, because we are used to looking through windows and seeing objects occluded differently by the window frame in the left- and right-eye views. However, when an object is in front of a window, we expect that object to occlude the window frame as it passes in front of it. Clearly this cannot happen with S3D material, which creates a very unnatural effect: objects in front of the screen appear to pass behind the window frame formed by the viewing screen surround. It is possible to spot where this happens with the OmniTek 3D Toolset, because the 3D Depth Plan clearly shows both where objects appear in front of the screen and how close to the edge of the screen they are.

If it is not possible to avoid shooting images containing such material, the 3D Wizard offers two potential ways of correcting the shot. The first is to use HIT to push the 3D depth of the scene backwards by increasing the horizontal offset. This technique is not without its own pitfalls, however. Firstly, changing the horizontal offset like this will make everything appear at the wrong depth. Worse, individual disparities may be increased so far that the disparity of some objects exceeds the viewer's inter-ocular distance, making them difficult to resolve into a single object.

The alternative approach is to use what is known as a floating window. This is an illusion created by cropping the left edge of the right-eye image and the right edge of the left-eye image. Doing so appears to bring the viewing screen forward, so that an object that previously appeared unnaturally occluded by the edge of the screen as it moved out of shot now appears perfectly naturally occluded by the floating window. This is illustrated in Figure 22.
Figure 22: Floating Window Geometry. Left: a regular display without a floating window, where an object appearing in front of the screen is unnaturally occluded by the viewing screen surround as it exits. Right: with sections trimmed from the left-eye and right-eye images, the object appears behind the floating window, so its occlusion by the viewing screen surround as it exits now looks natural.
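The trimming that creates a floating window can be sketched in a few lines (numpy; the function name and the 40-pixel trim are illustrative, and a real implementation would typically feather the edge rather than cut hard):

```python
import numpy as np

def floating_window(left, right, px):
    """Create a floating window by blacking out the left edge of the
    right-eye image and the right edge of the left-eye image.  The
    masked edges form a virtual frame in front of the screen plane."""
    l, r = left.copy(), right.copy()
    l[:, -px:] = 0   # trim right edge of left-eye image
    r[:, :px] = 0    # trim left edge of right-eye image
    return l, r

left = np.full((1080, 1920), 128, dtype=np.uint8)
right = np.full((1080, 1920), 128, dtype=np.uint8)
l, r = floating_window(left, right, 40)
print(int(l[0, -1]), int(r[0, 0]))  # 0 0 - both trimmed edges are black
```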

This effect works best if the viewing screen surround is a continuous black wall with no detail to confuse the illusion. The 3D Wizard includes a floating window control that enables the floating window to be adjusted on the fly, and the OmniTek 3D Toolset includes the facility to monitor the location of the virtual window automatically. Of course, the effect of introducing a floating window also needs to be considered: the transition to a floating window should be gradual and its subsequent use subtle, as sudden changes are not desirable.

Adding Graphics

When adding captioning or other graphics to an S3D scene, it is important to consider the potential interaction with the scene. As a general rule, graphics should always be in front of the S3D scene content; if not, a graphic can collide with the scene content, creating a mismatch between the binocular 3D perception and the visual cue given by object occlusion. To avoid problems, graphics are generally placed in front of the screen plane. In the football sequence shown in Figure 11, the position of the graphic (top left) in relation to the rest of the scene content can clearly be seen on the 3D Depth Plan views. If necessary, the 3D Wizard can be used to adjust the horizontal offset to move the video backwards.

Mirror Rigs

Mirror rigs offer some significant advantages in terms of interaxial adjustment; specifically, they allow the creation of very small interaxial separations. However, one of the resulting images is a mirror reflection of the true image, and the mirror also causes a colour balance difference between the left and right cameras. The extended OmniTek 3D Toolset introduced at V2.4 (see page 24) is able to analyse S3D video taken straight from such a mirror rig and report the extent of the effect on the colour balance, while the OmniTek 3D Wizard supports the image inversion and colour correction needed to correct these aberrations.
Figure 23: Correcting mirror-rig image inversion with the 3D Wizard.
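The inversion correction itself is conceptually a simple flip of the mirrored image. A minimal sketch (numpy; which eye is mirrored, and about which axis, depends on the rig, so the function name and defaults are illustrative only):

```python
import numpy as np

def unflip_mirror_image(img, axis="horizontal"):
    """Undo mirror-rig inversion by flipping the affected image back."""
    if axis == "horizontal":
        return img[:, ::-1]   # reverse columns (mirror about vertical axis)
    return img[::-1, :]       # reverse rows (mirror about horizontal axis)

frame = np.arange(12).reshape(3, 4)
# Flipping twice recovers the original, so the operation is its own inverse
restored = unflip_mirror_image(unflip_mirror_image(frame))
print(np.array_equal(restored, frame))  # True
```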

Colour Issues

To achieve a good 3D effect, it is important that the colour balance of the right-eye image matches that of the left-eye image; if the colours don't match, the 3D effect can be destroyed. However, nominally identical cameras can have sufficiently different colour calibrations to have a noticeable effect on the colour balance, while the mirror in beam-splitter rigs produces a difference in colour between the left- and right-eye images.

Various tools are available for identifying colour differences between the left and right cameras. One option is a checkerboard image, created from alternate squares of the left and right images. Where the colours match, regions of the picture that are of a single colour, such as patches of sky, look smooth; where they don't, the differences show up as a checkerboard pattern in these regions.

Figure 24: Checkerboard image of a scene where the colours are mismatched

With the OmniTek 3D Toolset, colour balance can also be accurately assessed using special 3D versions of the waveform, vectorscope and pixel histogram displays, in which the analyses from the left- and right-eye images are displayed together. From V2.4, the Toolset can also determine the differences in the gain and lift applied to the right image compared to the left, and report these through diagrams that compare the colour components of individual pixels and plot a best-fit line through them, referred to as Chroma Sabres (see page 26). The waveform display shows alternating segments from the left- and right-eye image traces; in the vectorscope, the traces are simply superimposed. In both cases, red/cyan anaglyph colours are used to distinguish the left and right images. Examples are shown in Figures 25 and 26. Similarly, the 3D histogram display uses anaglyph colours to enable the left and right histograms to be superimposed, as shown in Figure 27.
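A checkerboard composite of the kind shown in Figure 24 can be sketched as follows (numpy; the 64-pixel block size is illustrative):

```python
import numpy as np

def checkerboard(left, right, block=64):
    """Composite built from alternate blocks of the left and right images.
    In flat regions such as sky, any colour mismatch between the cameras
    shows up as a visible checker pattern."""
    h, w = left.shape[:2]
    ys, xs = np.indices((h, w))
    mask = ((ys // block) + (xs // block)) % 2 == 0   # True = take left image
    return np.where(mask, left, right)

left = np.full((256, 256), 100, dtype=np.uint8)
right = np.full((256, 256), 110, dtype=np.uint8)   # slightly brighter camera
out = checkerboard(left, right)
print(int(out[0, 0]), int(out[0, 64]))  # 100 110 - mismatch visible as checks
```

With perfectly matched cameras the two constants would be equal and the composite would be featureless, which is exactly what the stereographer is aiming for.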
The forthcoming 3D Wizard also allows the colour to be corrected.

Figure 25: Example 3D Waveform display

Figure 26: 3D Vectorscope

Figure 27: 3D Pixel Histogram in Overlay mode

5. New Features in V2.4

V2.4 of the OTM/OTR application software features a number of enhancements to the monitoring facilities offered when the VIEW_3D option is installed.

i. Real World Projection

A new mode has been added to the 3D Depth Plan view which offers a real-world representation that you can inspect from all angles.

Figure 28: Top row: Picture display and 3D Depth Map of an example 3D image. Bottom row: Example Real World Projections of the same image.

This Real World Projection is offered as a Map Display Type alongside the Plan and Elevation versions of the 3D Depth Plan. Lines outlining the geometry emanate from the viewer's position. The projection can be rotated horizontally and vertically using either a mouse or the HORIZ and VERT knobs of an OTM or OTR Control Panel. You can also move in and out on the image using either the mouse's scroll wheel or the Control Panel GEN knob. The plane corresponding to the minimum depth of the depth budget is shown in red, the corresponding maximum depth plane is coloured violet, and the screen plane is shown with a white frame.

ii. Rig Alignment Support

V2.4 sees the addition of a 3D Meters display to the Status category.

Figure 29: 3D Meters display

The new display comprises a number of meters assessing differences between corresponding Left and Right images, from which it is possible to see:

- What range of depths is covered
- Whether the cameras are vertically aligned and, if not, how far out they are (in lines)
- Whether there are signs of camera rotation (roll)
- Whether the cameras are using the same or different zooms
- How sharp the images are, with the top half of the meter showing the sharpness of the left image and the bottom half that of the right image: ideally the markers should sit one above the other, indicating that the images are equally sharp
- The difference in colour gain and lift in the Right image compared with the Left image, shown separately for the individual colour components

Traffic-light colouring, both for the meter markers and alongside the headings, is used to indicate whether the value being measured is within acceptable limits (set as part of the Video Configuration) or in error: Green for Good, Yellow for Warning and Red for Error. The information shown can be used to make appropriate adjustments to the cameras and/or to the video images being recorded.
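The traffic-light logic is straightforward to sketch. The limits below are hypothetical; in the product they come from the Video Configuration:

```python
def traffic_light(value, warn_limit, error_limit):
    """Traffic-light classification as used by the 3D Meters display:
    Green within the warning limit, Yellow between the warning and error
    limits, Red beyond the error limit."""
    v = abs(value)
    if v <= warn_limit:
        return "Green"
    if v <= error_limit:
        return "Yellow"
    return "Red"

# e.g. vertical misalignment in lines, with hypothetical limits of 2 / 5 lines
print(traffic_light(1.0, 2, 5))   # Green
print(traffic_light(3.5, 2, 5))   # Yellow
print(traffic_light(-7.0, 2, 5))  # Red - sign is irrelevant, magnitude counts
```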

iii. Chroma Sabres

Differences in the Gain and Lift between the different colour components are also the subject of a new Chroma Sabres display within the GAMUT category.

Figure 30: Example Chroma Sabres display

This display comprises a set of diamond-shaped plots, one for each colour component, comparing the colour values at each pixel in the Right image against those of the corresponding pixel in the Left image. If the Gain and Lift applied to the two images were identical, each of the three plots would comprise a single vertical line running from the bottom of the diamond to the top. In practice, however, the plots tend to feature a mass of points scattered around the vertical, biased either to the left or to the right. To interpret these distributions, a best-fit line is drawn through the points, the features of which are read as a difference in Gain and Lift between the Left and Right images. Where the line leans to the right, the Gain is greater in the Right image (shown as a positive value); where it leans to the left, the Gain is greater in the Left image (shown as a negative value). The Lift is given by the offset from the origin at the bottom of the plot, again positive when in the right half of the plot and negative when in the left half. The values shown here are also shown in the 3D Meters display described above.
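The best-fit line through the per-pixel (Left, Right) value pairs is an ordinary linear fit. A hedged sketch of how such a gain/lift estimate could be extracted for one colour component, using synthetic data (the function name is illustrative, not the product's implementation):

```python
import numpy as np

def gain_lift(left_vals, right_vals):
    """Fit right ~= (1 + gain) * left + lift for one colour component.
    A line leaning right of vertical means higher gain in the Right image;
    the intercept at the origin gives the lift."""
    slope, intercept = np.polyfit(left_vals, right_vals, 1)
    return slope - 1.0, intercept   # gain difference, lift

rng = np.random.default_rng(0)
left = rng.uniform(0.0, 1.0, 10_000)     # normalized pixel values, Left image
right = 1.05 * left + 0.02               # Right image: +5% gain, +0.02 lift
gain, lift = gain_lift(left, right)
print(round(gain, 3), round(lift, 3))    # 0.05 0.02
```

On real footage the points scatter because of noise and genuine scene differences, which is why a best-fit line rather than a direct ratio is needed.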

iv. Support for Side-by-Side, Top+Bottom and Mirror Formats

The 3D package for the OTM and OTR systems initially only catered for Stereo 3D provided as separate Left-image and Right-image streams, but Stereo 3D is often transmitted in a Side-by-Side or Top+Bottom format. Where a mirror camera set-up has been used, the Stereo 3D can also be transmitted with one of the images flipped horizontally or vertically. To cater for these additional formats, V2.4 has a new Channel Sources option within the 3D Settings on the System page of the Config window. This caters for all the possible options:

- Separate Left and Right image streams, provided either with the Left images on Input 1 and the Right images on Input 2 or the other way around
- Side-by-Side images, provided either on Input 1 or on Input 2
- Top+Bottom images, provided either on Input 1 or on Input 2
- The various possible combinations of the Left image and the Right image being normal, flipped horizontally or flipped vertically

The way in which the Left and Right images are arranged is set at the time 3D analysis is enabled.

v. Additional 3D Status Information

V2.4 also sees a significant enhancement in the aspects of Stereo 3D signals that are monitored and reported through the Status View. In line with the other video (as distinct from audio) parameters monitored through the Status View, the values at which you want either a warning to be given or an error to be reported are set on the Video Config page of the Config window. The monitored values are then displayed both on the Overview page of the Status View and on a specific 3D Status Summary page. To help interpretation, the information shown includes details of the geometry being used to interpret the data, the depth budget and the search budget.

6. Conclusion

As this white paper shows, producing S3D video presents many challenges. S3D production is a rapidly advancing field and ideas of what represents best practice are still evolving.
OmniTek's 3D Toolset provides a suite of tools that enable stereographers to fully analyse the perceived geometry and colour balance of S3D material. The forthcoming 3D Wizard enables colour and geometry to be adjusted both live on air and in post-production houses.

7. Glossary

General Terms

2D: Scene with width and height.

3D: Scene with width, height, and depth.

Binocular Disparity: See Horizontal Disparity.

Binocular Vision: Vision resulting from the use of both eyes.

Convergence (eyes): The inward rotation of the eyes to focus on an object at a particular distance.

Convergence (stereography): Setting the scene's position in relation to the screen, either by toeing-in the cameras or by applying horizontal image translation (HIT) to the images.

Depth Budget: The range of disparities into which most if not all of the disparities measured within an image should fall. The limits of the range are often expressed as a percentage of screen width.

Depth Perception: The 3D depth effect seen by the viewer.

Disparity: The displacement of objects between the left-eye and right-eye images.

Divergence (eyes): The opposite of convergence; both unnatural and uncomfortable.

Eyeball Focus: The focus of the eyes on sharp detail. This is directly correlated with depth.

Horizontal Disparity: The horizontal displacement of objects between the left-eye and right-eye images. Positive horizontal disparity makes an object appear behind the viewing screen; zero horizontal disparity puts the object in the screen plane; negative horizontal disparity makes the object appear in front of the screen.

Inter-ocular distance (i): The distance between the eyes. Usually taken to be 6.5cm.

Negative Disparity: The right-eye image of an object lies to the left of the left-eye image. Makes the object appear in front of the viewing screen.

Observer Motion Disparity: The relative motion of objects in the scene as the viewer's head moves.

Parallax: The effect whereby objects at a distance move to different positions when viewed by just the left eye or just the right eye. Objects beyond the point on which the eyes are converged appear to move to the left when viewed with the left eye and to the right when viewed with the right eye: this is known as positive parallax. Objects closer than the point at which the eyes are converged appear to move to the right when viewed with the left eye and to the left when viewed with the right eye: this is known as negative parallax. This effect is reflected in the disparities between left-eye and right-eye images, which leads some writers to refer to disparity as parallax.

Positive Disparity: The right-eye image of an object lies to the right of the left-eye image. Makes the object appear behind the viewing screen.

Screen Coordinates (x, y): The coordinates of objects within the image displayed on the viewing screen.

Screen Plane: The flat, vertical, two-dimensional surface in which the viewing screen is positioned.


More information

The Photosynth Photography Guide

The Photosynth Photography Guide The Photosynth Photography Guide Creating the best synth starts with the right photos. This guide will help you understand how to take photos that Photosynth can use to best advantage. Reading it could

More information

A PHOTOGRAMMETRIC APPRAOCH FOR AUTOMATIC TRAFFIC ASSESSMENT USING CONVENTIONAL CCTV CAMERA

A PHOTOGRAMMETRIC APPRAOCH FOR AUTOMATIC TRAFFIC ASSESSMENT USING CONVENTIONAL CCTV CAMERA A PHOTOGRAMMETRIC APPRAOCH FOR AUTOMATIC TRAFFIC ASSESSMENT USING CONVENTIONAL CCTV CAMERA N. Zarrinpanjeh a, F. Dadrassjavan b, H. Fattahi c * a Islamic Azad University of Qazvin - nzarrin@qiau.ac.ir

More information

Creating a Planogram Database

Creating a Planogram Database Creating a Planogram Database Creating a planogram database is perhaps the most difficult part of creating planograms. Once your database is finished, however, it can be maintained with little effort and

More information

Quantifying Spatial Presence. Summary

Quantifying Spatial Presence. Summary Quantifying Spatial Presence Cedar Riener and Dennis Proffitt Department of Psychology, University of Virginia Keywords: spatial presence, illusions, visual perception Summary The human visual system uses

More information

Light and its effects

Light and its effects Light and its effects Light and the speed of light Shadows Shadow films Pinhole camera (1) Pinhole camera (2) Reflection of light Image in a plane mirror An image in a plane mirror is: (i) the same size

More information

Introduction to CATIA V5

Introduction to CATIA V5 Introduction to CATIA V5 Release 16 (A Hands-On Tutorial Approach) Kirstie Plantenberg University of Detroit Mercy SDC PUBLICATIONS Schroff Development Corporation www.schroff.com www.schroff-europe.com

More information

Twelve. Figure 12.1: 3D Curved MPR Viewer Window

Twelve. Figure 12.1: 3D Curved MPR Viewer Window Twelve The 3D Curved MPR Viewer This Chapter describes how to visualize and reformat a 3D dataset in a Curved MPR plane: Curved Planar Reformation (CPR). The 3D Curved MPR Viewer is a window opened from

More information

What s New V 11. Preferences: Parameters: Layout/ Modifications: Reverse mouse scroll wheel zoom direction

What s New V 11. Preferences: Parameters: Layout/ Modifications: Reverse mouse scroll wheel zoom direction What s New V 11 Preferences: Reverse mouse scroll wheel zoom direction Assign mouse scroll wheel Middle Button as Fine tune Pricing Method (Manufacturing/Design) Display- Display Long Name Parameters:

More information

Introduction to 3D Imaging

Introduction to 3D Imaging Chapter 5 Introduction to 3D Imaging 5.1 3D Basics We all remember pairs of cardboard glasses with blue and red plastic lenses used to watch a horror movie. This is what most people still think of when

More information

Convex Mirrors. Ray Diagram for Convex Mirror

Convex Mirrors. Ray Diagram for Convex Mirror Convex Mirrors Center of curvature and focal point both located behind mirror The image for a convex mirror is always virtual and upright compared to the object A convex mirror will reflect a set of parallel

More information

mouse (or the option key on Macintosh) and move the mouse. You should see that you are able to zoom into and out of the scene.

mouse (or the option key on Macintosh) and move the mouse. You should see that you are able to zoom into and out of the scene. A Ball in a Box 1 1 Overview VPython is a programming language that is easy to learn and is well suited to creating 3D interactive models of physical systems. VPython has three components that you will

More information

Chapter 1. Creating Sketches in. the Sketch Mode-I. Evaluation chapter. Logon to www.cadcim.com for more details. Learning Objectives

Chapter 1. Creating Sketches in. the Sketch Mode-I. Evaluation chapter. Logon to www.cadcim.com for more details. Learning Objectives Chapter 1 Creating Sketches in Learning Objectives the Sketch Mode-I After completing this chapter you will be able to: Use various tools to create a geometry. Dimension a sketch. Apply constraints to

More information

How To Fuse A Point Cloud With A Laser And Image Data From A Pointcloud

How To Fuse A Point Cloud With A Laser And Image Data From A Pointcloud REAL TIME 3D FUSION OF IMAGERY AND MOBILE LIDAR Paul Mrstik, Vice President Technology Kresimir Kusevic, R&D Engineer Terrapoint Inc. 140-1 Antares Dr. Ottawa, Ontario K2E 8C4 Canada paul.mrstik@terrapoint.com

More information

Understand the Sketcher workbench of CATIA V5.

Understand the Sketcher workbench of CATIA V5. Chapter 1 Drawing Sketches in Learning Objectives the Sketcher Workbench-I After completing this chapter you will be able to: Understand the Sketcher workbench of CATIA V5. Start a new file in the Part

More information

1051-232 Imaging Systems Laboratory II. Laboratory 4: Basic Lens Design in OSLO April 2 & 4, 2002

1051-232 Imaging Systems Laboratory II. Laboratory 4: Basic Lens Design in OSLO April 2 & 4, 2002 05-232 Imaging Systems Laboratory II Laboratory 4: Basic Lens Design in OSLO April 2 & 4, 2002 Abstract: For designing the optics of an imaging system, one of the main types of tools used today is optical

More information

WPA World Artistic Pool Championship. Official Shot / Challenge Program. November 8, 2011 1

WPA World Artistic Pool Championship. Official Shot / Challenge Program. November 8, 2011 1 WPA World Artistic Pool Championship 2012 Official Shot / Challenge Program November 8, 2011 1 Revision History November 30, 2010: Initial version of shot program. January 10, 2011: February 14, 2011:

More information

INTRODUCTION TO RENDERING TECHNIQUES

INTRODUCTION TO RENDERING TECHNIQUES INTRODUCTION TO RENDERING TECHNIQUES 22 Mar. 212 Yanir Kleiman What is 3D Graphics? Why 3D? Draw one frame at a time Model only once X 24 frames per second Color / texture only once 15, frames for a feature

More information

2-1 Position, Displacement, and Distance

2-1 Position, Displacement, and Distance 2-1 Position, Displacement, and Distance In describing an object s motion, we should first talk about position where is the object? A position is a vector because it has both a magnitude and a direction:

More information

Using Microsoft Picture Manager

Using Microsoft Picture Manager Using Microsoft Picture Manager Storing Your Photos It is suggested that a county store all photos for use in the County CMS program in the same folder for easy access. For the County CMS Web Project it

More information

EXPERIMENT 6 OPTICS: FOCAL LENGTH OF A LENS

EXPERIMENT 6 OPTICS: FOCAL LENGTH OF A LENS EXPERIMENT 6 OPTICS: FOCAL LENGTH OF A LENS The following website should be accessed before coming to class. Text reference: pp189-196 Optics Bench a) For convenience of discussion we assume that the light

More information

SAM PuttLab. Reports Manual. Version 5

SAM PuttLab. Reports Manual. Version 5 SAM PuttLab Reports Manual Version 5 Reference The information contained in this document is subject to change without notice. The software described in this document is furnished under a license agreement.

More information

Basic AutoSketch Manual

Basic AutoSketch Manual Basic AutoSketch Manual Instruction for students Skf-Manual.doc of 3 Contents BASIC AUTOSKETCH MANUAL... INSTRUCTION FOR STUDENTS... BASIC AUTOSKETCH INSTRUCTION... 3 SCREEN LAYOUT... 3 MENU BAR... 3 FILE

More information

3D Te l epr e s e n c e

3D Te l epr e s e n c e 3D Te l epr e s e n c e 3D TelePresence delivers the ultimate experience in communication over a distance with aligned eye contact and a three dimensional, life-size sense of presence. Eye Contact systems

More information

Using Excel (Microsoft Office 2007 Version) for Graphical Analysis of Data

Using Excel (Microsoft Office 2007 Version) for Graphical Analysis of Data Using Excel (Microsoft Office 2007 Version) for Graphical Analysis of Data Introduction In several upcoming labs, a primary goal will be to determine the mathematical relationship between two variable

More information

How To Understand General Relativity

How To Understand General Relativity Chapter S3 Spacetime and Gravity What are the major ideas of special relativity? Spacetime Special relativity showed that space and time are not absolute Instead they are inextricably linked in a four-dimensional

More information

RAY OPTICS II 7.1 INTRODUCTION

RAY OPTICS II 7.1 INTRODUCTION 7 RAY OPTICS II 7.1 INTRODUCTION This chapter presents a discussion of more complicated issues in ray optics that builds on and extends the ideas presented in the last chapter (which you must read first!)

More information

Thin Lenses Drawing Ray Diagrams

Thin Lenses Drawing Ray Diagrams Drawing Ray Diagrams Fig. 1a Fig. 1b In this activity we explore how light refracts as it passes through a thin lens. Eyeglasses have been in use since the 13 th century. In 1610 Galileo used two lenses

More information

A Short Introduction to Computer Graphics

A Short Introduction to Computer Graphics A Short Introduction to Computer Graphics Frédo Durand MIT Laboratory for Computer Science 1 Introduction Chapter I: Basics Although computer graphics is a vast field that encompasses almost any graphical

More information

Video in Logger Pro. There are many ways to create and use video clips and still images in Logger Pro.

Video in Logger Pro. There are many ways to create and use video clips and still images in Logger Pro. Video in Logger Pro There are many ways to create and use video clips and still images in Logger Pro. Insert an existing video clip into a Logger Pro experiment. Supported file formats include.avi and.mov.

More information

What is a DSLR and what is a compact camera? And newer versions of DSLR are now mirrorless

What is a DSLR and what is a compact camera? And newer versions of DSLR are now mirrorless 1 2 What is a DSLR and what is a compact camera? And newer versions of DSLR are now mirrorless 3 The Parts Your camera is made up of many parts, but there are a few in particular that we want to look at

More information

Roof Tutorial. Chapter 3:

Roof Tutorial. Chapter 3: Chapter 3: Roof Tutorial The majority of Roof Tutorial describes some common roof styles that can be created using settings in the Wall Specification dialog and can be completed independent of the other

More information

Common Core Unit Summary Grades 6 to 8

Common Core Unit Summary Grades 6 to 8 Common Core Unit Summary Grades 6 to 8 Grade 8: Unit 1: Congruence and Similarity- 8G1-8G5 rotations reflections and translations,( RRT=congruence) understand congruence of 2 d figures after RRT Dilations

More information

Watch Your Garden Grow

Watch Your Garden Grow Watch Your Garden Grow The Brinno GardenWatchCam is a low cost, light weight, weather resistant, battery operated time-lapse camera that captures the entire lifecycle of any garden season by taking photos

More information

C) D) As object AB is moved from its present position toward the left, the size of the image produced A) decreases B) increases C) remains the same

C) D) As object AB is moved from its present position toward the left, the size of the image produced A) decreases B) increases C) remains the same 1. For a plane mirror, compared to the object distance, the image distance is always A) less B) greater C) the same 2. Which graph best represents the relationship between image distance (di) and object

More information

Welcome to CorelDRAW, a comprehensive vector-based drawing and graphic-design program for the graphics professional.

Welcome to CorelDRAW, a comprehensive vector-based drawing and graphic-design program for the graphics professional. Workspace tour Welcome to CorelDRAW, a comprehensive vector-based drawing and graphic-design program for the graphics professional. In this tutorial, you will become familiar with the terminology and workspace

More information

Creating Your Own 3D Models

Creating Your Own 3D Models 14 Creating Your Own 3D Models DAZ 3D has an extensive growing library of 3D models, but there are times that you may not find what you want or you may just want to create your own model. In either case

More information

B2.53-R3: COMPUTER GRAPHICS. NOTE: 1. There are TWO PARTS in this Module/Paper. PART ONE contains FOUR questions and PART TWO contains FIVE questions.

B2.53-R3: COMPUTER GRAPHICS. NOTE: 1. There are TWO PARTS in this Module/Paper. PART ONE contains FOUR questions and PART TWO contains FIVE questions. B2.53-R3: COMPUTER GRAPHICS NOTE: 1. There are TWO PARTS in this Module/Paper. PART ONE contains FOUR questions and PART TWO contains FIVE questions. 2. PART ONE is to be answered in the TEAR-OFF ANSWER

More information

WAVELENGTH OF LIGHT - DIFFRACTION GRATING

WAVELENGTH OF LIGHT - DIFFRACTION GRATING PURPOSE In this experiment we will use the diffraction grating and the spectrometer to measure wavelengths in the mercury spectrum. THEORY A diffraction grating is essentially a series of parallel equidistant

More information

Free 15-day trial. Signata Waveform Viewer Datasheet

Free 15-day trial. Signata Waveform Viewer Datasheet The Signata Waveform Viewer allows you to view and analyze data captured from your oscilloscope anytime and anywhere. You will gain instant insight into device performance, quickly determine the root cause

More information

3D Viewer. user's manual 10017352_2

3D Viewer. user's manual 10017352_2 EN 3D Viewer user's manual 10017352_2 TABLE OF CONTENTS 1 SYSTEM REQUIREMENTS...1 2 STARTING PLANMECA 3D VIEWER...2 3 PLANMECA 3D VIEWER INTRODUCTION...3 3.1 Menu Toolbar... 4 4 EXPLORER...6 4.1 3D Volume

More information

Basic 2D Design Be sure you have the latest information!

Basic 2D Design Be sure you have the latest information! Basic 2D Design mastercam x getting started tutorials Basic 2D Design December 2011 Be sure you have the latest information! Information might have been changed or added since this document was published.

More information

Photoshop- Image Editing

Photoshop- Image Editing Photoshop- Image Editing Opening a file: File Menu > Open Photoshop Workspace A: Menus B: Application Bar- view options, etc. C: Options bar- controls specific to the tool you are using at the time. D:

More information

Virtual CRASH 3.0 Staging a Car Crash

Virtual CRASH 3.0 Staging a Car Crash Virtual CRASH 3.0 Staging a Car Crash Virtual CRASH Virtual CRASH 3.0 Staging a Car Crash Changes are periodically made to the information herein; these changes will be incorporated in new editions of

More information

Scanners and How to Use Them

Scanners and How to Use Them Written by Jonathan Sachs Copyright 1996-1999 Digital Light & Color Introduction A scanner is a device that converts images to a digital file you can use with your computer. There are many different types

More information

Head-Coupled Perspective

Head-Coupled Perspective Head-Coupled Perspective Introduction Head-Coupled Perspective (HCP) refers to a technique of rendering a scene that takes into account the position of the viewer relative to the display. As a viewer moves

More information

Guide To Creating Academic Posters Using Microsoft PowerPoint 2010

Guide To Creating Academic Posters Using Microsoft PowerPoint 2010 Guide To Creating Academic Posters Using Microsoft PowerPoint 2010 INFORMATION SERVICES Version 3.0 July 2011 Table of Contents Section 1 - Introduction... 1 Section 2 - Initial Preparation... 2 2.1 Overall

More information

Quick Start Tutorial Imperial version

Quick Start Tutorial Imperial version Quick Start Tutorial Imperial version 1996-2006 Cadsoft Corporation. No part of this guide or the accompanying software may be reproduced or transmitted, electronically or mechanically, without written

More information

What is Camber, Castor and Toe?

What is Camber, Castor and Toe? What is Camber, Castor and Toe? Camber is probably the most familiar suspension term to owners. It is the angle of the wheels relative to the surface of the road, looking at the car from the front or rear.

More information

Physical Science Study Guide Unit 7 Wave properties and behaviors, electromagnetic spectrum, Doppler Effect

Physical Science Study Guide Unit 7 Wave properties and behaviors, electromagnetic spectrum, Doppler Effect Objectives: PS-7.1 Physical Science Study Guide Unit 7 Wave properties and behaviors, electromagnetic spectrum, Doppler Effect Illustrate ways that the energy of waves is transferred by interaction with

More information

House Design Tutorial

House Design Tutorial Chapter 2: House Design Tutorial This House Design Tutorial shows you how to get started on a design project. The tutorials that follow continue with the same plan. When we are finished, we will have created

More information

Eye Tracking Instructions

Eye Tracking Instructions Eye Tracking Instructions [1] Check to make sure that the eye tracker is properly connected and plugged in. Plug in the eye tracker power adaptor (the green light should be on. Make sure that the yellow

More information

Quick Start Tutorial Metric version

Quick Start Tutorial Metric version Quick Start Tutorial Metric version 1996-2009 Cadsoft Corporation. No part of this guide or the accompanying software may be reproduced or transmitted, electronically or mechanically, without written permission

More information

Freehand Sketching. Sections

Freehand Sketching. Sections 3 Freehand Sketching Sections 3.1 Why Freehand Sketches? 3.2 Freehand Sketching Fundamentals 3.3 Basic Freehand Sketching 3.4 Advanced Freehand Sketching Key Terms Objectives Explain why freehand sketching

More information