3. Three-Dimensional Graphics

3.1 3D Vector Operations

3.1.1 Translation:

x' = x + a_1
y' = y + a_2
z' = z + a_3

or in matrix format:

[x'\ y'\ z'\ 1] = [x\ y\ z\ 1]
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ a_1 & a_2 & a_3 & 1 \end{bmatrix}

3.1.2 Rotations:

Right-hand rule: with your thumb representing the axis, your fingers show the direction of positive rotation around that axis.

Using similar derivations as in 2D previously (Vera Anand book), we get the following matrices for rotation about the co-ordinate axes through angle a:

R_x = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos a & \sin a \\ 0 & -\sin a & \cos a \end{bmatrix}

(as a point rotates about the x axis, its x component remains unchanged)

R_y = \begin{bmatrix} \cos a & 0 & -\sin a \\ 0 & 1 & 0 \\ \sin a & 0 & \cos a \end{bmatrix}

R_z = \begin{bmatrix} \cos a & \sin a & 0 \\ -\sin a & \cos a & 0 \\ 0 & 0 & 1 \end{bmatrix}

To use one of these matrices, e.g.: [x'\ y'\ z'] = [x\ y\ z]\, R_x
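The row-vector convention above ([x' y' z'] = [x y z] R) can be sketched in NumPy as follows. This is a minimal sketch; the function names (rot_x, translate, etc.) are my own, not from the notes:

```python
import numpy as np

def rot_x(a):
    # As a point rotates about the x axis, its x component is unchanged
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,   s],
                     [0.0,  -s,   c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[  c, 0.0,  -s],
                     [0.0, 1.0, 0.0],
                     [  s, 0.0,   c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[  c,   s, 0.0],
                     [ -s,   c, 0.0],
                     [0.0, 0.0, 1.0]])

def translate(a1, a2, a3):
    # 4x4 homogeneous translation, applied as [x y z 1] @ T
    T = np.eye(4)
    T[3, :3] = [a1, a2, a3]
    return T

# Right-hand rule check: +90 degrees about x carries the y axis onto the z axis
print(np.round([0, 1, 0] @ rot_x(np.pi / 2), 3))   # -> [0. 0. 1.]
```

Note that with row vectors the point multiplies the matrix from the left, which is why these matrices are the transposes of the column-vector forms found in some other texts.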
3.2 3D Viewing Operations

Projection: the task of displaying a 3D object on a 2D surface (graphics display)

Types of projection:
e.g. projection onto the xy plane:

[x'\ y'\ 0\ 1] = [x\ y\ z\ 1]
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

Effectively, we are simply ignoring the z component.

3.2.1 Multiview Orthographic Projection

Produces multiple views of the object by projecting it onto one of the 6 possible planes.

3.2.2 Isometric Projection:

Angle between each (projected) principal axis is 120 degrees.
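The "ignore z" behaviour of the xy-plane projection can be checked directly. A minimal sketch; the matrix name is mine:

```python
import numpy as np

# Orthographic projection onto the xy plane: the z column is zeroed out
M_ORTHO_XY = np.array([[1, 0, 0, 0],
                       [0, 1, 0, 0],
                       [0, 0, 0, 0],
                       [0, 0, 0, 1]], dtype=float)

p = np.array([3.0, 4.0, 5.0, 1.0])   # homogeneous point
print(p @ M_ORTHO_XY)                # -> [3. 4. 0. 1.] : z is simply dropped
```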
Steps:
1. Rotations and translations operate on the object to produce a good viewing angle (sometimes called 'tilting')
2. Orthographic projection (usually onto the xy plane) is performed

e.g. step 1: rotation about the y axis, then rotation about the x axis - this maintains the verticality of lines in the projection.

The 'tilt' matrix, combining the y- and x-axis rotations:

[M_{TILT}] = R_y R_x = \begin{bmatrix} \cos a_y & \sin a_y \sin a_x & -\sin a_y \cos a_x & 0 \\ 0 & \cos a_x & \sin a_x & 0 \\ \sin a_y & -\cos a_y \sin a_x & \cos a_y \cos a_x & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

The isometric projection is a combination of M_TILT and an orthographic projection, i.e.:

[M_{ISO}] = [M_{TILT}]
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
= \begin{bmatrix} \cos a_y & \sin a_y \sin a_x & 0 & 0 \\ 0 & \cos a_x & 0 & 0 \\ \sin a_y & -\cos a_y \sin a_x & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

Finally, specific values for a_x and a_y are required. In the above example, these values are (derivation Anand pp. 159-160):

a_x = 35.26 degrees
a_y = 45 degrees

3.2.3 Perspective Projection:

All projectors emanate from a centre of projection ('vanishing point'). More visually realistic than the projections previously discussed.
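The defining property of the isometric projection - equal foreshortening of the three principal axes - can be verified numerically with the tilt angles above. A sketch; the variable names are mine, and only the 3x3 part of the matrices is used:

```python
import numpy as np

ax = np.radians(35.26)   # rotation about x (Anand's derived value)
ay = np.radians(45.0)    # rotation about y

ca, sa = np.cos(ax), np.sin(ax)
cb, sb = np.cos(ay), np.sin(ay)

# M_TILT = Ry Rx in the row-vector convention, written out directly (3x3 part)
M_TILT = np.array([[ cb, sb * sa, -sb * ca],
                   [0.0,      ca,       sa],
                   [ sb, -cb * sa,  cb * ca]])

# Orthographic projection onto the xy plane: zero the z column
M_ORTHO = np.diag([1.0, 1.0, 0.0])
M_ISO = M_TILT @ M_ORTHO

# Project the three unit axes; isometric => all three foreshortened equally
lengths = [np.linalg.norm(e @ M_ISO) for e in np.eye(3)]
print(np.round(lengths, 4))   # all approximately 0.8165 (= sqrt(2/3))
```

The equal scale factor sqrt(2/3) on all three axes is exactly what makes the projection "isometric".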
1-point perspective: a single centre of projection is located along one of the 3 co-ordinate axes; the other two centres are at infinity => horizontal and vertical lines remain horizontal and vertical.

Matrix for 1-point perspective projection where the centre of projection, as shown above, lies on the z axis (derivation Anand pp. 165-167):

[M_{PERz}] = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & -1/z_{cp} \\ 0 & 0 & 0 & 1 \end{bmatrix}

If the centre of projection lies on the x axis:

[M_{PERx}] = \begin{bmatrix} 1 & 0 & 0 & -1/x_{cp} \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

If the centre of projection lies on the y axis:

[M_{PERy}] = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & -1/y_{cp} \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

2- or 3-point perspective projection can be obtained by concatenation of these 1-point perspective matrices, e.g. projection of a point onto the xy plane:

[M_{PERxy}] = [M_{PERx}][M_{PERy}] = \begin{bmatrix} 1 & 0 & 0 & -1/x_{cp} \\ 0 & 1 & 0 & -1/y_{cp} \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
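A quick numerical check of the z-axis matrix. This is a sketch (per_z and project are my own helper names); the key step is that, after multiplying, the homogeneous result must be divided through by w:

```python
import numpy as np

def per_z(z_cp):
    # 1-point perspective, centre of projection at (0, 0, z_cp), projecting onto z = 0
    M = np.eye(4)
    M[2, 2] = 0.0
    M[2, 3] = -1.0 / z_cp
    return M

def project(point, M):
    # Apply M to the homogeneous point, then divide through by w
    q = np.append(point, 1.0) @ M
    return q[:3] / q[3]

# A point halfway to the centre of projection appears at double the x, y scale
print(project([1.0, 1.0, 5.0], per_z(10.0)))   # -> [2. 2. 0.]
# A point already on the z = 0 plane is unchanged
print(project([1.0, 1.0, 0.0], per_z(10.0)))   # -> [1. 1. 0.]
```

The divide-by-w step is where the perspective foreshortening actually happens: w = 1 - z/z_cp, so objects closer to the centre of projection are magnified.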
As with isometric projection, it is common to apply translations and rotations to objects prior to applying the above perspective projections, in order to provide a good viewing angle (or to allow the user to control the viewing position / camera angle).

Example application (Visual Basic)

3.3 Hidden Line/Surface Removal

Removing lines or surfaces not visible to the viewer.

May operate in:
- object space (accurate), or
- image space (fast)

Common algorithms:

1. "Back-face culling" (simple object space technique)
- fast
- limited to convex objects
- may be used prior to other, better (but slower) techniques
- operates by comparing the orientation of complete polygons with the view point or centre of projection, and removing those that are facing backwards
- Source code (VB4) available on the CT404 webpage

2. "Priority fill" or "Painter's algorithm" (object space technique)
- calculates an ordered list of objects, allowing those further away to be rendered first
- the fastest (and least accurate) way of sorting faces is to use their average depths

3. "Z Buffer algorithm" (image space technique)
- often implemented at hardware level
- uses a Z (depth) array of the same pixel dimensions as the graphics window, which is computed in parallel to the image bitmap itself
- each value in the Z buffer represents the depth of the pixel at that point
- each pixel is only drawn into the bitmap if it is closer than the current Z buffer value at that point
- as each pixel is drawn, the Z buffer is updated

4. Binary Space Partitioning (BSP) Trees
- good for drawing 3D scenes where the positions of objects are fixed and the user's viewing coordinate changes (flight simulators being a classic example)
- BSP trees insert all objects into a binary tree; each object partitions (or splits) space, determining what is on the left and what is on the right of it
- pre-computed
- they allow, for any viewpoint, the extraction of the correct depth-ordering of objects for rendering
- unlike the painter's algorithm, errors do not occur when one object cuts through another: when BSP trees are built, objects are split into sections to avoid this

The following two techniques are for fully rendering and shading a scene, not just hidden surface removal:

1. "Ray Tracing"
- brute force! => slow but accurate
- traces each pixel as a "beam of light" back into the scene and determines the object it first hits
- can incorporate light sources, shadows, depth fog, reflected light, transparency, etc.

2. Radiosity
- computation begins at light sources (rather than at the viewpoint)
- each patch of light is traced into the scene and its interaction with surfaces is calculated; reflected light from those surfaces is calculated also
- the advantage is that radiosity is view-independent, therefore much of the work can be pre-calculated, i.e. computed off-line prior to the real-time application rendering the model
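The per-pixel test at the heart of the Z buffer algorithm can be sketched as follows. This is a toy sketch; the window size, the names, and the "smaller depth = nearer" convention are my own assumptions:

```python
import numpy as np

W, H = 4, 4
zbuf = np.full((H, W), np.inf)        # depth of the nearest pixel drawn so far
image = np.zeros((H, W), dtype=int)   # 0 = background colour

def plot(x, y, depth, colour):
    # Draw the pixel only if it is closer than the current Z buffer value,
    # then update the Z buffer at that point
    if depth < zbuf[y, x]:
        zbuf[y, x] = depth
        image[y, x] = colour

plot(1, 1, depth=5.0, colour=2)   # far pixel drawn first
plot(1, 1, depth=2.0, colour=7)   # nearer pixel overwrites it
plot(1, 1, depth=9.0, colour=3)   # farther pixel is rejected
print(image[1, 1], zbuf[1, 1])    # -> 7 2.0
```

Because the test is purely per-pixel, polygons can be submitted in any order, which is exactly why the technique suits a hardware implementation.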
Radiosity is a classic example of the type of pre-computation that is often performed in real-time 3D applications, and will work perfectly well provided lights/surfaces aren't moving.

3.4 Shading

The colour at any pixel is determined by:
- characteristics (including colour) of the surface itself
- information about the light sources (ambient, directional parallel or point, spot) and their positions relative to the surface
- diffuse and specular reflections

Standard (increasingly complex) shading algorithms:

1. Lambert (flat) shading: calculates and applies directly the shade of each surface
- may look unrealistic because the direction of light illuminating a texture map will be constant, and not the same as that illuminating the surface
- not good when viewed closely

2. Gouraud (smooth) shading (1971): calculates the shade at each vertex, and interpolates (smooths) these across surfaces

3. Phong (normal-interpolating) shading (1975): calculates the normal at each vertex, and interpolates these across the surfaces; the shade at each pixel is calculated from its surface normal

3.5 High-Speed Algorithmic Approaches to Realism

Texture Mapping: maps a raster image onto a surface; affects its shading (but it remains smooth)

Mipmapping: at least two (and probably four) textures of progressively lower resolution are used for a surface, and the graphics API uses the pixels from one of these depending on the distance and orientation of the surface as it is rendered

Bump Mapping: simulates the displacement of a surface's points slightly up or down, by modifying the surface normal according to the corresponding value in the bitmap. Much simpler than actually modelling the geometry of such a complex surface, yet nearly as effective

Normal Mapping: a more advanced version of bump mapping. While bump mapping uses greyscale values, normal mapping uses the three colour channels (RGB) for the three normal axes (XYZ), allowing displacement to be simulated in any direction
Normal maps/bump maps are produced algorithmically by cross-referencing a high-polygon-count model with the low-polygon-count version that will be used for real-time rendering - this satisfies the polygon budget.

Shaders: programs that are executed on the video hardware, detailing at a low level the procedure for manipulating the vertices or pixels on a surface.

Vertex shaders
- manipulate vertex data values on a 3D plane via mathematical operations on an object's vertices
- affect various properties of a vertex (colour, lighting, etc.) but most noticeably its orientation and position
- this allows dynamic (rather than pre-rendered) animation effects, e.g. clothing, hair, etc.

Pixel shaders
- operate at the level of the discretely viewable pixel, defining flexible and fast dynamic operations to apply to the colour at that point, allowing dynamic lighting and material effects
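As an illustration of the kind of per-pixel colour computation a pixel shader performs, here is the diffuse (Lambert) term from section 3.4, written in NumPy rather than a real shading language. A sketch; the function name and the form of the ambient term are my own:

```python
import numpy as np

def diffuse_shade(normal, light_dir, base_colour, ambient=0.1):
    # Lambert's cosine law: brightness proportional to max(0, N . L),
    # plus a small constant ambient contribution
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    l = np.asarray(light_dir, float)
    l = l / np.linalg.norm(l)
    intensity = ambient + (1.0 - ambient) * max(0.0, float(n @ l))
    return intensity * np.asarray(base_colour, float)

# Surface facing the light: full brightness; facing away: ambient only
print(diffuse_shade([0, 0, 1], [0, 0, 1], [1.0, 0.5, 0.2]))
print(diffuse_shade([0, 0, 1], [0, 0, -1], [1.0, 0.5, 0.2]))
```

In Phong shading this computation runs per pixel on an interpolated normal; a normal map simply replaces that interpolated normal with one fetched from the RGB texture before the same calculation is applied.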