Non-Photorealistic Volume Rendering Using Stippling Techniques
Aidong Lu (Purdue University), Christopher J. Morris (IBM TJ Watson Research Center), David S. Ebert (Purdue University), Penny Rheingans (University of Maryland-Baltimore County), Charles Hansen (University of Utah)

Figure 1: Engine block rendered with the volume stipple renderer. (a) is the default rendering (k_gc = 0.39, k_gs = 0.51, k_ge = 1.0); (b) shows boundary and silhouette enhancement, as well as silhouette curves (k_sc = 0.279, k_ss = 0.45, k_se = 1.11).

ABSTRACT

Simulating hand-drawn illustration techniques can succinctly express information in a manner that is communicative and informative. We present a framework for an interactive direct volume illustration system that simulates traditional stipple drawing. By combining the principles of artistic and scientific illustration, we explore several feature enhancement techniques to create effective, interactive visualizations of scientific and medical datasets. We also introduce a rendering mechanism that generates appropriate point lists at all resolutions during an automatic preprocess, and modifies rendering styles through different combinations of these feature enhancements. The new system is an effective way to interactively preview large, complex volume datasets in a concise, meaningful, and illustrative manner. Volume stippling is effective for many applications and provides a quick and efficient method to investigate volume models.
CR Categories: I.3.6 [Computer Graphics]: Methodology and Techniques - interaction techniques; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism - color, shading, and texture

Keywords: non-photorealistic rendering, volume rendering, scientific visualization, medical imaging

Contact author: [email protected]

1 INTRODUCTION

Throughout history, archaeologists, surgeons, engineers, and other researchers have sought to represent the important scientific data that they have gathered in a manner that could be understood by others. Illustrations have proven to be an effective means to achieve this goal because they have the capability to display information more efficiently by omitting unimportant details. This refinement of the data is accomplished by directing attention to relevant features or details, simplifying complex features, or exposing features that were formerly obscured [31]. This selective inclusion of detail enables illustrations to be more expressive than photographs. Indeed, many natural science and medical publications use scientific illustrations in place of photographs because of the illustrations' educational and communicative ability [10]. Illustrations take advantage of human visual acuity and can represent a large amount of information in a relatively succinct manner, as shown in Figures 2 and 3. Frequently, areas of greater emphasis are stippled to show detail, while peripheral areas are simply outlined to give context. The essential object elements (e.g., silhouettes, surface, and interior) can achieve a simple, clear, and meaningful image. By controlling the level of detail in this way, the viewer's attention can be directed to particular items in the image. This principle forms the basis of our stipple rendering system. Stipple drawing is a pen-and-ink illustration technique where dots are deliberately placed on a surface of contrasting color to obtain subtle shifts in value. Traditional stipple drawing is a time-consuming technique.
However, points have many attractive features in computer-generated images. Points are the minimum element of all objects and have connatural features that make them suitable for various rendering situations, whether surface or volume, concrete or implicit. Furthermore, points are the simplest and quickest element to render. By mimicking traditional stipple drawing, we can interactively visualize modestly sized simulations. When initially exploring an unknown volume dataset, this system provides an effective means to preview this data and highlight areas of interest in an illustrative fashion. The system creates artistic rendering effects and enhances the general understanding of complex structures. Once these structures are identified, the user may choose another volume rendering technique to generate a more detailed image of these structures. It is the use of non-photorealistic rendering (NPR) techniques that provides the stipple volume renderer with its interactivity and illustrative expressiveness. We refer to this type of NPR technique as illustrative rendering. NPR is a powerful tool for making comprehensible, yet simple images of complicated objects. Over the past decade, the field of NPR has developed numerous techniques to incorporate artistic effects into the rendering process [8, 27]. Various approaches have been used, including pen-and-ink illustration, silhouette edges, and stroke textures. Most of the research in the field of non-photorealistic illustration has concentrated on strokes, crosshatching, and pen-and-ink techniques [9, 14, 26], and most of the current research still concentrates on surface renderings, which require surface geometry. We chose to directly render volume datasets without any additional analysis of object or structure relationships within the volume.

Figure 2: Idol stipple drawing by George Robert Lewis [10].

Figure 3: Cicadidae stipple drawing by Gerald P. Hodge [10].
Volume stippling not only maintains all the advantages of NPR, but it also makes interactive rendering and illustration feasible on useful-sized datasets because of two attributes of points: fast rendering speed and innate simplicity. In our system, the volume resolution is initially adjusted for optimum stipple pattern rendering, and point lists are generated corresponding to the gradient magnitude and direction. Next, a rendering mechanism is introduced that incorporates several feature enhancements for scientific illustration. These enhancements include a new method for silhouette curve generation, varying point sizes, and stipple resolution adjustments based on distance, transparency, and lighting effects. By combining these feature enhancements, datasets can be rendered in different illustration styles.

2 RELATED WORK

Non-photorealistic rendering has been an active area of research, with most of the work concentrating on generating images in various traditional styles. The most common techniques are sketching [30], pen-and-ink illustration [6, 23, 24, 31], silhouette rendering [14, 19, 21, 25], and painterly rendering [1, 4]. Pen-and-ink rendering uses combinations of strokes (i.e., eyelashing and crosshatching) to create textures and shading within the image. Lines, curves, and strokes are the most popular among existing NPR techniques. Praun et al. [20] presented a real-time system for rendering of hatching strokes over arbitrary surfaces by building a lapped texture parameterization where the overlapping patches align to a curvature-based direction field. Ostromoukhov [17] illustrated some basic techniques for digital facial engraving by a set of black/white and color engravings, showing different features imitating traditional copperplate engraving. Hertzmann et al. [9] presented a method for creating an image with a hand-painted appearance from a photograph and an approach to designing styles of illustration.
They demonstrated a technique for painting with long, curved brush strokes, aligned to the normals of image gradients, to explore the expressive quality of complex brush strokes. Winkenbach and Salesin [32] presented algorithms and techniques for rendering parametric free-form surfaces in pen and ink. Deussen et al. [5] used points for computer-generated pen-and-ink illustrations in simulating the traditional stipple drawing style. Their method is to first render polygonal models into a continuous tone image and then convert these target images into a stipple representation. They can illustrate complex surfaces very vividly, but their method is for surface rendering, not volumes, and is too slow for interactive rendering. NPR techniques have only recently been applied to the visualization of three-dimensional (volume) data. Interrante developed a technique for using three-dimensional line integral convolution (LIC) with principal direction and curvature to effectively illustrate surfaces within a volume model [12]. Treavett and Chen also used illustration techniques to render surfaces within volumes [28, 29]. In both cases the results were compelling, but the techniques are surface-based visualization techniques, rather than direct volume rendering techniques that can show not only surfaces, but also important details of the entire volume.
Several NPR techniques have recently been applied to volume rendering. Ebert et al. [7] showed the power of illustrative rendering techniques for volume data; however, the renderer was based on ray-casting and too slow for interactivity or quick exploration of the data. Our current work builds upon enhancement concepts from that work, yet achieves interactive rendering rates. Furthermore, interactive volume rendering has garnered a significant amount of attention [15], and NPR methods have been applied to obtain interactive performance while producing effective volume renderings [2, 3]. Treavett et al. [29] implemented artistic procedures in various stages of the volume-rendering pipeline. Techniques such as brush strokes, control volumes, paint splatting, and others were integrated into their rendering system to produce a variety of artistic effects to convey tone, texture, and shape. However, tone, texture, and shape can be effectively conveyed by simply controlling the placement and density of points. Though not a primary focus in illustrative rendering systems until recently, points have been used as rendering primitives before. Levoy and Whitted [13] first demonstrated that points could be used as a display primitive and that a discrete array of points arbitrarily displaced in space, using a tabular array of perturbations, could be rendered as a continuous three-dimensional surface. Furthermore, they established that a wide class of geometrically defined objects, including both flat and curved surfaces, could be converted into points. The use of points as surface elements, or surfels, can produce premium quality images, which consist of highly complex shape and shade attributes, at interactive rates [18, 34]. The main difference between previous stipple and point rendering research and ours is that our system interactively renders volumes with points instead of just surfaces with points.
Within volume rendering, the closest related technique is splatting [33, 22], which traditionally does not incorporate the effectiveness of illustration techniques. In the remainder of this paper, we show the effectiveness of a simple point-based interactive volume stippling system and describe how a number of illustrative enhancement techniques can be utilized to quickly convey important volume characteristics for rapid previewing and investigation of volume models.

3 THE STIPPLE VOLUME RENDERER

The clarity and smoothness displayed by stippling, coupled with the speed of hardware point rendering, makes volume stippling an effective tool for illustrative rendering of volume data. As with all scientific and technical illustration, this system must perform two key tasks. First, it must determine what to show, primarily by identifying features of interest. Second, the system must carry out a method for how to show identified features. The stipple renderer consists of a point-based system architecture that behaves as a volume renderer and visually extracts various features of the data by selective enhancement of certain regions. Volume gradients are used to provide structure and feature information. With this gradient information, other features can be extracted, such as the boundary regions of the structure. We can illustrate these volumes using stippling techniques with a particular set of features in mind. To effectively generate renderings of volume datasets at interactive rates, the system has two main components: a preprocessor and an interactive point renderer with feature enhancement.

4 PREPROCESSING

Before interactive rendering begins, the preprocessor automatically generates an appropriate number of stipple points for each volume based on volume characteristics, including gradient properties and basic resolution requirements.
This preprocessing stage handles a number of calculations that do not depend on viewpoint or enhancement parameters, including the calculation of volume gradient direction and magnitude, the initial estimation of stipple density from volume resolution, and the generation of an initial point distribution. Furthermore, the voxel values and gradients are all normalized.

4.1 Gradient Processing

Gradient magnitude and direction are essential in feature enhancement techniques, especially when rendering CT data [11]. Some feature enhancements are significantly affected by the accuracy of the gradient direction, especially our light enhancement. Noisy volume data can create problems in generating correct gradient directions. Additionally, first and second derivative discontinuity in voxel gradients can affect the accuracy of feature enhancements. Initially, we tried a traditional central difference gradient method. However, Neumann et al. [16] have presented an improved gradient estimation method for volume data. Their method approximates the density function in a local neighborhood with a three-dimensional regression hyperplane whose four-dimensional error function is minimized to obtain the smoothed dataset and estimated gradient at the same time. We have implemented their method for better gradient estimation.

4.2 Initial Resolution Adjustment

When viewing an entire volume dataset, as the volume's size increases, each voxel's screen projection is reduced. Even if we assign at most one point per voxel, areas with high gradient magnitude still appear too dark. We use a simple box filter to initially adjust the volume resolution, so that the average projection of a voxel on the screen is at least 5x5 pixels. This adjustment improves the stippling pattern in the resulting images. We define N_max as the maximum number of stipples that each voxel can contain during the rendering process.
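The central-difference baseline mentioned in Section 4.1 can be sketched as follows. This is only the traditional method that the regression-based estimator of Neumann et al. improves upon, not the paper's implementation; the function name and the use of NumPy are our own.

```python
import numpy as np

def central_difference_gradients(vol):
    """Traditional central-difference gradient for a 3-D scalar volume.

    A baseline sketch only; the paper's system uses Neumann et al.'s
    4D linear-regression estimator instead for noisy data.
    """
    # np.gradient applies central differences in the interior and
    # one-sided differences at the volume boundary.
    gx, gy, gz = np.gradient(vol.astype(np.float64))
    grad = np.stack([gx, gy, gz], axis=-1)
    mag = np.linalg.norm(grad, axis=-1)
    # The preprocessor normalizes gradient magnitudes to [0, 1].
    if mag.max() > 0:
        mag = mag / mag.max()
    return grad, mag
```

On a volume that is a linear ramp, this returns a constant unit gradient along the ramp direction, as expected of central differences.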
After reading the dataset, we approximately calculate the maximum projection of a voxel on the screen and set the maximum number of points in the voxel to be equal to the number of pixels in the voxel projection. This reduces redundant point generation and increases the performance of the system. The following formula is used:

N_max = k_max * A_vol / (X_res * Y_res * Z_res)^(2/3)    (1)

where A_vol is the rendered area, k_max is a scaling factor, and the volume has resolution X_res x Y_res x Z_res. This is a heuristic formula because the scales of the X, Y, and Z axes are not ordinarily the same. Figure 4 shows several resolutions of a dataset. In each case, most of the details of the dataset are preserved.

4.3 Initial Point Generation

In several illustrative applications, units (such as points, particles, or strokes) are distributed evenly after random initialization. Due to constantly changing scenes, these individual units are redistributed in every frame. This process is very time-consuming and leads to problems with frame-to-frame coherence. To alleviate this problem, we approximate a Poisson disc distribution to initially position a maximum number of stipples. According to the statistics of the gradient magnitude distribution, we generate stipples near the gradient plane for the voxels whose gradient magnitude is above a user-specified threshold. We place stipples randomly, around the center of the voxel, between two planes, p1 and p2, that are parallel to the tangent plane, p0, and are separated by a distance chosen by the user. Next, we adjust the point locations in this subvolume so that they are relatively equally spaced, approximating the even
distribution of points in a stipple drawing. After this preprocessing step is performed and the stipple positions are determined, any processing that is subsequently performed (i.e., feature enhancements, motion) simply adjusts either the number of stipples that are drawn within each voxel or their respective size. We always select the stipples that will be drawn from a pre-generated list of stipples for each voxel, therefore maintaining frame-to-frame coherence for the points.

5 FEATURE ENHANCEMENTS

Scientific illustration produces images that are not only decorative, but also serve science [10]. Therefore, the rendering system must produce images accurately and with appropriately directed emphasis. To meet this requirement, we have explored several feature enhancements in an attempt to simulate traditional stipple illustrations. These feature enhancements are based on specific characteristics of a particular voxel: whether it is part of a boundary or silhouette, its spatial position in relation to both the entire volume and the entire scene, and its level of illumination due to a light source. In particular, silhouette curves (common in stipple drawings) are very useful for producing outlines of boundary regions and significant lines along interior boundaries and features. To enable the use of all of our feature enhancements, each voxel has the following information stored in a data structure:

- number of points
- gradient
- voxel scalar data value
- point size
- point list containing the x, y, z location of each point

Our feature enhancements, calculated on a per-frame basis, determine a point scaling factor according to the following sequence: boundary, silhouette, resolution, light, distance, and interior. For different datasets, we select a different combination of feature enhancements to achieve the best effect.
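Equation 1 and the per-voxel record just described can be sketched as follows. The function and field names are ours, not the paper's, and the record layout is only one plausible arrangement of the listed contents.

```python
from dataclasses import dataclass, field

def max_stipples_per_voxel(k_max, a_vol, x_res, y_res, z_res):
    """Equation 1: N_max = k_max * A_vol / (X_res * Y_res * Z_res)^(2/3)."""
    return k_max * a_vol / (x_res * y_res * z_res) ** (2.0 / 3.0)

@dataclass
class Voxel:
    """Per-voxel data listed in Section 5 (field names are illustrative)."""
    num_points: int      # point count drawn in the current frame
    gradient: tuple      # normalized (gx, gy, gz)
    value: float         # voxel scalar data value
    point_size: float
    # Pre-generated (x, y, z) stipple positions; each frame draws a
    # prefix of this list, preserving frame-to-frame coherence.
    points: list = field(default_factory=list)
```

For example, with a 512x512 rendered area, a 128^3 volume, and k_max = 1, Equation 1 gives N_max = 16 stipples per voxel.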
The basic formula for the point count per voxel, N_i, is the following:

N_i = N_max * T    (2)

where N_max is the maximum number of points a voxel can contain, calculated according to Equation 1 from the volume's projected screen resolution, and

T = T_b * T_s * T_r * T_d * T_t * T_l    (3)

T_b, T_s, T_r, T_d, T_t, and T_l are the boundary, silhouette, resolution, distance, interior transparency, and lighting factors, respectively, described in the following sections. Each factor is normalized in the range of zero to one. If some feature enhancements are not selected, the corresponding factors will not be included in Equation 3. Besides the point count of a voxel, the point size is also an important factor to increase visualization quality. The basic point size of a voxel is calculated by the following equation:

S_i = |∇V_i| * S_max    (4)

where S_max is a user-specified maximum point size. Voxels with larger gradient magnitude contain larger points, achieving the effect of smooth point size changes within the volume. The point size for each voxel is then scaled by the enhancement factors in a similar manner.

5.1 Boundaries and Silhouettes

In traditional stipple drawings, boundaries are usually represented by a high concentration of stipples that cluster on surfaces. In a scalar volume, the gradient of a voxel is a good indication of whether the voxel represents a boundary region. Boundary and silhouette enhancements are determined using volume illustration techniques [7]. The boundary enhancement factor T_b for a voxel at location P_i is determined from the original voxel scalar value, v_i, and the voxel value gradient magnitude |∇V_i| using the following formula:

T_b = v_i * (k_gc + k_gs * |∇V_i|^k_ge)    (5)

where k_gc controls the direct influence of the voxel value, k_gs indicates the maximum boundary enhancement, and k_ge controls the sharpness of the boundary enhancement. By making the stipple placement denser in voxels of high gradient, boundary features are selectively enhanced.
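Equations 2, 3, and 5 combine as in this sketch. The function names are ours, and only the factors the user enables enter the product:

```python
def boundary_factor(v_i, grad_mag, k_gc, k_gs, k_ge):
    """Equation 5: T_b = v_i * (k_gc + k_gs * |grad V_i|^k_ge)."""
    return v_i * (k_gc + k_gs * grad_mag ** k_ge)

def point_count(n_max, enabled_factors):
    """Equations 2-3: N_i = N_max * T, where T is the product of the
    enabled enhancement factors (each normalized to [0, 1])."""
    t = 1.0
    for f in enabled_factors:
        t *= f
    return int(n_max * t)
```

With the Figure 1 parameters (k_gc = 0.39, k_gs = 0.51, k_ge = 1.0), a voxel with v_i = 1.0 and unit gradient magnitude gets T_b = 0.9, so it keeps 90% of its maximum point count before the other factors apply.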
This feature extraction can be further improved with silhouette enhancement techniques. In manual stipple drawings, the highest concentration of stipples is usually in areas oriented orthogonally to the view plane, forming the silhouette edge. The silhouette enhancement factor T_s is constructed in a manner similar to the boundary enhancement factor. The parameters k_sc, k_ss, and k_se are controlled by the user to adjust each part's contribution, as shown in the following formula:

T_s = v_i * (k_sc + k_ss * (1 - (∇V_i · E))^k_se)    (6)

where E is the eye vector. Using the boundary and silhouette enhancement factors, we can effectively render the outline of the features in the volume. Therefore, points are dense on the outline of the objects, while sparse on other boundaries and in the interior. We render more points on and inside the volume boundaries and can, consequently, incorporate light and transparency information to more effectively enhance the rendering. Figure 1(b) shows the engine block volume rendered with stipples. Boundary areas, particularly those in silhouette, are enhanced, showing these features clearly.

5.2 Resolution

Traditionally, the number of stipples used to shade a given feature depends on the viewed resolution of that feature. By using a resolution factor, we can prevent stipple points from being too dense or sparse. The resolution factor adjusts the number of points in each voxel and produces the effect that the features become larger and clearer when the volume moves closer to the viewpoint. It also helps increase rendering performance by eliminating unnecessary rendering of distant points.
In order to implement resolution enhancement, we use the following formula:

T_r = ((D_near + d_i) / (D_near + d_0))^k_re    (7)

where D_near is the location of the near plane, d_i is the distance from the current location of the volume to the near plane, d_0 is the distance from the initial location of the volume to the near plane (we use this position as the reference point), and k_re controls the rate of change of this resolution enhancement. When k_re equals 0, there is no enhancement. The larger the value, the more pronounced the effect. The point size also varies with the change in resolution, so that point sizes are small when the resolution is low and large when the resolution is high. In Figure 4, the same model is viewed at three different distances, but the resulting stipple density is the same for each.
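Equation 7 in code form (a minimal sketch; the name and signature are ours):

```python
def resolution_factor(d_near, d_i, d_0, k_re):
    """Equation 7: T_r = ((D_near + d_i) / (D_near + d_0))^k_re.

    d_0 is the volume's initial distance to the near plane, used as
    the reference point; k_re = 0 disables the enhancement entirely.
    """
    return ((d_near + d_i) / (d_near + d_0)) ** k_re
```

At the reference position (d_i = d_0) the factor is exactly 1, so the initial view is unchanged regardless of k_re.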
Figure 4: Resolution enhancement of the leg volume dataset (k_re = 1.1).

5.3 Distance

In resolution enhancement, we use the location of the whole volume in the scene. The location of different volume elements within the overall volume presents a different challenge. Distance is an important factor that helps us understand the relationship between elements within the volume. As in traditional illustration, we can enhance depth perception by using the position of a voxel within the volume box to generate a factor that modifies both the point count and the size of the points. We use a linear equation with different powers k_de to express the function of the distance attenuation and generate this factor via the following equation:

T_d = 1 + (z / a)^k_de    (8)

where (-a, a) is the original depth range in the volume, z is the depth of the current voxel, and k_de controls the contribution of the distance attenuation for each voxel. k_de may change from negative to positive to enhance different parts of the volume. Figure 5 shows an example of distance attenuation. Comparing this image to that in Figure 1(b), it is clear that more distant parts of the volume contain fewer and smaller points. This is most apparent in the back, right section of the engine block.

5.4 Interior

Point rendering is transparent in nature, allowing background objects to show through foreground objects. By doing explicit interior enhancement, we exaggerate this effect, allowing us to observe more details inside the volume. Generally speaking, the point count of the outer volume elements should be smaller than that of the interior to allow the viewing of interior features. In our system, the number of points varies based on the gradient magnitude of a voxel, from the outer voxels to the center of the volume, thus achieving a better transparency effect:

T_t = |∇V_i|^k_te    (9)

where k_te controls the falloff of the transparency enhancement. With this equation, the voxels with lower gradient magnitude become more transparent.
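Equations 8 and 9 can be sketched as follows. The form of Equation 8 is our reading of the source (T_d = 1 + (z/a)^k_de); as written, a fractional k_de is only well-defined for z ≥ 0, and the names are ours:

```python
def distance_factor(z, a, k_de):
    """Equation 8 (as reconstructed): T_d = 1 + (z / a)^k_de,
    with z the voxel depth in the range (-a, a)."""
    return 1.0 + (z / a) ** k_de

def interior_factor(grad_mag, k_te):
    """Equation 9: T_t = |grad V_i|^k_te.

    Voxels with low gradient magnitude get a small factor, i.e.
    fewer stipples, and so appear more transparent."""
    return grad_mag ** k_te
```

With the Figure 6 setting k_te = 0.5, a voxel with normalized gradient magnitude 0.25 keeps half of its stipples.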
In addition, point sizes are adjusted by the transparency factor. In Figure 6, the density of the leaves changes from sparse to dense as the gradient magnitude changes from low to high. The structure of the tree is more evident with interior enhancement.

Figure 5: Distance attenuation of the engine block volume (k_de = 1.5).

5.5 Lighting

Achieving compelling lighting effects within the stipple renderer presents several challenges. First, when using noisy volumes, the gradients are not adequate for expressing the shading of some structures correctly. Second, because structures often overlap in the volume, it can still be difficult to identify to which structure a point belongs in complex scenes. Finally, the problem of capturing both the inner and outer surfaces at the same time, while their gradient directions are opposite, must be correctly handled. These issues can all significantly reduce the quality of the lighting effects. Therefore, when lighting the volume, only the front-oriented voxels (where the gradient direction is within ninety degrees of the eye direction) are rendered. The following equation is used to generate a factor to modify the point count of the voxels:

T_l = 1 - (L · ∇V_i)^k_le    (10)

where L is the light direction and k_le controls the contribution of the light. In Figure 7, light is projected from the upper right corner of the image, and smaller vessels have been removed with parameter settings to better highlight the shading of the main features. The main structures and their orientation are much clearer with the application of lighting effects.

5.6 Silhouette Curves

Manual stipple drawings frequently contain outlines and other curves which supplement the shading cues provided by the stipples. These silhouette curves are generally drawn at two places: the outline of the objects and obvious interior curves.
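The lighting factor of Equation 10 above can be sketched as follows. This is a minimal reading that assumes unit-length light and gradient vectors; the paper restricts it to front-oriented voxels but does not specify how negative dot products are handled, so that is left to the caller:

```python
def lighting_factor(light, grad, k_le):
    """Equation 10: T_l = 1 - (L . grad V_i)^k_le.

    Applied only to front-oriented voxels, i.e. those whose gradient
    direction is within ninety degrees of the eye direction.
    """
    dot = sum(l * g for l, g in zip(light, grad))
    return 1.0 - dot ** k_le
```

A voxel facing the light directly (dot product 1) gets factor 0 and sheds all its stipples, while a voxel oriented perpendicular to the light keeps them all, which is what produces the shading in Figure 7.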
Searching for potential silhouette curves in the vicinity of each voxel could easily create a performance bottleneck by requiring a search in, at least, the 3x3x3 subspace around each voxel. We have implemented this more exhaustive search, as well as an alternative technique using the Laplacian of Gaussian (LOG) operator as a volumetric edge detection technique. The LOG edge detection technique provides virtually identical results and simplifies the boundary computation, so it is much faster to calculate per frame. In a preprocess, we compute the LOG value
for each voxel; then, during rendering, we determine the silhouette voxels using the following criteria:

1. |∇V_i| * LOG(V_i) < Th_LOG
2. (E · ∇V_i) < Th_eye
3. |∇V_i| > Th_grad

where E is the eye (or view) vector, and Th_LOG, Th_eye, and Th_grad are user-controllable threshold values. To sketch silhouette curves, the voxels that satisfy the above conditions have a line segment drawn through the center of the voxel in the direction of ∇V_i x E. Silhouette curves can be rendered at 20 to 30 frames per second and significantly improve image quality. Figure 8 shows the effectiveness of silhouette curves in highlighting structures, especially the bones of the foot dataset.

Figure 6: Stipple rendering of the bonsai tree volume, without and with interior enhancement (k_te = 0.5).

Figure 8: Stipple rendering of the foot volume, without and with silhouette curves (Th_eye = 0.0, Th_grad = 0.2, Th_LOG = 0.12).

6 PERFORMANCE

We are able to interactively render reasonably-sized volume datasets using illustrative enhancement with our system on modern PCs. The total number of point primitives in a typical dataset ranges from 5,000 to 2,000,000, and the silhouette curves range from 1,000 to 300,000. Performance results of our stipple system are presented in Table 1. These running times were gathered from a dual-processor Intel Xeon 2.00 GHz computer with a GeForce 3 Ti 500 display card. The preprocessing time varies from seconds to a minute. The frame rates can be improved by further reducing cache exchange and floating point operations. Nonetheless, the measured frame rate does provide the user with a level of interactivity necessary for exploring and illustrating various regions of interest within the volume datasets. Through the use of sliders, the user is able to quickly adjust the parameters to select the desired feature enhancement and its appropriate level.
The user is able to rotate, translate, and zoom in or out of the volume while maintaining consistent shading. The system has very good temporal rendering coherence, with only very subtle temporal aliasing occurring during rotation near silhouette edges and illuminated boundaries as new points are added based on the silhouette and illumination enhancement factors. We have implemented a simple partial-opacity point rendering to fade the points, which alleviates this problem.

Dataset  | resolution | stipples | silhouettes | both
iron     | 64x64x     |          |             |
head     | 256x256x   |          |             |
engine   | 256x256x   |          |             |
leg      | 341x341x   |          |             |
lobster  | 301x324x   |          |             |
foot     | 256x256x   |          |             |
aneurysm | 256x256x   |          |             |
bonsai   | 256x256x   |          |             |

Table 1: Running times (frames per second) for separate rendering techniques.
Figure 7: Stipple rendering of the aneurysm volume, without and with lighting (k_le = 2.7).

Figure 9: Head volume with silhouette, boundary, and distance enhancement and silhouette curves (k_gc = 0.4, k_gs = 0.0, k_ge = 10.0, k_sc = 0.5, k_ss = 5.0, k_se = 1.0; Th_eye = 0.9, Th_grad = 0.2, Th_LOG = 0.22).

7 CONCLUSIONS AND FUTURE WORK

We have developed an interactive volumetric stippling system that combines the advantages of point-based rendering with the expressiveness of the stippling illustration style into an effective interactive volume illustration system, as can be seen in Figure 9. This system utilizes techniques from both hand-drawn illustration and volume rendering to create a powerful new environment in which to visualize and interact with volume data. Our system demonstrates that volumetric stippling effectively illustrates complex volume data in a simple, informative manner that is valuable, especially for initial volume investigation and data previewing. For these situations, the volume stipple renderer can be used to determine and illustrate regions of interest. These regions can then be highlighted as important areas when traditional rendering methods are later used for more detailed exploration and analysis. Initial feedback from medical researchers is very positive. They are enthusiastic about the usefulness of the system for generating images for medical education and teaching anatomy and its relation to mathematics and geometry to children. Many new capabilities have recently become available on modern graphics hardware that could significantly improve the performance of our system. Programmable vertex shaders can allow us to move many of our feature enhancements onto the graphics card. This is especially true for those that are view dependent. Preprocessed points can be stored as display lists or vertex arrays in the graphics card's memory, which avoids the expensive vertex download each time a frame is rendered.
Vertex programs can be used to evaluate the thresholds of feature enhancements by taking advantage of the fact that we are using vertices rather than polygons. Thresholding involves simple formulae and can be easily implemented in a vertex program. When a vertex falls below the enhancement threshold, its coordinates can be modified to a position off screen, effectively culling it. This culling technique is not possible, in general, for polygons since there is currently no connectivity information available in vertex programs. We plan to extend our work to improve the interactivity of the system and compare the performance to other NPR volume renderers to assess the effectiveness of using a point-based rendering system. Furthermore, we will continue to explore additional feature enhancement techniques. Though it was too soon to include the results in this paper, initial investigation into using color within our stipple rendering system has already begun. Additionally, it may be interesting to investigate the implementation of a stipple renderer using a texture-based volume rendering architecture which modulates the alpha values per pixel in the fragment shader portion of the pipeline.

8 ACKNOWLEDGEMENTS

This material is based upon work supported by the National Science Foundation under Grants: NSF ACI , NSF ACI , NSF IIS , NSF ACI , NSF MRI , NSF ACR , and the DOE VIEWS program.

REFERENCES

[1] E. Akelman. Implicit surface painting. In Proceedings of Implicit Surfaces Conference 1998, pages 63-68.
[2] B. Csébfalvi and M. Gröller. Interactive volume rendering based on a bubble model. In GI 2001, pages , June.
[3] B. Csébfalvi, L. Mroz, H. Hauser, A. König, and M. Gröller. Fast visualization of object contours by non-photorealistic volume rendering. Computer Graphics Forum, 20(3): , September.
[4] C. Curtis, S. Anderson, J. Seims, K. Fleischer, and D. Salesin. Computer-generated watercolor.
In Proceedings of SIGGRAPH 1997, Computer Graphics Proceedings, Annual Conference Series, pages , August.
[5] O. Deussen, S. Hiller, C. van Overveld, and T. Strothotte. Floating points: A method for computing stipple drawings. Computer Graphics Forum, 19(3), August.
[6] O. Deussen and T. Strothotte. Computer-generated pen-and-ink illustration of trees. In Proceedings of SIGGRAPH 2000, Computer Graphics Proceedings, Annual Conference Series, pages 13–18, July 2000.
[7] D. Ebert and P. Rheingans. Volume illustration: Non-photorealistic rendering of volume models. In IEEE Visualization 2000, pages , October.
[8] B. Gooch and A. Gooch. Non-Photorealistic Rendering. A. K. Peters.
[9] A. Hertzmann. Painterly rendering with curved brush strokes of multiple sizes. In Proceedings of SIGGRAPH 1998, Computer Graphics Proceedings, Annual Conference Series, pages , July.
[10] E. Hodges (editor). The Guild Handbook of Scientific Illustration. John Wiley and Sons.
[11] K. Höhne and R. Bernstein. Shading 3D-images from CT using gray level gradients. IEEE Transactions on Medical Imaging, 5(1):45–47, October.
[12] V. Interrante. Illustrating surface shape in volume data via principal direction-driven 3D line integral convolution. In Proceedings of SIGGRAPH 1997, Computer Graphics Proceedings, Annual Conference Series, pages , August.
[13] M. Levoy and T. Whitted. The use of points as a display primitive. Technical Report , University of North Carolina-Chapel Hill Computer Science Department, January.
[14] K. Ma and V. Interrante. Extracting feature lines from 3D unstructured grids. In IEEE Visualization 97, pages , November.
[15] L. Mroz and H. Hauser. RTVR - a flexible Java library for interactive volume rendering. In IEEE Visualization 2001, pages , San Diego, CA, October.
[16] L. Neumann, B. Csébfalvi, A. König, and E. Gröller. Gradient estimation in volume data using 4D linear regression. Computer Graphics Forum, 19(3): , August.
[17] V. Ostromoukhov. Digital facial engraving. In Proceedings of SIGGRAPH 1999, Computer Graphics Proceedings, Annual Conference Series, pages , August.
[18] H. Pfister, M. Zwicker, J. van Baar, and M. Gross. Surfels: Surface elements as rendering primitives. In Proceedings of ACM SIGGRAPH 2000, Computer Graphics Proceedings, Annual Conference Series, pages , July.
[19] M. Pop, C. Duncan, G. Barequet, M. Goodrich, W. Huang, and S. Kumar. Efficient perspective-accurate silhouette computation and applications.
In Proceedings of the Seventeenth Annual Symposium on Computational Geometry, pages 60–68.
[20] E. Praun, H. Hoppe, M. Webb, and A. Finkelstein. Real-time hatching. In Proceedings of ACM SIGGRAPH 2001, Computer Graphics Proceedings, Annual Conference Series, pages , August.
[21] R. Raskar and M. Cohen. Image precision silhouette edges. In 1999 ACM Symposium on Interactive 3D Graphics, pages , April.
[22] S. Rusinkiewicz and M. Levoy. QSplat: A multiresolution point rendering system for large meshes. In Proceedings of SIGGRAPH 2000, Computer Graphics Proceedings, Annual Conference Series, pages .
[23] M. Salisbury, C. Anderson, D. Lischinski, and D. H. Salesin. Scale-dependent reproduction of pen-and-ink illustrations. In Proceedings of SIGGRAPH 1996, Computer Graphics Proceedings, Annual Conference Series, pages , August.
[24] M. Salisbury, M. Wong, J. Hughes, and D. Salesin. Orientable textures for image-based pen-and-ink illustration. In Proceedings of SIGGRAPH 1997, Computer Graphics Proceedings, Annual Conference Series, pages , August.
[25] P. Sander, X. Gu, S. Gortler, H. Hoppe, and J. Snyder. Silhouette clipping. In Proceedings of SIGGRAPH 2000, Computer Graphics Proceedings, Annual Conference Series, pages , July.
[26] S. Strassmann. Hairy brushes. In Proceedings of SIGGRAPH 1986, Computer Graphics Proceedings, Annual Conference Series, pages , August.
[27] T. Strothotte and S. Schlechtweg. Non-Photorealistic Computer Graphics: Modeling, Rendering and Animation. Morgan Kaufmann, San Francisco, CA.
[28] S. Treavett and M. Chen. Pen-and-ink rendering in volume visualisation. In IEEE Visualization 2000, pages , October.
[29] S. Treavett, M. Chen, R. Satherley, and M. Jones. Volumes of expression: Artistic modelling and rendering of volume datasets. In Computer Graphics International 2001, pages , July.
[30] M. Visvalingam. Sketch-based evaluation of line filtering algorithms. In Proceedings of GI Science, Savannah, USA, October.
[31] G. Winkenbach and D. Salesin.
Computer-generated pen-and-ink illustration. In Proceedings of SIGGRAPH 1994, Computer Graphics Proceedings, Annual Conference Series, pages , July.
[32] G. Winkenbach and D. Salesin. Rendering parametric surfaces in pen and ink. In Proceedings of SIGGRAPH 1996, Computer Graphics Proceedings, Annual Conference Series, pages , August.
[33] M. Zwicker, H. Pfister, J. van Baar, and M. Gross. EWA volume splatting. In IEEE Visualization 2001, pages 29–36, San Diego, CA, October.
[34] M. Zwicker, H. Pfister, J. van Baar, and M. Gross. Surface splatting. In Proceedings of SIGGRAPH 2001, Computer Graphics Proceedings, Annual Conference Series, pages , August 2001.