INTERACTIVE GEOMETRIC SOUND PROPAGATION AND RENDERING
AUTHORS: MICAH TAYLOR, ANISH CHANDAK, LAKULISH ANTANI, DINESH MANOCHA

ABSTRACT: We describe a novel algorithm and system for sound propagation and rendering in virtual environments and media applications. Our approach uses geometric propagation techniques for fast computation of propagation paths from a source to a listener and takes into account specular reflections, diffuse reflections, and edge diffraction. In order to perform fast path computation, we use a unified ray-based representation to efficiently trace discrete rays as well as volumetric ray-frusta. Furthermore, our propagation algorithm scales well with the number of cores, and uses an interactive audio rendering technique to generate spatialized audio signals. The overall approach can render sound in dynamic scenes, allowing source, listener, and obstacle motion, and we show its performance on game-like and architectural environments. To the best of our knowledge, this is the first interactive sound rendering system that can perform plausible sound propagation and rendering in dynamic virtual environments.

DESCRIPTION OF TECHNIQUES UTILIZED:

INTRODUCTION

Realistic sound rendering can directly impact the realism perceived by users of interactive media applications. An accurate acoustic response for a virtual environment depends on the geometric representation of the environment. This response can convey important details about the environment, such as the location and motion of objects. The most common approach to sound rendering is a two-stage process:
- Sound propagation: the computation of impulse responses (IRs) that represent an acoustic space.
- Audio rendering: the generation of spatialized audio signals from the impulse responses and dry (anechoically recorded or synthetically generated) source signals.

Sound propagation from a source to a listener conveys information about the size of the space surrounding the sound source, and conveys the source position to the listener even when the source is not directly visible. This considerably improves the immersion in virtual environments. For instance, in a first-person shooter game, the cries of a monster or the steps of an approaching opponent can alert the player to danger. Sound propagation is also used for acoustic prototyping of computer games, architectural buildings, and urban scenes. Audio rendering also provides sound cues which give directional information about the 3D position of the sound source relative to a listener. For example, in a VR combat simulation it is critical to simulate the 3D sounds of machine guns, bombs, and missiles. Another application of 3D audio is user interface design, where sound cues are used to search for data on a multi-window screen.

Main Results: We present an algorithm and system for interactive sound rendering in complex and dynamic virtual environments. Our approach is based on high-frequency approximations of sound waves. Many such algorithms have been proposed for interactive sound propagation [6, 12, 23]. However, they are either limited to static virtual environments or can only handle propagation paths corresponding to specular reflections. In order to perform interactive sound rendering, we use methods that can be parallelized across many processors. Our propagation algorithms use a hybrid ray-based representation that traces discrete rays [14] and ray-frusta [18].
Discrete ray tracing is used for diffuse reflections, and frustum tracing is used to compute the propagation paths for specular reflections and edge diffraction. We fill in the late reverberation using statistical methods. We also describe an audio rendering pipeline combining specular reflections, diffuse reflections, diffraction, 3D sound, and late reverberation. Our interactive sound rendering system can handle models consisting of tens of thousands of scene primitives (e.g., triangles), as well as dynamic scenes with moving sound sources, listener, and scene objects. We can perform interactive sound propagation including specular reflections, diffuse reflections, and diffraction of up to 3 orders on a multi-core PC.

PREVIOUS WORK
In this section, we give a brief overview of prior work in acoustic simulation, focusing on interactive sound propagation and audio rendering.

SOUND PROPAGATION

Sound propagation deals with modeling how sound waves propagate through a medium. Effects such as reflections, transmission, and diffraction are the important components. Sound propagation algorithms can be classified into two approaches: numerical methods and geometric methods.

Numerical Methods: These methods [7, 21] solve the sound wave equation numerically to perform sound propagation. However, despite recent advances [22], these methods are too slow for interactive applications, and are limited to static scenes.

Geometric Methods: The most widely used methods for interactive sound propagation are based on geometric acoustics (GA). GA techniques represent acoustic waves as rays and can accurately model the early reflections (up to 3-6 orders). They compute propagation paths from a sound source to the listener. Specular reflections of sound are modeled with the image-source method [2]. Image-source methods recursively reflect the source point about the geometry in the scene to find specular reflection paths. Some methods compute exact visibility [11, 15], some use sampling [14], and some fall in between [6, 17]. There has also been work on complementing specular reflections with diffraction effects. Diffraction effects are very noticeable at corners, where diffraction causes the sound wave to propagate into regions that are not directly visible to the sound source. Diffraction effects have been used in several interactive simulations [4, 25, 27]. Another important effect that can be modeled with GA is diffuse reflection. Diffuse reflections have been shown to be important for modeling sound propagation [8] and can be modeled with ray-tracing-based methods [13]. The GA methods described thus far are used to render the early reflections.
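As a concrete illustration, the image-source construction for a single wall can be sketched as follows. The one-wall scene and the helper name `mirror_point` are our own illustrative choices, not the authors' implementation:

```python
import numpy as np

def mirror_point(p, plane_point, plane_normal):
    """Reflect point p about the plane defined by a point and a normal."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return p - 2.0 * np.dot(p - plane_point, n) * n

# Hypothetical scene: a single wall (the plane x = 0) near source and listener.
source = np.array([2.0, 1.0, 0.0])
listener = np.array([2.0, -1.0, 0.0])
wall_point = np.array([0.0, 0.0, 0.0])
wall_normal = np.array([1.0, 0.0, 0.0])

# First-order image source: mirror the source about the wall.
image = mirror_point(source, wall_point, wall_normal)

# The specular path length equals the straight-line distance from the
# image source to the listener.
path_length = np.linalg.norm(listener - image)
```

Higher-order paths repeat the mirroring recursively about each surface; a visibility check (does each segment actually hit its wall unoccluded?) is still required, which is where the exact, sampled, and hybrid methods cited above differ.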
The rest of the acoustic response, the late reverberation, must also be calculated. This is often done through statistical methods [10] or ray tracing [9].

AUDIO RENDERING

Audio rendering is the final step that generates audio for output over headphones or speakers. In the context of geometric sound propagation, it involves using the
propagation paths to create impulse responses which represent how an input sound is changed by the environment. Once the impulse response is known, it can be convolved with a dry input audio signal to produce the output audio. Rendering dynamic scenes is challenging, and there has been work on generating artifact-free audio rendering in such scenes [26, 28]. Introducing 3D cues in the final audio signal is an important effect, and requires convolution of an incoming sound wave with a Head-Related Impulse Response (HRIR) [16].

OVERVIEW

In this section, we give an overview of the wave effects our system simulates, and highlight the main components. The main components are further detailed in the following sections.

ACOUSTIC MODELING

All GA techniques deal with finding propagation paths between each source and the listener. The sound waves travel from a source (e.g., a speaker) and arrive at a listener (e.g., a user) by traveling along multiple propagation paths representing different sequences of reflections, diffractions, and refractions at the surfaces of the environment. Figure 1 shows an example of such paths. In this paper, we limit ourselves to reflection and diffraction paths. The overall effect of these propagation paths is to add reverberation (echoes) to the dry sound signal. Geometric propagation algorithms need to account for different wave effects that directly influence the response generated at the listener.

Figure 1: Example scene showing (left) specular, (middle) diffraction, and (right) diffuse propagation paths.

When a small, point-like sound source generates nondirectional sound, the pressure wave expands outward in a spherical shape. If the listener is a short distance from the source, the wave field eventually encounters the listener. Due to the spreading of the field, the amplitude at the listener is attenuated. The corresponding GA component is a direct path from the source to the listener.
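Concretely, each contribution path adds a delayed, distance-attenuated impulse to the IR, and the IR is then convolved with the dry signal. The following is a minimal sketch under stated assumptions (44.1 kHz sampling rate, 343 m/s speed of sound, a hypothetical path list, 1/r spherical spreading); the function name is ours:

```python
import numpy as np

def impulse_response(paths, fs=44100, c=343.0):
    """Build a pressure IR from (distance_m, gain) pairs: each path adds a
    delayed, distance-attenuated impulse (1/r spherical spreading)."""
    length = int(fs * max(d for d, _ in paths) / c) + 1
    ir = np.zeros(length)
    for dist, gain in paths:
        delay = int(round(fs * dist / c))      # propagation delay in samples
        ir[delay] += gain / max(dist, 1e-6)    # 1/r distance attenuation
    return ir

# Hypothetical contribution list: a direct path plus one weaker reflection.
paths = [(3.43, 1.0), (6.86, 0.5)]
ir = impulse_response(paths)

# Auralization: convolve a dry source signal with the IR.
dry = np.random.default_rng(0).standard_normal(4096)
wet = np.convolve(dry, ir)
```

A real renderer would use fractional delays and frequency-dependent material gains, but the delay-and-attenuate structure is the same.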
This path represents the sound field that is diminished by distance attenuation. As the sound field propagates, it is likely that the sound field will also encounter objects in the scene. These objects may interact with the waves. Large objects can reflect the field specularly, as a mirror does for light waves. Surfaces that have fine details or
roughness of the same order as the wavelength can diffusely reflect the sound wave. This means that the wave is not specularly reflected, but reflected in a scattered manner. These diffuse reflections complement the specular components [8]. Diffraction effects occur at the edges of objects and cause the sound field to be scattered around the edge. As a listener moves behind a corner, a shadow region is encountered where sound cannot directly reach. Diffraction effects provide a smooth transition as the listener moves into this region. The previous effects contribute to the early reflection components of the IR. As propagation continues, the amplitude of the sound slowly decays, forming the late reverberation components. Late reverberation is related to the scene size [10] and conveys an important sense of space.

SYSTEM COMPONENTS

Our system consists of three main processing steps, outlined in Figure 2. For details on the system components, refer to [24].

Figure 2: The main components of our sound pipeline: scene preprocessing; geometric propagation for specular, diffuse, and diffraction components; estimation of reverberation from the impulse response; and final audio rendering.

Preprocessing: Our system uses a unified ray representation for specular reflection, diffuse reflection, and diffraction path computations. Thus, as part of preprocessing, a bounding volume hierarchy is created for the scene. Recent work allows such hierarchies to be built in parallel on multi-core systems [19]. This hierarchy allows fast intersection tests and is updated when the objects in the scene move [20]. The edges of objects in the scene are also analyzed to determine appropriate edges for diffraction.

Interactive Sound Propagation: This stage computes the paths between the source and the listener. A volumetric frustum tracer is used to find the specular and edge diffraction paths. A stochastic ray tracer is used to compute the diffuse paths.
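A common way to realize such a stochastic diffuse bounce is cosine-weighted hemisphere sampling (a Lambertian scattering model). This sketch is illustrative, not the authors' exact sampler:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_diffuse(normal):
    """Cosine-weighted direction about a unit surface normal (Lambertian
    scattering, a common model for diffuse sound reflection)."""
    u1, u2 = rng.random(), rng.random()
    r, phi = np.sqrt(u1), 2.0 * np.pi * u2
    local = np.array([r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1)])
    # Build an orthonormal basis (t, b, normal) and rotate the local sample.
    helper = np.array([1.0, 0.0, 0.0]) if abs(normal[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t = np.cross(normal, helper); t /= np.linalg.norm(t)
    b = np.cross(normal, t)
    return local[0] * t + local[1] * b + local[2] * normal

n = np.array([0.0, 0.0, 1.0])
dirs = [sample_diffuse(n) for _ in range(1000)]
```

Every sampled direction is a unit vector in the hemisphere above the surface, with directions near the normal sampled more densely, matching the cosine term in Lambert's law.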
Audio Rendering: After the paths are computed, they need to be auralized. A statistical reverberation filter is estimated using the path data. Using the paths and the
reverberation filter as input, the waveform is attenuated by the auralization system. The resulting signal represents the acoustic response and is output to the system speakers.

INTERACTIVE SOUND PROPAGATION

In this section, we give an overview of our sound propagation algorithm. Propagation is the most expensive step in the overall sound rendering pipeline. This large computational cost arises from the calculation of the acoustic paths that sound takes as it is reflected or scattered by the objects in the scene. The direct sound contribution is easily modeled by casting a ray between the source and listener. If the path is not obstructed, there is a direct contribution from the source to the listener. The other propagation components are more expensive to compute, so we parallelize all possible ray operations across multiple cores.

FRUSTUM TRACING

We use volumetric frustum tracing [18] to calculate the specular paths between the source and listener. Each frustum is constructed of 4 bounding rays. From a point sound source, we cast many of these frustum primitives such that all the space around the source is covered. Each frustum is intersected with the scene primitives. The frustum is specularly reflected, and this gives rise to another frustum that is recursively propagated. It is also possible to create diffraction frusta [25] using the Uniform Theory of Diffraction (UTD). When reflecting a frustum off a triangle face, the triangle edges are checked to see whether they are marked as diffracting edges. If so, a new diffraction frustum is swept out in the shadow region behind the triangle. This frustum then propagates through the scene as normal. Reflection and diffraction continue until a specified order of recursion is reached. To increase the intersection accuracy, we adaptively subdivide each frustum into new frusta at each reflection or diffraction [6].
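One plausible form of this subdivision splits a frustum, represented by its 4 corner ray directions, into 4 child frusta by interpolating and renormalizing the corner rays. The representation and function name here are assumptions for illustration:

```python
import numpy as np

def subdivide_frustum(corners):
    """Split a frustum, given its 4 corner ray directions in quad order,
    into 4 sub-frusta by interpolating and renormalizing corner rays."""
    c = [np.asarray(v, float) for v in corners]
    def mid(a, b):
        m = a + b
        return m / np.linalg.norm(m)
    e01, e12, e23, e30 = mid(c[0], c[1]), mid(c[1], c[2]), mid(c[2], c[3]), mid(c[3], c[0])
    center = mid(mid(c[0], c[2]), mid(c[1], c[3]))
    return [
        [c[0], e01, center, e30],
        [e01, c[1], e12, center],
        [center, e12, c[2], e23],
        [e30, center, e23, c[3]],
    ]

# A quad frustum covering one face direction of a cube around the source.
quad = [np.array(v) / np.linalg.norm(np.array(v, float)) for v in
        ([1, 1, 1], [-1, 1, 1], [-1, -1, 1], [1, -1, 1])]
children = subdivide_frustum(quad)
```

Applied recursively where a frustum straddles geometry, this yields progressively tighter volumes, trading ray count for intersection accuracy.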
For any of these frusta, if the listener is contained within the volume, there must exist some sound path from the source to the listener. The full path is found by extrapolating through all parent frusta back to the source position. The path distance is used to calculate the time it takes for the sound to reach the listener, and the reflection and diffraction counts are used to attenuate the amplitude. Figure 3 (left) shows a visual overview of the steps in the frustum engine.
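The listener-containment check can be sketched with the frustum's 4 side planes, each spanned by the apex and two adjacent corner rays. This is an illustrative formulation, not the paper's exact test:

```python
import numpy as np

def frustum_contains(apex, corners, point):
    """True if point lies inside the frustum from apex bounded by 4 corner
    ray directions given in counter-clockwise order (seen from the apex)."""
    p = np.asarray(point, float) - np.asarray(apex, float)
    for i in range(4):
        a, b = corners[i], corners[(i + 1) % 4]
        inward = np.cross(a, b)   # side-plane normal; CCW order -> points inward
        if np.dot(inward, p) < 0.0:
            return False
    return True

apex = np.zeros(3)
corners = [np.array([1.0, 1.0, 1.0]), np.array([-1.0, 1.0, 1.0]),
           np.array([-1.0, -1.0, 1.0]), np.array([1.0, -1.0, 1.0])]
hit = frustum_contains(apex, corners, [0.0, 0.0, 2.0])
```

When the test succeeds, the contribution path is recovered by unfolding the chain of reflections stored with the frustum's ancestors back to the source.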
Figure 3: Unified ray engine: both (left) frustum tracing and (right) ray tracing share a similar rendering pipeline.

RAY TRACING

In order to compute sound reflected off diffuse materials, we use a stochastic ray tracer (Figure 3 (right)). Rays are traced out from the sound source in all directions. When a ray encounters a triangle, it is reflected and tracing continues. The reflection direction is determined by the surface material. The listener is modeled by a sphere that approximates the listener's head. As the rays propagate, we check for intersections with this sphere. If there is an intersection, the path distance and the surfaces encountered are recorded for the audio rendering step. All contribution paths are adjusted [9] based on the reflection materials encountered. The accumulated paths are then sent to the audio rendering stage for output.

REVERBERATION ESTIMATION
The propagation paths computed by the frustum tracer and stochastic ray tracer described above are used only for the early reflections that reach the listener. While they provide important perceptual cues for spatial localization of the source, capturing the late reflections (reverberation) contributes significantly to the perceived realism of the sound simulation. We use well-known statistical acoustics models to estimate the reverberant tail of the energy IR. The Eyring model [10] is one such model that describes the energy decay within a single room as a function of time. Given the energy IR computed using GA, we fit a curve to the data. From the curve, we can estimate the RT60, which is defined as the time required for the energy to decay by 60 dB (Figure 4). The RT60 value is used in the audio rendering step to generate late reverberation effects.

Figure 4: Extrapolating the IR to estimate late reverberation: The red curve is obtained from a least-squares fit (in log-space) of the energy IR computed by GA, and is used to add the reverberant tail to the IR.

AUDIO RENDERING

Audio rendering is the process of generating an audio signal which can be heard by a listener using headphones or speakers. In this section, we provide details on the real-time audio rendering pipeline implemented in our interactive sound propagation system. Our sound propagation algorithm generates a list of specular, diffuse, and diffracted paths from each source to the listener. These paths are accessed asynchronously by the audio rendering pipeline, as shown in Figure 5. The direction of the contribution paths
arriving at the listener is used to introduce 3D sound cues in the final audio. Additionally, since the source, listener, and scene objects can move dynamically, the path data sent to audio rendering may vary greatly from one propagation cycle to the next. Thus, our approach mitigates the occurrence of artifacts by various means. Our system also uses the previously described reverberation data to construct the appropriate sound filters.

Figure 5: An overview of the integration of the audio rendering system with the sound propagation engine. The propagation paths are input to the audio rendering system, which queries the buffer and performs convolution and 3D audio mixing. Crossfading and interpolation components smooth the final audio output signal.

INTEGRATION WITH SOUND PROPAGATION

Using the contribution paths from the propagation stage, an impulse response (IR) is created. These impulse responses store the acoustic response of the room, that is, the delay and attenuation effects that the room has on sound signals. The IRs are then convolved with the input audio to compute the output audio.

ISSUES WITH DYNAMIC SCENES

Our sound propagation system is general and can handle moving sources, a moving listener, and dynamic geometric primitives. Due to this motion, the propagation paths, and thus the computed IRs, can change dramatically. Therefore, we impose physical restrictions on the motion of the sources, listener, and geometric primitives to produce artifact-free audio rendering. To further mitigate the effects of the changing paths, we use the current and previous IR data to crossfade when outputting a changing audio signal.

3D SOUND RENDERING

In a typical sound simulation, many sound waves reach the listener from different directions. These waves diffract around the listener's head and provide cues regarding
the direction of the incoming wave. This diffraction effect can be encoded in a Head-Related Impulse Response (HRIR) [1]. Thus, to produce a realistic 3D sound rendering effect, each incoming path to the listener can be convolved with an HRIR. However, for large numbers of contributions this computation can quickly become expensive, and it may not be possible to perform audio rendering in real time. Thus, only the direct and first-order reflection paths are used in the 3D audio output.

ADDING LATE REVERBERATION

To add late reverberation effects, we use an artificial reverberation filter, which can add late decay effects to a sound signal. The previously calculated RT60 value is used to set the decay of the filter. This approach provides a simple, efficient way of complementing the computed IRs with late reverberation effects.

PERFORMANCE

Our system makes use of several levels of parallel algorithms to accelerate the computation. Ray tracing is known to be a highly parallelizable algorithm, and our system uses multiple threads to take advantage of multi-core computers. Our results show that frustum tracing is also highly parallelizable (Figure 6). Additionally, frustum tracing uses vector instructions to perform operations on a frustum's bounding rays in parallel. Using these optimizations, our system achieves interactive performance on common multi-core PCs.

Figure 6: Parallel scaling: frustum tracing scales nearly linearly with the number of cores on large scenes (Theater: 54 triangles; Game: 14k; Cathedral: 71k).

In this section, we highlight the performance of our system. We show the parallel nature of our algorithms and highlight each subsystem's performance on a varying set of scenes. The details of the scenes and system performance are presented in Table 1, and the scenes are shown in Figure 7. For performance timings, we ran on a multi-core Intel Xeon processor-based system; the number of cores varies
per component. Only the propagation components are parallelized across multiple cores, as the reverberation and rendering components have low time cost.

Table 1: Scene complexity and propagation performance (times in ms, frusta, and path counts) for the Room (6k triangles), Conference (282k), Sibenik (71k), and Sponza (66k) scenes, measured for specular + diffraction at 3 orders, specular + diffraction at 1 order, and diffuse reflection at 3 orders.

While the ease of parallelizing ray tracing is well known, the parallel behavior of frustum tracing is less studied. When testing frustum tracing on large multi-core PCs, we have recorded nearly linear speedup with additional processing cores, as shown in Figure 6. As the Theater scene shows, even very small workloads scale well to over 10 cores.

Figure 7: Test scenes used (left to right): Room, Conference, Sibenik, and Sponza.

Specular and Diffraction: We generate two separate IRs using frustum tracing. One IR includes only the first-order specular and diffraction contributions. Since these paths are fast to compute, we devote one core to this task. The other IR includes the contributions for 3 orders of reflection and 2 orders of diffraction. This is done using 7 cores. The performance details for both simulation cycles are described in Table 1.

Diffuse tracing: Our diffuse tracer stochastically samples the scene space during propagation. As such, the rays are largely incoherent and it is difficult to use ray packets. Nonetheless, even when tracing individual rays, our system can render at interactive rates, as shown in the performance table. The timings are for 200k rays with 3 reflections using 7 cores.

QUALITY AND LIMITATIONS
While the underlying algorithms are based on the physical properties of high-frequency acoustic waves, there are limitations to our methods. As such, we discuss the limitations of each component in our system. We also note the benefits that such an approach offers over simpler audio rendering systems.

LIMITATIONS

Our algorithm has several limitations. Its accuracy is limited by that of the underlying GA algorithms; in practice, GA is only accurate for higher frequencies. Moreover, the accuracy of our frustum-tracing reflection and diffraction varies as a function of the maximum subdivision. Our diffraction formulation is based on the UTD and assumes that the edge lengths are significantly larger than the wavelength. Frustum-tracing-based diffraction is also limited in the types of diffraction paths that can be found. Our approach for computing the diffuse IR is subject to statistical error [9] that must be overcome with dense sampling. In terms of audio rendering, we impose physical restrictions on the motion of the source, listener, and scene objects to generate artifact-free rendering.

BENEFITS

Interactive audio simulations used in current applications are often very simple, relying on precomputed reverberation effects and arbitrary attenuations. In our approach, the delays and attenuations for both reflection and diffraction are based on physical approximations. This allows the resulting system to generate the acoustic responses that are expected given the scene materials and layout. In addition to calculating physically based attenuations and delays, our method also provides accurate acoustic spatialization. Simple binaural rendering often uses only the direct path, which may not be valid. With our approach, the reflection and diffraction path directions are included. Consider a situation where the sound source is hidden from the listener's view (Figure 8).
In this case, without reflection and diffraction, the directional component of the sound field appears to pass through the occluder. However, propagation paths generated by our system arrive at the listener with a physically accurate directional component.
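While the system described above convolves each path with a measured HRIR, the role of a path's arrival direction can be illustrated with a far simpler binaural cue: the interaural time difference (ITD) predicted by the classic Woodworth spherical-head model. The head radius and axis convention below are our assumptions:

```python
import numpy as np

def woodworth_itd(direction, head_radius=0.0875, c=343.0):
    """Interaural time difference (seconds) for a far-field arrival
    direction, using the Woodworth spherical-head model:
    ITD = (r/c) * (theta + sin(theta)), theta = lateral angle."""
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    # Lateral angle from the median plane, with the interaural axis
    # along +x (listener facing +y).
    theta = np.arcsin(np.clip(d[0], -1.0, 1.0))
    return (head_radius / c) * (theta + np.sin(theta))

# A path arriving from the listener's right (+x) reaches the far (left)
# ear later; a frontal arrival gives zero ITD.
itd_side = woodworth_itd([1.0, 0.0, 0.0])
itd_front = woodworth_itd([0.0, 1.0, 0.0])
```

Feeding the direction of the last segment of each reflection or diffraction path into such a cue (or into a full HRIR lookup) is what lets the listener localize an occluded source correctly.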
Figure 8: Path direction from sound source S to listener L: (left) the direct path is physically impossible, but (middle) diffraction and (right) reflection paths direct the listener as physically expected.

CONCLUSION

We have presented an interactive sound rendering system for dynamic virtual environments. Our system uses GA methods to compute the propagation paths. By using an underlying ray-based representation, we compute specular reflections, diffuse reflections, and edge diffraction in parallel on multi-core systems. We also use statistical late reverberation estimation techniques and present an interactive audio rendering algorithm for dynamic virtual environments. We believe our system is the first to generate plausible interactive sound rendering in complex, dynamic virtual environments. There are many areas to explore in the future. We are adapting our system to perform object-accurate specular reflections [5] and more advanced edge diffraction [3].

ACKNOWLEDGMENTS

This research is supported in part by ARO Contract W911NF , NSF award , DARPA/RDECOM Contracts N C-0043 and WR91CRB-08-C-0137, Intel, and Microsoft.

REFERENCES:

1) V. Algazi, R. Duda, and D. Thompson. The CIPIC HRTF Database. In IEEE ASSP Workshop on Applications of Signal Processing to Audio and Acoustics, 2001.
2) J. B. Allen and D. A. Berkley. Image method for efficiently simulating small-room acoustics. The Journal of the Acoustical Society of America, 65(4), April 1979.

3) L. Antani, A. Chandak, M. Taylor, and D. Manocha. Fast geometric sound propagation with finite-edge diffraction. Technical report, Department of Computer Science, University of North Carolina at Chapel Hill.

4) F. Antonacci, M. Foco, A. Sarti, and S. Tubaro. Fast modeling of acoustic reflections and diffraction in complex environments using visibility diagrams. In Proceedings of the 12th European Signal Processing Conference (EUSIPCO '04), September 2004.

5) A. Chandak, L. Antani, M. Taylor, and D. Manocha. FastV: From-point visibility culling on complex models. In Eurographics Symposium on Rendering, 2009.

6) A. Chandak, C. Lauterbach, M. Taylor, Z. Ren, and D. Manocha. AD-Frustum: Adaptive Frustum Tracing for Interactive Sound Propagation. IEEE Transactions on Visualization and Computer Graphics, 14(6), Nov.-Dec. 2008.

7) R. Ciskowski and C. Brebbia. Boundary Element Methods in Acoustics. Computational Mechanics Publications and Elsevier Applied Science.

8) B.-I. Dalenbäck, M. Kleiner, and P. Svensson. A Macroscopic View of Diffuse Reflection. Journal of the Audio Engineering Society (JAES), 42(10), October 1994.

9) J. J. Embrechts. Broad spectrum diffusion model for room acoustics ray-tracing algorithms. The Journal of the Acoustical Society of America, 107(4), 2000.

10) C. F. Eyring. Reverberation time in "dead" rooms. The Journal of the Acoustical Society of America, 1(2A), January 1930.

11) T. Funkhouser, I. Carlbom, G. Elko, G. Pingali, M. Sondhi, and J. West. A beam tracing approach to acoustic modeling for interactive virtual environments. In Proc. of ACM SIGGRAPH, pages 21-32, 1998.

12) T. Funkhouser, N. Tsingos, and J.-M. Jot. Survey of Methods for Modeling Sound Propagation in Interactive Virtual Environment Systems. Presence and Teleoperation.

13) B. Kapralos, M. Jenkin, and E. Milios. Acoustic Modeling Utilizing an Acoustic Version of Phonon Mapping. In Proc. of IEEE Workshop on HAVE.

14) A. Krokstad, S. Strom, and S. Sorsdal. Calculating the acoustical room response by the use of a ray tracing technique. Journal of Sound and Vibration, 8(1), July 1968.

15) S. Laine, S. Siltanen, T. Lokki, and L. Savioja. Accelerated beam tracing algorithm. Applied Acoustics, 70(1), 2009.

16) V. Larcher, O. Warusfel, J.-M. Jot, and J. Guyard. Study and comparison of efficient methods for 3-D audio spatialization based on linear decomposition of HRTF data. In Audio Engineering Society 108th Convention preprints, preprint no. 5097, January 2000.

17) C. Lauterbach, A. Chandak, and D. Manocha. Adaptive sampling for frustum-based sound propagation in complex and dynamic environments. In Proceedings of the 19th International Congress on Acoustics, 2007.

18) C. Lauterbach, A. Chandak, and D. Manocha. Interactive sound rendering in complex and dynamic scenes using frustum tracing. IEEE Transactions on Visualization and Computer Graphics, 13(6), Nov.-Dec. 2007.

19) C. Lauterbach, M. Garland, S. Sengupta, D. Luebke, and D. Manocha. Fast BVH construction on GPUs. In Proc. Eurographics 2009.

20) C. Lauterbach, S. Yoon, D. Tuft, and D. Manocha. RT-DEFORM: Interactive Ray Tracing of Dynamic Scenes using BVHs. IEEE Symposium on Interactive Ray Tracing, 2006.

21) P. Monk. Finite Element Methods for Maxwell's Equations. Oxford University Press.

22) N. Raghuvanshi, N. Galoppo, and M. C. Lin. Accelerated wave-based acoustics simulation. In ACM Solid and Physical Modeling Symposium, 2008.

23) S. Siltanen, T. Lokki, S. Kiminki, and L. Savioja. The room acoustic rendering equation. The Journal of the Acoustical Society of America, 122(3), September 2007.

24) M. Taylor, A. Chandak, L. Antani, and D. Manocha. RESound: interactive sound rendering for dynamic virtual environments. In MM '09: Proceedings of the seventeenth ACM international conference on Multimedia, New York, NY, USA, 2009. ACM.

25) M. Taylor, A. Chandak, Z. Ren, C. Lauterbach, and D. Manocha. Fast Edge-Diffraction for Sound Propagation in Complex Virtual Environments. In EAA Auralization Symposium, Espoo, Finland, June 2009.

26) N. Tsingos. A versatile software architecture for virtual audio simulations. In International Conference on Auditory Display (ICAD), Espoo, Finland, 2001.

27) N. Tsingos, T. Funkhouser, A. Ngan, and I. Carlbom. Modeling acoustics in virtual environments using the uniform theory of diffraction. In Proc. of ACM SIGGRAPH, 2001.

28) E. Wenzel, J. Miller, and J. Abel. A software-based system for interactive spatial sound synthesis. In International Conference on Auditory Display (ICAD), Atlanta, GA, April 2000.

FURTHER RESOURCES:

- Sound Synthesis and Propagation:
- RESound Project:
- RESound work on YouTube:
Effects of microphone arrangement on the accuracy of a spherical microphone array (SENZI) in acquiring high-definition 3D sound space information
November 1, 21 1:3 1 Effects of microphone arrangement on the accuracy of a spherical microphone array (SENZI) in acquiring high-definition 3D sound space information Shuichi Sakamoto 1, Jun ichi Kodama
VIRTUAL SPACE 4D PROJECT: THE ROOM ACOUSTICAL TOOL. J. Keränen, P. Larm, V. Hongisto
VIRTUAL SPACE 4D PROJECT: THE ROOM ACOUSTICAL TOOL David Oliva University of Turku Department of Physics FIN-20014 Turku, Finland. [email protected] J. Keränen, P. Larm, V. Hongisto Finnish Institute
Computer Applications in Textile Engineering. Computer Applications in Textile Engineering
3. Computer Graphics Sungmin Kim http://latam.jnu.ac.kr Computer Graphics Definition Introduction Research field related to the activities that includes graphics as input and output Importance Interactive
Image Processing and Computer Graphics. Rendering Pipeline. Matthias Teschner. Computer Science Department University of Freiburg
Image Processing and Computer Graphics Rendering Pipeline Matthias Teschner Computer Science Department University of Freiburg Outline introduction rendering pipeline vertex processing primitive processing
A Short Introduction to Computer Graphics
A Short Introduction to Computer Graphics Frédo Durand MIT Laboratory for Computer Science 1 Introduction Chapter I: Basics Although computer graphics is a vast field that encompasses almost any graphical
Faculty of Computer Science Computer Graphics Group. Final Diploma Examination
Faculty of Computer Science Computer Graphics Group Final Diploma Examination Communication Mechanisms for Parallel, Adaptive Level-of-Detail in VR Simulations Author: Tino Schwarze Advisors: Prof. Dr.
SGRT: A Mobile GPU Architecture for Real-Time Ray Tracing
SGRT: A Mobile GPU Architecture for Real-Time Ray Tracing Won-Jong Lee 1, Youngsam Shin 1, Jaedon Lee 1, Jin-Woo Kim 2, Jae-Ho Nah 3, Seokyoon Jung 1, Shihwa Lee 1, Hyun-Sang Park 4, Tack-Don Han 2 1 SAMSUNG
Sensor Modeling for a Walking Robot Simulation. 1 Introduction
Sensor Modeling for a Walking Robot Simulation L. France, A. Girault, J-D. Gascuel, B. Espiau INRIA, Grenoble, FRANCE imagis, GRAVIR/IMAG, Grenoble, FRANCE Abstract This paper proposes models of short-range
A NEW METHOD OF STORAGE AND VISUALIZATION FOR MASSIVE POINT CLOUD DATASET
22nd CIPA Symposium, October 11-15, 2009, Kyoto, Japan A NEW METHOD OF STORAGE AND VISUALIZATION FOR MASSIVE POINT CLOUD DATASET Zhiqiang Du*, Qiaoxiong Li State Key Laboratory of Information Engineering
Distributed Area of Interest Management for Large-Scale Immersive Video Conferencing
2012 IEEE International Conference on Multimedia and Expo Workshops Distributed Area of Interest Management for Large-Scale Immersive Video Conferencing Pedram Pourashraf ICT Research Institute University
COMP175: Computer Graphics. Lecture 1 Introduction and Display Technologies
COMP175: Computer Graphics Lecture 1 Introduction and Display Technologies Course mechanics Number: COMP 175-01, Fall 2009 Meetings: TR 1:30-2:45pm Instructor: Sara Su ([email protected]) TA: Matt Menke
A mixed structural modeling approach to personalized binaural sound rendering
A mixed structural modeling approach to personalized binaural sound rendering Federico Avanzini Michele Geronazzo Simone Spagnol Sound and Music Computing Group Dept. of Information Engineering University
Introduction to acoustic imaging
Introduction to acoustic imaging Contents 1 Propagation of acoustic waves 3 1.1 Wave types.......................................... 3 1.2 Mathematical formulation.................................. 4 1.3
Computer-Generated Photorealistic Hair
Computer-Generated Photorealistic Hair Alice J. Lin Department of Computer Science, University of Kentucky, Lexington, KY 40506, USA [email protected] Abstract This paper presents an efficient method for
3D Audio and Acoustic Environment Modeling
3D Audio and Acoustic Environment Modeling William G. Gardner, Ph.D. Wave Arts, Inc. 99 Massachusetts Avenue, Suite 7 Arlington, MA 02474 [email protected] http://www.wavearts.com March 15, 1999 1. INTRODUCTION
The promise of ultrasonic phased arrays and the role of modeling in specifying systems
1 modeling in specifying systems ABSTRACT Guillaume Neau and Deborah Hopkins This article illustrates the advantages of phased-array systems and the value of modeling through several examples taken from
ROOM ACOUSTIC PREDICTION MODELLING
XXIII ENCONTRO DA SOCIEDADE BRASILEIRA DE ACÚSTICA SALVADOR-BA, 18 A 21 DE MAIO DE 2010 ROOM ACOUSTIC PREDICTION MODELLING RINDEL, Jens Holger Odeon A/S, Scion DTU, Diplomvej 381, DK-2800 Kgs. Lyngby,
3D Interactive Information Visualization: Guidelines from experience and analysis of applications
3D Interactive Information Visualization: Guidelines from experience and analysis of applications Richard Brath Visible Decisions Inc., 200 Front St. W. #2203, Toronto, Canada, [email protected] 1. EXPERT
PART 5D TECHNICAL AND OPERATING CHARACTERISTICS OF MOBILE-SATELLITE SERVICES RECOMMENDATION ITU-R M.1188
Rec. ITU-R M.1188 1 PART 5D TECHNICAL AND OPERATING CHARACTERISTICS OF MOBILE-SATELLITE SERVICES Rec. ITU-R M.1188 RECOMMENDATION ITU-R M.1188 IMPACT OF PROPAGATION ON THE DESIGN OF NON-GSO MOBILE-SATELLITE
AURA - Analysis Utility for Room Acoustics Merging EASE for sound reinforcement systems and CAESAR for room acoustics
AURA - Analysis Utility for Room Acoustics Merging EASE for sound reinforcement systems and CAESAR for room acoustics Wolfgang Ahnert, S. Feistel, Bruce C. Olson (ADA), Oliver Schmitz, and Michael Vorländer
Doppler. Doppler. Doppler shift. Doppler Frequency. Doppler shift. Doppler shift. Chapter 19
Doppler Doppler Chapter 19 A moving train with a trumpet player holding the same tone for a very long time travels from your left to your right. The tone changes relative the motion of you (receiver) and
Stream Processing on GPUs Using Distributed Multimedia Middleware
Stream Processing on GPUs Using Distributed Multimedia Middleware Michael Repplinger 1,2, and Philipp Slusallek 1,2 1 Computer Graphics Lab, Saarland University, Saarbrücken, Germany 2 German Research
8 Room and Auditorium Acoustics
8 Room and Auditorium Acoustics Acoustical properties of rooms and concert halls are mainly determined by the echo that depends on the reflection and absorption of sound by the walls. Diffraction of sound
Various Technics of Liquids and Solids Level Measurements. (Part 3)
(Part 3) In part one of this series of articles, level measurement using a floating system was discusses and the instruments were recommended for each application. In the second part of these articles,
GRAFICA - A COMPUTER GRAPHICS TEACHING ASSISTANT. Andreas Savva, George Ioannou, Vasso Stylianou, and George Portides, University of Nicosia Cyprus
ICICTE 2014 Proceedings 1 GRAFICA - A COMPUTER GRAPHICS TEACHING ASSISTANT Andreas Savva, George Ioannou, Vasso Stylianou, and George Portides, University of Nicosia Cyprus Abstract This paper presents
Aircraft cabin noise synthesis for noise subjective analysis
Aircraft cabin noise synthesis for noise subjective analysis Bruno Arantes Caldeira da Silva Instituto Tecnológico de Aeronáutica São José dos Campos - SP [email protected] Cristiane Aparecida Martins
High-Speed Thin Client Technology for Mobile Environment: Mobile RVEC
High-Speed Thin Client Technology for Mobile Environment: Mobile RVEC Masahiro Matsuda Kazuki Matsui Yuichi Sato Hiroaki Kameyama Thin client systems on smart devices have been attracting interest from
Parallel Simplification of Large Meshes on PC Clusters
Parallel Simplification of Large Meshes on PC Clusters Hua Xiong, Xiaohong Jiang, Yaping Zhang, Jiaoying Shi State Key Lab of CAD&CG, College of Computer Science Zhejiang University Hangzhou, China April
Antennas & Propagation. CS 6710 Spring 2010 Rajmohan Rajaraman
Antennas & Propagation CS 6710 Spring 2010 Rajmohan Rajaraman Introduction An antenna is an electrical conductor or system of conductors o Transmission - radiates electromagnetic energy into space o Reception
Grid Computing for Artificial Intelligence
Grid Computing for Artificial Intelligence J.M.P. van Waveren May 25th 2007 2007, Id Software, Inc. Abstract To show intelligent behavior in a First Person Shooter (FPS) game an Artificial Intelligence
A wave lab inside a coaxial cable
INSTITUTE OF PHYSICS PUBLISHING Eur. J. Phys. 25 (2004) 581 591 EUROPEAN JOURNAL OF PHYSICS PII: S0143-0807(04)76273-X A wave lab inside a coaxial cable JoãoMSerra,MiguelCBrito,JMaiaAlves and A M Vallera
The Application of Land Use/ Land Cover (Clutter) Data to Wireless Communication System Design
Technology White Paper The Application of Land Use/ Land Cover (Clutter) Data to Wireless Communication System Design The Power of Planning 1 Harry Anderson, Ted Hicks, Jody Kirtner EDX Wireless, LLC Eugene,
I. Wireless Channel Modeling
I. Wireless Channel Modeling April 29, 2008 Qinghai Yang School of Telecom. Engineering [email protected] Qinghai Yang Wireless Communication Series 1 Contents Free space signal propagation Pass-Loss
List of Publications
A. Peer-reviewed scientific articles List of Publications Lauri Savioja, D.Sc.(Tech.) [1] L. Savioja and U. P. Svensson, Overview of geometrical room acoustic modeling techniques, The Journal of the Acoustical
SGRT: A Scalable Mobile GPU Architecture based on Ray Tracing
SGRT: A Scalable Mobile GPU Architecture based on Ray Tracing Won-Jong Lee, Shi-Hwa Lee, Jae-Ho Nah *, Jin-Woo Kim *, Youngsam Shin, Jaedon Lee, Seok-Yoon Jung SAIT, SAMSUNG Electronics, Yonsei Univ. *,
Time-domain Beamforming using 3D-microphone arrays
Time-domain Beamforming using 3D-microphone arrays Gunnar Heilmann a gfai tech Rudower Chaussee 30 12489 Berlin Germany Andy Meyer b and Dirk Döbler c GFaI Society for the promotion of applied computer
Doppler Effect Plug-in in Music Production and Engineering
, pp.287-292 http://dx.doi.org/10.14257/ijmue.2014.9.8.26 Doppler Effect Plug-in in Music Production and Engineering Yoemun Yun Department of Applied Music, Chungwoon University San 29, Namjang-ri, Hongseong,
PHOTON mapping is a practical approach for computing global illumination within complex
7 The Photon Mapping Method I get by with a little help from my friends. John Lennon, 1940 1980 PHOTON mapping is a practical approach for computing global illumination within complex environments. Much
AURALIZATION APPLYING THE PARAMETRIC ROOM ACOUSTIC MODELING TECHNIQUE - THE DIVA AURALIZATION SYSTEM
AURALIZATION APPLYING THE PARAMETRIC ROOM ACOUSTIC MODELING TECHNIQUE - THE DIVA AURALIZATION SYSTEM Lauri Savioja and Tapio Lokki Telecommunications Software and Multimedia Laboratory Helsinki University
ACCELERATING SELECT WHERE AND SELECT JOIN QUERIES ON A GPU
Computer Science 14 (2) 2013 http://dx.doi.org/10.7494/csci.2013.14.2.243 Marcin Pietroń Pawe l Russek Kazimierz Wiatr ACCELERATING SELECT WHERE AND SELECT JOIN QUERIES ON A GPU Abstract This paper presents
Introduction to Fourier Transform Infrared Spectrometry
Introduction to Fourier Transform Infrared Spectrometry What is FT-IR? I N T R O D U C T I O N FT-IR stands for Fourier Transform InfraRed, the preferred method of infrared spectroscopy. In infrared spectroscopy,
Section 5.0 : Horn Physics. By Martin J. King, 6/29/08 Copyright 2008 by Martin J. King. All Rights Reserved.
Section 5. : Horn Physics Section 5. : Horn Physics By Martin J. King, 6/29/8 Copyright 28 by Martin J. King. All Rights Reserved. Before discussing the design of a horn loaded loudspeaker system, it is
Go to contents 18 3D Visualization of Building Services in Virtual Environment
3D Visualization of Building Services in Virtual Environment GRÖHN, Matti Gröhn; MANTERE, Markku; SAVIOJA, Lauri; TAKALA, Tapio Telecommunications Software and Multimedia Laboratory Department of Computer
SPATIAL IMPULSE RESPONSE RENDERING: A TOOL FOR REPRODUCING ROOM ACOUSTICS FOR MULTI-CHANNEL LISTENING
SPATIAL IMPULSE RESPONSE RENDERING: A TOOL FOR REPRODUCING ROOM ACOUSTICS FOR MULTI-CHANNEL LISTENING VILLE PULKKI AND JUHA MERIMAA Laboratory of Acoustics and Audio Signal Processing, Helsinki University
Waves Sound and Light
Waves Sound and Light r2 c:\files\courses\1710\spr12\wavetrans.doc Ron Robertson The Nature of Waves Waves are a type of energy transmission that results from a periodic disturbance (vibration). They are
AUDIO POINTER V2 JOYSTICK & CAMERA IN MATLAB
AUDIO POINTER V2 JOYSTICK & CAMERA IN MATLAB F. Rund Department of Radioelectronics, Faculty of Electrical Engineering, Czech Technical University in Prague, Czech Republic Abstract A virtual sound source
Computers in Film Making
Computers in Film Making Snow White (1937) Computers in Film Making Slide 1 Snow White - Disney s Folly Moral: Original Budget $250,000 Production Cost $1,488,422 Frames 127,000 Production time 3.5 years
Monte Carlo Path Tracing
HELSINKI UNIVERSITY OF TECHNOLOGY 16.4.2002 Telecommunications Software and Multimedia Laboratory Tik-111.500 Seminar on Computer Graphics Spring 2002: Advanced Rendering Techniques Monte Carlo Path Tracing
An Instructional Aid System for Driving Schools Based on Visual Simulation
An Instructional Aid System for Driving Schools Based on Visual Simulation Salvador Bayarri, Rafael Garcia, Pedro Valero, Ignacio Pareja, Institute of Traffic and Road Safety (INTRAS), Marcos Fernandez
REAL-TIME IMAGE BASED LIGHTING FOR OUTDOOR AUGMENTED REALITY UNDER DYNAMICALLY CHANGING ILLUMINATION CONDITIONS
REAL-TIME IMAGE BASED LIGHTING FOR OUTDOOR AUGMENTED REALITY UNDER DYNAMICALLY CHANGING ILLUMINATION CONDITIONS Tommy Jensen, Mikkel S. Andersen, Claus B. Madsen Laboratory for Computer Vision and Media
Physical Science Study Guide Unit 7 Wave properties and behaviors, electromagnetic spectrum, Doppler Effect
Objectives: PS-7.1 Physical Science Study Guide Unit 7 Wave properties and behaviors, electromagnetic spectrum, Doppler Effect Illustrate ways that the energy of waves is transferred by interaction with
MUSC 1327 Audio Engineering I Syllabus Addendum McLennan Community College, Waco, TX
MUSC 1327 Audio Engineering I Syllabus Addendum McLennan Community College, Waco, TX Instructor Brian Konzelman Office PAC 124 Phone 299-8231 WHAT IS THIS COURSE? AUDIO ENGINEERING I is the first semester
Digital 3D Animation
Elizabethtown Area School District Digital 3D Animation Course Number: 753 Length of Course: 1 semester 18 weeks Grade Level: 11-12 Elective Total Clock Hours: 120 hours Length of Period: 80 minutes Date
High-fidelity electromagnetic modeling of large multi-scale naval structures
High-fidelity electromagnetic modeling of large multi-scale naval structures F. Vipiana, M. A. Francavilla, S. Arianos, and G. Vecchi (LACE), and Politecnico di Torino 1 Outline ISMB and Antenna/EMC Lab
Simplifying Opto- Mechanical Product Development: How a new product reduces cost, stress, and time to market
Simplifying Opto- Mechanical Product Development: How a new product reduces cost, stress, and time to market Innovation in optical design is exploding. From cell phone cameras to space telescopes, driverless
Advancements in High Frequency, High Resolution Acoustic Micro Imaging for Thin Silicon Applications
Advancements in High Frequency, High Resolution Acoustic Micro Imaging for Thin Silicon Applications Janet E. Semmens Sonoscan, Inc. 2149 E. Pratt Boulevard Elk Grove Village, IL 60007 USA Phone: (847)
Volumetric Meshes for Real Time Medical Simulations
Volumetric Meshes for Real Time Medical Simulations Matthias Mueller and Matthias Teschner Computer Graphics Laboratory ETH Zurich, Switzerland [email protected], http://graphics.ethz.ch/ Abstract.
Attenuation (amplitude of the wave loses strength thereby the signal power) Refraction Reflection Shadowing Scattering Diffraction
Wireless Physical Layer Q1. Is it possible to transmit a digital signal, e.g., coded as square wave as used inside a computer, using radio transmission without any loss? Why? It is not possible to transmit
GUI GRAPHICS AND USER INTERFACES. Welcome to GUI! Mechanics. Mihail Gaianu 26/02/2014 1
Welcome to GUI! Mechanics 26/02/2014 1 Requirements Info If you don t know C++, you CAN take this class additional time investment required early on GUI Java to C++ transition tutorial on course website
Software Audio Processor. Users Guide
Waves R360 Surround Reverb Software Audio Processor Users Guide Waves R360 software guide page 1 of 8 Introduction Introducing the Waves 360 Surround Reverb. This Software audio processor is dedicated
Synthetic Sensing: Proximity / Distance Sensors
Synthetic Sensing: Proximity / Distance Sensors MediaRobotics Lab, February 2010 Proximity detection is dependent on the object of interest. One size does not fit all For non-contact distance measurement,
Lecture Notes, CEng 477
Computer Graphics Hardware and Software Lecture Notes, CEng 477 What is Computer Graphics? Different things in different contexts: pictures, scenes that are generated by a computer. tools used to make
4.4 WAVE CHARACTERISTICS 4.5 WAVE PROPERTIES HW/Study Packet
4.4 WAVE CHARACTERISTICS 4.5 WAVE PROPERTIES HW/Study Packet Required: READ Hamper pp 115-134 SL/HL Supplemental: Cutnell and Johnson, pp 473-477, 507-513 Tsokos, pp 216-242 REMEMBER TO. Work through all
Encoded Phased Array Bridge Pin Inspection
Encoded Phased Array Bridge Pin Inspection James S. Doyle Baker Testing Services, Inc. 22 Reservoir Park Dr. Rockland, MA 02370 (781) 871-4458; fax (781) 871-0123; e-mail [email protected] Product
A Cognitive Approach to Vision for a Mobile Robot
A Cognitive Approach to Vision for a Mobile Robot D. Paul Benjamin Christopher Funk Pace University, 1 Pace Plaza, New York, New York 10038, 212-346-1012 [email protected] Damian Lyons Fordham University,
Rendering Area Sources D.A. Forsyth
Rendering Area Sources D.A. Forsyth Point source model is unphysical Because imagine source surrounded by big sphere, radius R small sphere, radius r each point on each sphere gets exactly the same brightness!
Proceedings of Meetings on Acoustics
Proceedings of Meetings on Acoustics Volume 19, 2013 http://acousticalsociety.org/ ICA 2013 Montreal Montreal, Canada 2-7 June 2013 Architectural Acoustics Session 5aAAa: Room Acoustics Computer Simulation
Waves - Transverse and Longitudinal Waves
Waves - Transverse and Longitudinal Waves wave may be defined as a periodic disturbance in a medium that carries energy from one point to another. ll waves require a source and a medium of propagation.
Omni Antenna vs. Directional Antenna
Omni Antenna vs. Directional Antenna Document ID: 82068 Contents Introduction Prerequisites Requirements Components Used Conventions Basic Definitions and Antenna Concepts Indoor Effects Omni Antenna Pros
EE4367 Telecom. Switching & Transmission. Prof. Murat Torlak
Path Loss Radio Wave Propagation The wireless radio channel puts fundamental limitations to the performance of wireless communications systems Radio channels are extremely random, and are not easily analyzed
Computer Graphics. Introduction. Computer graphics. What is computer graphics? Yung-Yu Chuang
Introduction Computer Graphics Instructor: Yung-Yu Chuang ( 莊 永 裕 ) E-mail: [email protected] Office: CSIE 527 Grading: a MatchMove project Computer Science ce & Information o Technolog og Yung-Yu Chuang
Hi everyone, my name is Michał Iwanicki. I m an engine programmer at Naughty Dog and this talk is entitled: Lighting technology of The Last of Us,
Hi everyone, my name is Michał Iwanicki. I m an engine programmer at Naughty Dog and this talk is entitled: Lighting technology of The Last of Us, but I should have called it old lightmaps new tricks 1
Extend Table Lens for High-Dimensional Data Visualization and Classification Mining
Extend Table Lens for High-Dimensional Data Visualization and Classification Mining CPSC 533c, Information Visualization Course Project, Term 2 2003 Fengdong Du [email protected] University of British Columbia
Optical modeling of finite element surface displacements using commercial software
Optical modeling of finite element surface displacements using commercial software Keith B. Doyle, Victor L. Genberg, Gregory J. Michels, Gary R. Bisson Sigmadyne, Inc. 803 West Avenue, Rochester, NY 14611
