Interactive Sound Rendering
Dinesh Manocha and Ming C. Lin
Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC

Abstract: Extending the frontier of visual computing, sound rendering utilizes auditory display to communicate information to a user and offers an alternative means of visualization. By harnessing the sense of hearing, sound rendering can further enhance a user's experience in a multimodal virtual world. In addition to immersive environments, auditory displays can provide a natural and intuitive human-computer interface for many desktop applications. In this paper, we give a brief overview of recent work at UNC Chapel Hill on fast algorithms for sound synthesis and sound propagation. These include physically based sound synthesis for rigid bodies and liquid sound generation, as well as numeric and geometric algorithms for sound propagation. We highlight their performance on different benchmarks and briefly discuss some problems for future research.

1. Introduction

Traditionally, the focus in interactive applications has been on generating realistic images. These developments have been well supported by the high growth rates and programmability of current graphics hardware. At the same time, it is also important to develop interactive algorithms for other forms of rendering, including sound and haptics. Sound rendering utilizes auditory display to communicate information to a user and offers an alternative means of visualization. By harnessing the sense of hearing, audio rendering can further enhance a user's experience in a multimodal virtual world [17], [31]. Ultimately, audio cues combined with visual rendering provide a far more immersive experience. In addition to immersive environments, auditory display can also provide a natural and intuitive human-computer interface for many desktop applications. Furthermore, audio interfaces are increasingly used to develop assistive technologies for the visually impaired.
Despite the significance of hearing as one of the dominant senses, auditory displays have not received as much attention as graphical rendering in the literature. The design of auditory displays involves audio hardware technology, the software rendering pipeline, modeling and simulation of acoustic spaces, signal processing, sonification techniques, perceptual evaluation, and application development. In this paper, we primarily focus on two compute-intensive components of auditory displays: sound synthesis and sound propagation. Sound synthesis deals with how sound is generated using physical principles [16], [55], [35], [40], [3], [34]. Sound propagation deals with computational modeling of acoustic spaces, which takes into account knowledge of the sound sources, listener locations, 3D models of the environments, and material absorption data to compute the impulse response (IR) of the space [19], [48]. These IRs are convolved with recorded or synthetically produced sound signals to generate spatialized sound effects, i.e. auralization. The driving impetus of recent research in sound rendering has mainly come from interactive applications, including computer gaming, training systems, desktop interfaces, education, scientific visualization, and computer-aided design. All these applications need capabilities for real-time sound synthesis and propagation. Realistic sound quality can directly impact the perceived realism and full immersion of players and users of these interactive applications. Furthermore, interactive modeling and simulation of acoustic spaces can significantly enhance numerous scientific and engineering applications [11]. For example, designers of man-made structures, such as urban layouts, habitats [54], and mechanical CAD structures [21], can benefit tremendously from interactive acoustic simulation technologies in terms of improved design and virtual prototyping.
The current market for acoustic simulation software for engineering design alone is estimated to be around $300M a year. The audible sonic frequencies for humans range from 20 Hz to 20,000 Hz, with wavelengths that fall exactly in the range of dimensions of common objects, i.e. a few centimeters to a few meters. Consequently, sound diffracts (bends) appreciably, especially at low frequencies. Due to its low speed, sound reflections are also easily perceptible in a room in terms of delays and echoes. The combination of sound diffraction and reflections makes interactive sound propagation a major computational challenge [48]. As a result, most interactive applications currently use sound effects based on fixed models of propagation or statically designed environment reverberation filters. In terms of synthesis, most interactive applications use pre-recorded sound clips to provide sounds corresponding to object interactions in a scene. Although this
approach has the advantage that the sounds are realistic and the generation process is quite fast, there are many physical effects that cannot be captured by such a technique. The resulting sound often lacks variation in tone and timbre and typically sounds repetitive. Physics-based sound synthesis can reproduce sound using physical principles, adding noticeable realism by introducing natural variations due to object interaction, but it poses substantial computational challenges for interactive applications.

Challenges in Sound Rendering: Current systems for sound synthesis and propagation cannot offer interactive performance in complex, dynamic scenes, or may not be able to synthesize different types of sounds. We need significant advances, and we also need to integrate the resulting algorithms with different applications. These include:

Sound Synthesis: Most existing synthesis approaches have focused on sound mixing or sound generation due to solids colliding in air. We need new algorithms for interactive sound synthesis in complex environments consisting of hundreds of objects, as well as for sound generated in a liquid medium and for coupled fluid-solid interaction.

Sound Propagation: The sensation of sound is due to small variations in air pressure. These variations are governed by the three-dimensional wave equation, a second-order linear partial differential equation, which relates the temporal and spatial derivatives of the pressure field [48]. We need clever domain decomposition techniques for low- and medium-frequency sources and novel geometric propagation algorithms for high-frequency sources. These algorithms can be further accelerated by using the capabilities of multi-core and many-core commodity processors for interactive computations.

In the rest of this paper, we describe some of our recent work on interactive sound synthesis and sound propagation.

2.
Real-time Sound Synthesis

Sound is produced by surface vibrations of an object under external forces. These vibrations travel through the surrounding medium. The pressure waves within the human audible range are sensed by the ear and yield the perception of sound. One may consider the sound synthesis pipeline to consist of two parts: the interaction model and the resonators. The forces from object interactions drive the resonators. The pipeline of our sound synthesis system is shown in Figure 1. The interaction handling module detects different types of interactions among the objects and converts them into excitation forces for the sound synthesis module, which is composed of modal synthesis [35], [40], vibration calculation, and the real-time audio engine. The sound synthesis module approximates the oscillating responses of surfaces when external forces are applied and generates sound from the approximate vibration.

Figure 1. System overview. The green boxes indicate the parts users can interact with, the blue boxes indicate the pre-processing parts, and the orange boxes are the runtime components.

Interactive Synthesis for Complex Scenes

Typically, the number of modes of an object with a few thousand vertices is in the range of a few thousand, and the basic synthesis procedure runs in real time [40]. But as the number of objects increases beyond two or three, the performance can degrade. In this case, our goal is to decrease the number of modes mixed while keeping the listener's perception from noticing the difference. Here we briefly discuss a few techniques that improve the performance and ensure that the method works well for interactive applications.
For more details, we refer the reader to [40].

Mode Truncation

The sound of a typical object being struck consists of a transient response composed of a blend of high frequencies, followed by a set of lower frequencies with low amplitude. The transient attack is essential to the quality of the sound, as it is perceived as the characteristic timbre of the object. The idea behind mode truncation is to stop mixing a mode as soon as its contribution to the total sound falls below a preset threshold, τ. Since mode truncation preserves the initial transient response of the object when τ is suitably set, the resulting degradation in quality is minimal.

Mode Compression

A perceptual study described in [45] showed that humans have a limited capacity to discriminate between frequencies that are close to each other. That is, if two close enough frequencies are played in succession, the average human listener is unable to tell whether they were two different frequencies or the same frequency played twice. [40] lists the frequency discrimination at different frequencies. For example, at 2 kHz the frequency discrimination is more than 1 Hz; that is, a human subject cannot tell apart 1999 Hz from 2000 Hz. Note that the frequency discrimination deteriorates
considerably as the frequencies increase. Therefore, while synthesizing a sound consisting of many frequencies, we can easily cheat the listener's perception by replacing multiple frequencies close to each other with a single one that represents all of them. This saves computation, because mixing one frequency is much cheaper than mixing many. This is the main idea behind mode compression, and it leads to large gains in performance.

Quality Scaling

Mode compression, as described above, is aimed at increasing the efficiency of sound synthesis for a single object. However, when the number of sounding objects in a scene grows beyond a few tens, increasing the efficiency of individual objects is not sufficient. Additionally, it is critical for games that the sound system have a graceful way of varying quality in response to variable time constraints. This flexibility is achieved by scaling the sound quality of the objects. The sound quality of an object is changed by controlling the number of modes being mixed to synthesize its sound. The main idea is that in most scenes with many sounding objects, the listener's attention is on the objects in the foreground, that is, the objects that contribute the most to the total sound in terms of amplitude. Therefore, if the foreground sounds are mixed at high quality while the background sounds are mixed at a relatively lower quality, the resulting degradation in perceived aural quality should be small. Quality scaling achieves this by assigning time quotas to all objects, prioritized by the loudness of the sound they are producing, and then scaling their quality to force them to complete within the assigned time quota. This simple technique performs quite well in practice. We have integrated our sound system with publicly available game engines, such as the Open Source 3D Graphics Engine (OGRE) and Crystal Space, and physics engines, such as NVIDIA PhysX and the Open Dynamics Engine (ODE).
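The three techniques above can be sketched in a few lines of numpy. This is a simplified illustration, not the actual implementation of [40]: the function name, the dB threshold, and the greedy merge rule are our own illustrative choices.

```python
import numpy as np

def synthesize_modes(freqs, amps, decays, sr=44100, dur=0.5,
                     trunc_db=-60.0, merge_hz=None):
    """Mix damped sinusoids with mode truncation and mode compression."""
    freqs = np.asarray(freqs, dtype=float)
    amps = np.asarray(amps, dtype=float)
    decays = np.asarray(decays, dtype=float)

    # Mode truncation: drop modes whose peak amplitude falls below the
    # threshold tau (here expressed in dB relative to the loudest mode).
    tau = amps.max() * 10.0 ** (trunc_db / 20.0)
    keep = amps >= tau
    freqs, amps, decays = freqs[keep], amps[keep], decays[keep]

    # Mode compression: replace clusters of frequencies closer than the
    # discrimination limit with a single representative mode.
    if merge_hz is not None:
        order = np.argsort(freqs)
        freqs, amps, decays = freqs[order], amps[order], decays[order]
        mf, ma, md = [freqs[0]], [amps[0]], [decays[0]]
        for f, a, d in zip(freqs[1:], amps[1:], decays[1:]):
            if f - mf[-1] < merge_hz:
                ma[-1] += a          # the representative carries the energy
            else:
                mf.append(f); ma.append(a); md.append(d)
        freqs, amps, decays = np.array(mf), np.array(ma), np.array(md)

    # Mix the surviving modes as exponentially damped sinusoids.
    t = np.arange(int(sr * dur)) / sr
    out = np.zeros_like(t)
    for f, a, d in zip(freqs, amps, decays):
        out += a * np.exp(-d * t) * np.sin(2.0 * np.pi * f * t)
    return out, len(freqs)
```

Quality scaling would sit on top of this: each object is given a time quota based on its loudness, and the number of modes it mixes is reduced until it fits the quota.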
We are able to achieve interactive sound synthesis for complex scenes consisting of hundreds of objects at steady frame rates of at least 100 frames per second (FPS), with the sound system taking approximately 10% of the CPU time [40].

Liquid Sound Generation

Most prior sound synthesis research has focused on simulating sounds from interactions between rigid and deformable bodies [16], [55], [35], [40], [3]. The limited previous work on simulating sounds in a liquid medium has focused either on heuristic methods decoupled from actual fluid dynamics [55] or on methods that provide no physical basis for the sounds generated from the liquids [22]. Although these methods eliminate the need to record sounds generated by actual liquids, which can be difficult and expensive, they still require a great deal of the user's time and energy, tweaking various parameters via a plethora of sliders to generate the desired sound effects, and they provide little ability to generate sounds from real-time fluid simulations. One of our goals is to generalize and expand physics-based sound synthesis techniques to all media, including liquids, and all types of interaction, including fluid-solid interaction, and thereby make the paradigm applicable to a much broader class of scenarios. Sound in liquid is primarily due to bubble formation. We refer the reader to Leighton's excellent text on acoustics due to bubble resonance for more detail [30]. Minnaert's formula, which derives the resonant frequency of a perfectly spherical bubble in an infinite volume of water from its radius, provides the physical basis for generating sounds in liquid. Recently, Moss et al. [34] developed the first automatic algorithm for synthesizing liquid sounds. It is based on the fact that the sound generated by a bubble is dominated by the resonant frequency, since all other frequencies rapidly die out.
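Minnaert's formula gives the resonant frequency of a spherical bubble of radius r as f0 = (1 / 2πr) · sqrt(3γp0/ρ), where γ is the gas's adiabatic index, p0 the ambient pressure, and ρ the liquid density. A minimal sketch (the default constants are textbook values for air bubbles in water; surface tension and vapor pressure are neglected):

```python
import math

def minnaert_frequency(radius, p0=101325.0, gamma=1.4, rho=998.0):
    """Resonant frequency (Hz) of a spherical air bubble of the given
    radius (m): f0 = (1 / (2*pi*r)) * sqrt(3*gamma*p0 / rho)."""
    return math.sqrt(3.0 * gamma * p0 / rho) / (2.0 * math.pi * radius)
```

For example, a 1 mm bubble resonates near 3.3 kHz, and halving the radius doubles the pitch, which is why small bubbles produce the characteristic high-frequency tinkle of dripping water.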
The resonant frequency depends on the restoring force, which results from the pressure in the bubble. The resulting approach takes into account both spherical and non-spherical bubbles. The bubble simulator is coupled with a fluid simulator that computes the liquid dynamics, which in turn drives the bubble resonance in the liquid medium. There are many challenging computational issues in this coupling. The first issue is what type of fluid simulation should be used. Three broad categories exist for fluid dynamics computation in visual simulation: grid-based methods, smoothed particle hydrodynamics (SPH), and shallow-water approximations; all three types of simulators are used, depending on the application. In practice, our overall approach can generate plausible liquid sounds for different scenarios, as described in [34].

3. Physics-Based Sound Propagation

The most common approach to acoustic simulation is a two-stage process: the computation of impulse responses (IRs) representing an acoustic space, and the convolution of the impulse responses with dry (anechoically recorded or synthetically generated) source signals. The IR computation relies on an accurate calculation for modeling the sound field. The sensation of sound is due to small variations in air pressure. Physics-based approaches solve the wave equation numerically to obtain the exact behavior of wave propagation in a domain. We assume that the input to our acoustic simulation is a 3D geometric model, along with the boundary conditions and the locations of the sound sources and the listener. The propagation of sound in the domain is governed by the acoustic wave equation:

    ∂²p/∂t² − c² ∇²p = F(x, t),    (1)

This equation captures the complete wave nature of sound, which is treated as a time-varying pressure field p(x, t) in space. The speed of sound is c = 340 m/s and F(x, t) is the forcing term corresponding to sound sources present in the scene. The operator ∇² = ∂²/∂x² + ∂²/∂y² + ∂²/∂z² is the Laplacian in 3D.

Related Work

Numerical methods for solving the wave equation rely on discretizing space and time. Depending on the underlying discretization, numerical approaches for acoustics may be roughly classified into: the Finite Element Method (FEM), Boundary Element Method (BEM), Digital Waveguide Mesh (DWM), Finite Difference Time Domain (FDTD), and Functional Transform Method (FTM) [36], [38]. FEM and BEM have traditionally been employed mainly for the steady-state frequency-domain response, as opposed to a full time-domain solution of the wave equation. FEM is applied mainly to interior problems and BEM to exterior scattering problems [26]. DWM approaches [56], on the other hand, are specific to the wave equation and use discrete waveguide elements, each of which is assumed to carry waves along its length in a single dimension [24], [43]. However, such approaches suffer from directional dispersion of sound: sound does not travel with the same speed in different directions on the spatial grid. The most commonly used method for time-domain wave simulations is the Finite Difference Time Domain (FDTD) method, owing to its simplicity and efficiency. FDTD has been an active area of research for more than a decade [4], [5] and was first proposed for electromagnetic simulations [46]. FDTD works on a uniform Cartesian grid and solves for the field values at each simulation cell over time. Initial investigations into FDTD were hampered by the lack of computational power and memory, limiting its application mostly to small models in 2D. Over the last decade, the possibility of applying FDTD to medium-sized models in 3D has been explored [42]. However, the computational and memory requirements of FDTD are beyond the capability of most desktop systems today [41], requiring days of computation on a small cluster even for medium-sized 3D models when simulating frequencies below 1 kHz.
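The FDTD scheme just described reduces, in one dimension, to a three-point leapfrog update. A minimal numpy sketch (the grid size, CFL factor, and fixed, pressure-release boundaries are illustrative choices, not those of any cited system):

```python
import numpy as np

def fdtd_wave_1d(n=200, steps=300, c=340.0, dx=0.05):
    """Second-order FDTD for the 1-D wave equation p_tt = c^2 p_xx,
    with p = 0 held at both ends (pressure-release boundaries)."""
    dt = 0.9 * dx / c                     # CFL condition: c*dt/dx <= 1
    lam2 = (c * dt / dx) ** 2
    p_prev = np.zeros(n)
    p = np.zeros(n)
    p[n // 2] = 1.0                       # impulse in the middle of the domain
    p_prev[:] = p                         # zero initial velocity
    for _ in range(steps):
        p_next = np.zeros(n)
        p_next[1:-1] = (2.0 * p[1:-1] - p_prev[1:-1]
                        + lam2 * (p[2:] - 2.0 * p[1:-1] + p[:-2]))
        p_prev, p = p, p_next
    return p
```

The same stencil extends directly to 3D, which is exactly where the memory and compute costs quoted above come from: the cell count grows with the cube of the spatial resolution, and the resolution itself must grow with the simulated frequency.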
Other methods are based on spectral techniques [6], a class of very high order numerical schemes in which the complete field is expressed in terms of global basis functions, which virtually eliminates spatial approximation errors.

Adaptive Spatial Decomposition

Physics-based approaches to sound propagation attempt to directly solve the acoustic wave equation numerically, which governs all linear sound propagation phenomena, and are thus capable of performing a full transient solution that correctly accounts for all wave phenomena, including diffraction, elegantly in one framework. However, as mentioned above, the computational costs of acoustic simulation can be prohibitively high. Recently, Raghuvanshi et al. [39] developed a novel approach based on adaptive rectangular decomposition of the free space in the domain to address this issue. Figure 2 gives an overview of this approach.

Figure 2. A schematic overview of the adaptive rectangular propagation algorithm of [Raghuvanshi et al. 2009]. The scene is first discretized with a Cartesian grid at a cell size prescribed by the highest simulated frequency. An adaptive rectangular decomposition is then performed on the scene and the partitions are distributed between different cores/processors. At each time step, the field in each partition is independently updated.

Many problems of interest for acoustic simulation necessarily have large empty spaces in their interior. Consider a simulation on a large scene like an auditorium, with an impulse triggered near the floor. With FDTD, this impulse would travel upwards and accumulate numerical dispersion error at each spatial cell it crosses. Typically, an impulse crosses thousands of cells just to travel from one end of even a small scene to the other, accumulating a lot of error in the process.
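The payoff of the rectangular partitions is that the wave equation has a closed-form modal solution inside a rigid rectangle, so the interior update is dispersion-free. A simplified 1-D numpy illustration (a dense cosine transform over a rigid-walled interval; the names and defaults are ours, and the actual method of [39] works in 3D with fast transforms):

```python
import numpy as np

def rectangle_exact_step(p, pdot, c=340.0, dx=0.05, dt=1e-3):
    """Advance the 1-D wave equation inside a rigid-walled interval by one
    step with no numerical dispersion: project onto the cosine eigenmodes
    of the box, rotate each modal oscillator analytically, project back."""
    n = p.size
    k = np.arange(n)
    L = n * dx
    # DCT-II style cosine basis evaluated at cell centres (rigid-wall modes).
    basis = np.cos(np.pi * np.outer(k, np.arange(n) + 0.5) / n)
    norm = np.full(n, n / 2.0)
    norm[0] = n
    m = basis @ p / norm                  # modal amplitudes
    mdot = basis @ pdot / norm            # modal velocities
    w = c * np.pi * k / L                 # angular frequency of each mode
    sin_wt = np.sin(w * dt)
    sinc = np.where(k > 0, sin_wt / np.where(k > 0, w, 1.0), dt)
    m_new = m * np.cos(w * dt) + mdot * sinc
    mdot_new = -m * w * sin_wt + mdot * np.cos(w * dt)
    return basis.T @ m_new, basis.T @ mdot_new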
However, if we fit a rectangle in the scene extending from the bottom to the top, the impulse would have no propagation error, since we can utilize the analytical solution within a rectangle. This observation is the main motivation for computing a rectangular decomposition. In order to achieve high spectral accuracy, both in time and space, in the interior of the rectangular partitions, Raghuvanshi et al. [39] use sixth-order spatial accuracy on the interfaces between partitions. Figure 3 shows the results of our acoustic simulations after 5, 20, 50, and 100 msec for sound propagation within a cathedral scene. From the perspective of parallelizing numerical solvers, Domain Decomposition Methods (DDM) have been widely studied in scientific computation [14], [37], [51]. The rectangular partitioning also provides a natural domain decomposition and is therefore a good candidate for parallelization on GPU clusters or many-core processors. We plan to further investigate the parallelization of numerical acoustic simulations on commodity hardware.

4. Geometric Sound Propagation

The exact solution of sound propagation reduces to solving the wave equation, as described above. However, numerical methods for solving the wave equation are mainly limited to static scenes. Given that the underlying complexity of numerical methods increases as the fourth power of the sound frequency, they are mostly useful for low- or medium-frequency
sound sources.

Figure 4. Beam vs. frustum tracing: our ray-frustum tracing approach compared to beam tracing for a simple example. (Left): beam tracing. (Right): ray-frustum tracing. The discrete sampling in our frustum-based approach underestimates the size of the exact reflection and transmission frustum for primitive 1 and overestimates it for primitives 2 and 3.

Figure 3. Sound simulation in a cathedral. The dimensions of this scene are 35m x 15m x 26m. We are able to perform numerical sound simulation on this complex scene on a desktop computer and pre-compute a 1-second-long impulse response in about 29 minutes, using less than 1 GB of memory. A commonly used approach that we compare against, Finite Difference Time Domain (FDTD), would take 1 week of computation and 25 GB of memory to achieve competitive accuracy on this scene. The auralization, or sound rendering at runtime, consists of convolution of the calculated impulse responses with arbitrary source signals, which can be computed efficiently.

The alternative methods for handling high-frequency sound sources are based on sound field decomposition or geometric propagation [19], [47], [48]. These algorithms model the propagation of sound based on rectilinear propagation of waves and can accurately model the early reflections (up to orders 4-6). The equivalent sources conceptually radiate in free space and are, in principle, supposed to fulfill the boundary condition of the acoustic space. The various equivalent sources are used to compute a list of elementary waves that arrive at the receiver.
This includes the source data, and propagation data consisting of frequency-dependent attenuation due to distance spreading, accumulated absorption loss at wall reflections, accumulated scattering loss at wall reflections, and air absorption.

Prior Work

Some of the earliest methods for geometric sound propagation were based on tracing sampled rays [27] or sound particles (phonons) [23], [2], [15] from a source to the listener. However, these methods are susceptible to aliasing problems. As a result, accurate methods for geometric sound propagation are based either on image source methods or on volumetric propagation. Image source methods are the easiest and most popular for computing specular reflections [1], [13]. However, they can only handle simple static scenes or very low orders of reflection at interactive rates [28]. Many hybrid combinations [12] of ray-based and image source methods have been proposed and used in commercial room acoustics prediction software (e.g. ODEON), but they are limited to rather simple static scenes and are not fast enough for interactive applications. The volumetric geometric methods trace pyramidal or volumetric beams to compute an accurate geometric solution. These include beam tracing, which has been used for specular reflection and edge diffraction in interactive sound propagation [20], [28], [52]. The results of beam tracing can also be used to guide sampling for path tracing [18]. However, these methods are limited to very simple CAD models, and it is rather difficult to make them work on complex CAD models due to robustness problems.

Ray-Frustum Tracing

Recently, we introduced a new volumetric approach, called ray-frustum tracing [29], [10]. This formulation performs volumetric frustum tracing and utilizes recent developments in interactive ray tracing to perform fast volumetric tracing on commodity processors. The overall approach is built on fast ray-tracing algorithms and represents each frustum using four corner rays.
Unlike prior beam tracing algorithms, it performs discrete clipping at the intersections, as shown in Figure 4. The resulting approach can easily exploit current hardware features in terms of data parallelism and thread-level parallelism, and thereby makes it possible to compute specular reflections and direct contributions at interactive rates. Moreover, it uses bounding volume hierarchies to accelerate the intersection tests and can also handle dynamic scenes. Recently, Chandak et al. [9] presented a practical and conservative
visibility culling algorithm and used it for exact sound propagation. It is significantly faster than prior beam-tracing algorithms and can accurately compute the propagation paths from the source to the listener. The initial frustum-tracing algorithm has been primarily used for handling specular reflections in complex, dynamic scenes. Recently, we combined the frustum-tracing formulation with the UTD formulation to perform edge diffraction in complex virtual environments [50]. This approach is primarily designed for scenes with very large (or infinite) edges, e.g. outdoor scenes with buildings. In terms of future work, it may be useful to combine this approach with the Biot-Tolstoy-Medwin (BTM) method [7], [25]. The BTM method is more accurate than UTD and can be formulated for use with finite edges [49]. However, it involves solving the BTM line integral over the edge length exposed to the sound field. As a result, BTM-based methods have a higher computational cost, and recent research has focused on non-interactive acceleration techniques [32] used in offline simulations [33], [44] and on edge subdivision techniques for interactive simulation of relatively simple scenes [49], [8], [7].

4.3. Acoustic Levels-of-detail

The 3D models used in most desktop and CAD applications are highly detailed. Such detail is often needed for visually realistic rendering of the scene. Acoustic simulation, on the contrary, requires much simpler 3D models with a lower level of detail. A detailed 3D model may increase the computational cost of the simulation without producing any perceptual difference in the rendered sound [53]. In order to accelerate the computations, a separate acoustic geometric representation may be needed for interactive frustum tracing.

Audio Rendering and 3D Audio

Audio rendering is the process of producing the final output audio signal and is an important building block of interactive sound rendering.
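In its simplest form, this rendering step builds an impulse response from the propagation paths and convolves it with the dry source signal. The toy sketch below does exactly that (the function name and path representation are ours; real systems additionally apply per-path HRIR filtering and frequency-dependent losses):

```python
import numpy as np

def auralize(dry, paths, sr=44100, c=340.0):
    """Build an IR from (path_length_m, gain) pairs and convolve it with
    the dry source signal. Each path becomes one tap, delayed by its
    travel time and attenuated by 1/r distance spreading."""
    delays = [int(round(sr * dist / c)) for dist, _ in paths]
    ir = np.zeros(max(delays) + 1)
    for (dist, gain), tap in zip(paths, delays):
        ir[tap] += gain / max(dist, 1e-6)   # 1/r attenuation per path
    return np.convolve(dry, ir)
```

For example, a single unobstructed path of 3.4 m at c = 340 m/s appears as one tap delayed by 10 ms.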
In order to render audio from geometric sound propagation, we first construct an impulse response (IR) for each source-listener pair from the sound paths that reach the listener from the source. A source-listener pair's IR is then convolved with the input audio signal of the source to produce the final output audio signal. In terms of extending this pipeline to complex, dynamic scenes, there are two main challenges. First, the IRs are constantly changing due to the dynamic nature of the application. Second, each incoming path to the listener has a direction relative to the listener and a path length associated with it; therefore, we treat it as a 3D sound source for realistic audio rendering. The computation of 3D sound involves convolution of the incoming sound with the head-related impulse response (HRIR) for every path. This can become computationally expensive and challenging when sound propagation is performed for a large number of sound sources. Some exciting work has recently been done on 3D audio rendering of large numbers of sound sources using sampling-based techniques [57].

5. Conclusions

In this paper, we gave a brief overview of our recent work on sound synthesis and propagation. This includes fast algorithms to generate sounds from rigid bodies and liquids. Moreover, we also gave an overview of new algorithms for numeric and geometric propagation, which are considerably faster than prior approaches. We also highlighted a few areas for future research.

Acknowledgements: The work described in this paper represents joint work of the authors with Anish Chandak, Christian Lauterbach, William Moss, Nikunj Raghuvanshi, Zhimin Ren, Micah Taylor and Hengchin Yeh at UNC Chapel Hill. This research was supported in part by ARO Contract W911NF , NSF awards and , DARPA/RDECOM Contracts N C-0043 and WR91CRB-08-C-0137, Intel, and Microsoft.

References

[1] Jont B. Allen and David A. Berkley.
Image method for efficiently simulating small-room acoustics. The Journal of the Acoustical Society of America, 65(4), April 1979.
[2] M. Bertram, E. Deines, J. Mohring, J. Jegorovs, and H. Hagen. Phonon tracing for auralization and visualization of sound. In Proceedings of IEEE Visualization.
[3] Nicolas Bonneel, George Drettakis, Nicolas Tsingos, Isabelle V. Delmon, and Doug James. Fast modal sounds with scalable frequency-domain synthesis, August.
[4] D. Botteldooren. Acoustical finite-difference time-domain simulation in a quasi-Cartesian grid. The Journal of the Acoustical Society of America, 95(5).
[5] D. Botteldooren. Finite-difference time-domain simulation of low-frequency room acoustic problems. Acoustical Society of America Journal, 98, December.
[6] John P. Boyd. Chebyshev and Fourier Spectral Methods: Second Revised Edition. Dover Publications, December.
[7] P. Calamia and U. P. Svensson. Fast time-domain edge-diffraction calculations for interactive acoustic simulations. EURASIP Journal on Advances in Signal Processing.
[8] P. T. Calamia and U. P. Svensson. Edge subdivision for fast diffraction calculations. In IEEE Trans. Vehicular Technology.
[9] A. Chandak, L. Antani, M. Taylor, and D. Manocha. FastV: From-point visibility culling on complex models. Eurographics Symposium on Rendering.
[10] Anish Chandak, Christian Lauterbach, Micah Taylor, Zhimin Ren, and Dinesh Manocha. AD-Frustum: Adaptive frustum tracing for interactive sound propagation. In Proc. IEEE Visualization.
[11] M. J. Crocker. Handbook of Acoustics. Wiley-IEEE, 1998.
Figure 5. The left image shows the frusta generated by our adaptive volume tracing algorithm inside the 2.5 million triangle model of Soda Hall. The simulation is run with up to four specular reflections, edge diffraction, and transmission. The right-hand images show the new frusta generated on the fly as we vary the dimensions of the room. The AD-FRUSTA algorithm can compute the propagation paths at five frames per second in this dynamic scene using seven cores on a high-end PC.

[12] B. Dalenbäck. Room acoustic prediction based on a unified treatment of diffuse and specular reflection. The Journal of the Acoustical Society of America, 100(2).
[13] Bengt-Inge Dalenbäck and M. Strömberg. Real time walkthrough auralization - the first year. Proceedings of the Institute of Acoustics, 28(2).
[14] Carlos A. de Moura. Parallel numerical methods for differential equations - a survey.
[15] E. Deines, M. Bertram, J. Mohring, J. Jegorovs, F. Michel, H. Hagen, and G.M. Nielson. Comparative visualization for wave-based and geometric acoustics. IEEE Transactions on Visualization and Computer Graphics, 12(5).
[16] Yoshinori Dobashi, Tsuyoshi Yamamoto, and Tomoyuki Nishita. Real-time rendering of aerodynamic sound using sound textures based on computational fluid dynamics. ACM Trans. Graph., 22(3), July.
[17] N.I. Durlach and A.S. Mavor. Virtual Reality Scientific and Technological Challenges. National Academy Press.
[18] T. Funkhouser, N. Tsingos, I. Carlbom, G. Elko, M. Sondhi, and J. West. Modeling sound reflection and diffraction in architectural environments with beam tracing. In Forum Acusticum, September.
[19] T. Funkhouser, N. Tsingos, and Jean-Marc Jot. Survey of Methods for Modeling Sound Propagation in Interactive Virtual Environment Systems. Presence and Teleoperation.
[20] Thomas Funkhouser, Ingrid Carlbom, Gary Elko, Gopal Pingali, Mohan Sondhi, and Jim West. A beam tracing approach to acoustic modeling for interactive virtual environments. In Proc.
of ACM SIGGRAPH, pages 21 32, [21] K. Genuit and W. Bray. A virtual car: Prediction of sound and vibration in an interactive simulation environment. SAE Transactions, 110: , [22] Masataka Imura, Yoshinobu Nakano, Yoshihiro Yasumuro, Yoshitsugu Manabe, and Kunihiro Chihara. Real-time generation of CG and sound of liquid with bubble. In ACM SIGGRAPH posters, page 97, San Diego, California, [23] B. Kapralos, M. Jenkin, and E. Milios. Acoustic Modeling Utilizing an Acoustic Version of Phonon Mapping. In Proc. of IEEE Workshop on HAVE, [24] Matti Karjalainen and Cumhur Erkut. Digital waveguides versus finite difference structures: equivalence and mixed modeling. EURASIP J. Appl. Signal Process., 2004(1): , January [25] Joseph B. Keller. Geometrical theory of diffraction. Journal of the Optical Society of America, 52(2): , [26] Mendel Kleiner, Bengt-Inge Dalenbäck, and Peter Svensson. Auralization - an overview. JAES, 41: , [27] A. Krokstad, S. Strom, and S. Sorsdal. Calculating the acoustical room response by the use of a ray tracing technique. Journal of Sound and Vibration, 8(1): , July [28] S. Laine, S. Siltanen, T. Lokki, and L. Savioja. Accelerated beam tracing algorithm. Applied Acoustic, 70(1): , 2009.
8 [29] Christian Lauterbach, Anish Chandak, and Dinesh Manocha. Interactive sound rendering in complex and dynamic scenes using frustum tracing. IEEE Transactions on Visualization and Computer Graphics, 13(6): , [30] T. G. Leighton. The acoustic bubble. Academic Press, [31] R. Bowen Loftin. Multisensory perception: Beyond the visual in visualization. Computing in Science and Engineering, 05(4):56 58, [32] T. Lokki, L. Savioja, R. Vaananen, J. Huopaniemi, and T. Takala. Creating interactive virtual auditory environments. IEEE Computer Graphics and Applications, 22(4):49 57, [33] T. Lokki, L. Savioja, R. Väänänen, J. Huopaniemi, and T. Takala. Creating interactive virtual auditory environments. IEEE Comput. Graph. Appl., 22(4):49 57, [34] W. Moss, H. Yeh, J. Hong, M. Lin, and D. Manocha. Sounding liquids: Automatic sound synthesis from fluid simulations. Technical report, University of North Carolina-Chapel Hill, [35] James F. O Brien, Chen Shen, and Christine M. Gatchalian. Synthesizing sounds from rigid-body simulations. In SCA 02: Proceedings of the 2002 ACM SIGGRAPH/Eurographics symposium on Computer animation, pages , New York, NY, USA, ACM. [36] R. Petrausch and S. Rabenstein. Simulation of room acoustics via block-based physical modeling with the functional transformation method. Applications of Signal Processing to Audio and Acoustics, IEEE Workshop on, pages , Oct [37] Alfio Quarteroni and Alberto Valli. Domain Decomposition Methods for Partial Differential Equations. Oxford University Press, [38] R. Rabenstein, S. Petrausch, A. Sarti, G. De Sanctis, C. Erkut, and M. Karjalainen. Block-based physical modeling for digital sound synthesis. Signal Processing Magazine, IEEE, 24(2):42 54, [39] N. Raghuvanshi, R. Narain, and M. C. Lin. Efficient and accurate sound propagation using adaptive rectangular propagation. IEEE Transactions on Visualization and Computer Graphics, [40] Nikunj Raghuvanshi and Ming C. Lin. 
Interactive sound synthesis for large scale environments. In SI3D 06: Proceedings of the 2006 symposium on Interactive 3D graphics and games, pages , New York, NY, USA, ACM Press. [41] S. Sakamoto, T. Yokota, and H. Tachibana. Numerical sound field analysis in halls using the finite difference time domain method. In RADS 2004, Awaji, Japan, [42] Shinichi Sakamoto, Takuma Seimiya, and Hideki Tachibana. Visualization of sound reflection and diffraction using finite difference time domain method. Acoustical Science and Technology, 23(1):34 39, [43] L. Savioja. Modeling Techniques for Virtual Acoustics. Doctoral thesis, Helsinki University of Technology, Telecommunications Software and Multimedia Laboratory, Report TML-A3, [44] L. Savioja, T. Lokki, and J. Huopaniemi. Auralization applying the parametric room acoustic modeling technique - the diva auralization system [45] A. Sek and B. C. Moore. Frequency discrimination as a function of frequency, measured in several ways. J. Acoust. Soc. Am., 97(4): , April [46] K. L. Shlager and J. B. Schneider. A selective survey of the finite-difference time-domain literature. Antennas and Propagation Magazine, IEEE, 37(4):39 57, [47] Samuel Siltanen, Tapio Lokki, Sami Kiminki, and Lauri Savioja. The room acoustic rendering equation. The Journal of the Acoustical Society of America, 122(3): , September [48] P. Svensson and R. Kristiansen. Computational modelling and simulation of acoustic spaces. In 22nd International Conference: Virtual, Synthetic, and Entertainment Audio, June [49] U. P. Svensson, R. I. Fred, and J. Vanderkooy. An analytic secondary source model of edge diffraction impulse responses. Acoustical Society of America Journal, 106: , November [50] Micah Taylor, Anish Chandak, Zhimin Ren, Christian Lauterbach, and Dinesh Manocha. Fast Edge-Diffraction for Sound Propagation in Complex Virtual Environments. In EAA Auralization Symposium, Espoo, Finland, June [51] Andrea Toselli and Olof Widlund. 
Domain Decomposition Methods. Springer, 1 edition, November [52] N. Tsingos, T. Funkhouser, A. Ngan, and I. Carlbom. Modeling acoustics in virtual environments using the uniform theory of diffraction. In Proc. of ACM SIGGRAPH, pages , [53] Nicolas Tsingos, Carsten Dachsbacher, Sylvain Lefebvre, and Matteo Dellepiane. Extending Geometrical Acoustics to Highly Detailed Architectural Environments. In 19th Intl. Congress on Acoustics, sep [54] J. Tsou, B. Chow, and S. Lam. Performance-based simulation for the planning and design of hyper-dense urban habitation. Computer Aided Architectural Design Research, 12(5): , [55] Kees van den Doel, Paul G. Kry, and Dinesh K. Pai. Foleyautomatic: physically-based sound effects for interactive simulation and animation. In SIGGRAPH 01: Proceedings of the 28th annual conference on Computer graphics and interactive techniques, pages , New York, NY, USA, ACM Press. [56] S. Van Duyne and J. O. Smith. The 2-d digital waveguide mesh. In Applications of Signal Processing to Audio and Acoustics, Final Program and Paper Summaries., 1993 IEEE Workshop on, pages , [57] Michael Wand and Wolfgang Straßer. Multi-resolution sound rendering. In SIGGRAPH 04: ACM SIGGRAPH 2004 Sketches, page 47, 2004.
