Efficient Auralization for Moving Sources and Receiver
Anish Chandak, Lakulish Antani, Dinesh Manocha
University of North Carolina at Chapel Hill, Chapel Hill, NC, USA

ABSTRACT

We address the problem of generating smooth, spatialized sound for interactive multimedia applications, such as VoIP-enabled virtual environments and video games. Such applications have moving sound sources as well as a moving receiver (MS-MR). As a receiver moves, it receives sound emitted from prior positions of a given source. We present an efficient algorithm that correctly models auralization for MS-MR scenarios by performing sound propagation and signal processing from multiple source positions. Our formulation only needs to compute a portion of the response from each source position using sound propagation algorithms, and can be efficiently combined with signal processing techniques to generate smooth, spatialized audio. Moreover, we present an efficient signal processing pipeline, based on block convolution, which makes it easy to combine different portions of the response from different source positions. Finally, we show that our algorithm can be easily combined with well-known geometric sound propagation methods for efficient auralization in MS-MR scenarios, with a low computational overhead (less than 25%) over auralization for static-source, static-receiver (SS-SR) scenarios.

1. INTRODUCTION

Auralization is the technique of creating audible sound from computer-based simulation. It involves modeling sound propagation from a source, followed by signal processing to recreate the binaural listening experience at the receiver. Auralization is important in many interactive multimedia applications, including games, telepresence, and virtual reality.
For example, in massively multiplayer online (MMO) games such as World of Warcraft, spatialization of voice chat using Voice over IP (VoIP) could greatly enhance the gaming experience for multiple players [?]. In MMOs, the game players are moving and correspond to the locations of both the sound sources (the sources of voice chat) and the receivers in the game. In tele-collaboration systems, spatial reproduction of sound is necessary to achieve realistic immersion [?]. Auralization can also provide environmental context by modeling acoustics in the absence of visual cues, convey a feeling of the human condition, or set the mood [?]. For example, reverberation provides a sense of warmth or immersion in the environment. Auralization is indispensable in telepresence systems, as it can provide important sound cues, like the direction of sound sources and the size of the environment, and is frequently used for training [?], therapy [?], tourism [?], exploration [?], and learning [?]. There are recent efforts to provide distributed musical performances using telepresence systems, where the acoustic reproduction of the virtual environment is very important [?]. In many of these applications, moving sound sources and receivers are common; e.g., in virtual reality exposure (VRE) therapy (virtual Iraq) [?] to treat post-traumatic stress disorder (PTSD), the game player (receiver) is moving, as are the sound sources (e.g., helicopters, tanks).
The simultaneous movement of the sources as well as the receiver results in additional challenges with respect to computational overhead and accurate modeling of the acoustic response at each receiver position.

The two key components of auralization are sound propagation and signal processing. During sound propagation, an impulse response (IR) is computed for every source-receiver pair, which encodes the reflections, diffraction, and scattering of sound in the scene from the source position to the receiver position. During the signal processing stage, the IR computed during propagation is convolved with the anechoic (dry) audio signal emitted by the source, yielding the audio signal heard at the receiver.

Since sound may arrive at the receiver from many previous source positions, there are three main challenges in auralization of MS-MR scenes. Firstly, IRs need to be computed between many previous source positions and the current receiver position; the number of IRs that need to be computed is proportional to the maximum delay modeled in the IR. Secondly, during signal processing, many different IRs need to be convolved with the dry audio signal, incurring a substantial computational overhead. Finally, the convolved signals due to different IRs need to be combined to generate a smooth, correct audio signal at the receiver.

Various approaches have been proposed to handle sound propagation and signal processing for auralization in dynamic scenes. However, current methods do not accurately model dynamic scenes where both the sources and receivers are moving simultaneously. We present novel auralization techniques for such dynamic scenes. Some of the new components of our work include:

Auralization for MS-MR Scenes: We present an
algorithm to accurately model auralization for dynamic scenes with simultaneously moving sources and a moving receiver (MS-MR). We show that our technique can also be used to perform efficient auralization for moving sources and a static receiver (MS-SR) as well as static sources and a moving receiver (SS-MR).

Efficient Signal Processing Pipeline: We present a signal processing pipeline, based on block convolution, that efficiently computes the final audio signal for MS-MR scenarios by convolving appropriate blocks of different IRs with blocks of the input source audio.

Modified Sound Propagation Algorithms: We extend existing sound propagation algorithms, based on the image-source method and pre-computed acoustic transfer, to efficiently model propagation for MS-MR scenarios. Our modified algorithms are quite general and applicable to all MS-MR scenarios.

Low Computational Overhead: We show that our sound propagation techniques and signal processing pipeline have a low computational overhead (less than 25%) over auralization for SS-SR scenarios.

The rest of the paper is organized in the following manner. We discuss prior work related to sound propagation and signal processing for auralization in Section 2. Section 3 gives a brief overview of auralization. In Section 4, we present our algorithm for auralization in MS-MR scenes. We present a signal processing pipeline and extend two recent sound propagation algorithms for MS-MR scenes in Section 4.2 and Section 5, respectively. Finally, we describe our results in Section 6.

2. RELATED WORK

In this section, we briefly review prior work related to sound propagation and signal processing for auralization.

2.1 Sound Propagation

The propagation of sound in a medium is described by the acoustic wave equation, a second-order partial differential equation. Various methods have been proposed to solve the wave equation, and we summarize them below.
Numerical Methods: Various classes of numerical methods have been applied to solve the wave equation [?], such as the Finite Element Method (FEM), the Boundary Element Method (BEM), the Finite-Difference Time-Domain (FDTD) method, and Digital Waveguide Meshes (DWM). FDTD methods are widely used in room acoustics [?] due to their simplicity. However, FDTD methods are computationally intensive, and scale as the fourth power of the maximum simulated frequency and linearly with the scene volume. These methods, when applied to medium-sized scenes using a cluster of computers, can take tens of GBs of memory and tens of hours of computation time [?]. Recently, an Adaptive Rectangular Decomposition (ARD) technique [?] was proposed, which achieves two orders of magnitude speed-up over FDTD methods and other state-of-the-art numerical techniques. In practice, these numerical algorithms can compute an accurate acoustic response. However, they are quite expensive for large acoustic spaces or dynamic scenes.

Geometric Methods: Geometric acoustics provides approximate solutions to the wave equation for high-frequency sound sources. Broadly, geometric acoustics algorithms can be divided into pressure-based and energy-based methods. Pressure-based methods model specular reflections and edge diffraction, and are essentially variations of the image-source method [?]. Ray tracing [?], beam tracing [?], frustum tracing [?], and several other techniques [?] have been proposed to accelerate the image-source method for specular reflections. Edge diffraction is modeled by the Uniform Theory of Diffraction (UTD) [?] or the Biot-Tolstoy-Medwin (BTM) model of diffraction [?]. Energy-based methods are typically used to model diffuse reflections and propagation effects when interference of sound waves is not important. The room acoustic rendering equation [?] is an integral equation generalizing energy-based methods.
Many different methods, including ray tracing [?], phonon tracing [?], and radiosity [?], have been applied to solve the room acoustic rendering equation. Geometric methods are efficient and can handle dynamic scenes, but cannot accurately model low-frequency interactions with objects in a scene.

Precomputation-based Methods: Sound propagation is computationally challenging, and many interactive applications like video games allow a very small compute and memory budget (< 10% of the total) for auralization. Hence, precomputation-based auralization approaches are becoming increasingly popular. Precomputation-based methods using ARD [?] have recently been developed. They compute the acoustic response of a scene from several sampled source positions; at run-time these responses are interpolated given the actual position of a moving source. Energy-based precomputation approaches that can model high orders of reflection using precomputed surface responses [?] or precomputed transfer operators [?] have also been proposed. These methods precompute the acoustic response at points on the surface of the scene and use them at run-time to compute the impulse response based on the source and receiver positions. Pressure-based precomputation methods based on image-source gradients have also been proposed [?].

2.2 Signal Processing for Auralization

Auralization is a vital component in some VR systems, such as DIVA at the Helsinki University of Technology [?] and RAVEN at Aachen University [?]. In addition, real-time auralization systems have been developed based on beam tracing [?] and frustum tracing [?]. These systems use sound propagation engines that compute IRs at interactive rates and signal processing pipelines that use these IRs to generate smooth, spatialized audio signals for a receiver.

Artifact-Free Audio: IRs change when sources and receivers move in a dynamic scene. This can lead to artifacts in the final audio signal.
Approaches based on parametric interpolation of delays and attenuations [?,?], image-source interpolation [?,?], and windowing-based filtering [?] have been proposed to handle discontinuities in IRs and audio signals.
Figure 1: An overview of auralization for multimedia applications. The source and receiver configuration is sent by the application to the sound propagation engine. The sound propagation engine asynchronously updates the impulse response (IR) for the source and receiver configuration and sends it to the signal processing pipeline. The signal processing pipeline buffers the incoming audio frames, interpolates IRs, convolves them with the buffered input audio, and interpolates the convolved results to generate a smooth final audio signal.

Efficient Signal Processing: Processing a large number of IRs from a large number of sound sources at interactive rates is computationally challenging. Efficient techniques to compute the final audio signal based on perceptual optimizations [?], Fourier-domain representations [?], and clustering of sound sources [?,?] have been proposed.

3. BACKGROUND

In this section, we present an overview of auralization for multimedia applications (see Figure 1).

3.1 Impulse Responses

Most indoor sound propagation algorithms operate on the assumption that the propagation of sound from a source to a receiver can be modeled by a linear, time-invariant (LTI) system. As a result, the propagation effects are described by computing an impulse response (IR) between the source and the receiver. The IR encodes spatial information about the size of the scene and the positions of objects in it by storing the arrival of sound waves from the source at the receiver as a function of time. Given an arbitrary sound signal emitted by the source, the signal heard at the receiver can be obtained by convolving the source signal with the IR. Figure 2 shows an example of an IR; it is usually divided into the direct response (DR), early response (ER), and late response (LR). For example, in a church or a cathedral, an IR could be 2-3 seconds long, as sound emitted by a source will reflect and scatter and reach the receiver with a delay of up to 2-3 seconds, decaying until it is not audible.
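The LTI convolution model above can be sketched in a few lines of NumPy. The sample rate, reflection delays, and decay constant below are illustrative values chosen for the sketch, not parameters from the paper:

```python
import numpy as np

fs = 8000                              # sample rate in Hz (illustrative)
rng = np.random.default_rng(0)

# Toy 1-second IR: a direct spike (DR), two early reflections (ER),
# and an exponentially decaying noise tail (LR) starting at 100 ms.
ir = np.zeros(fs)
ir[0] = 1.0                            # direct response
ir[int(0.02 * fs)] = 0.5               # early reflection at 20 ms
ir[int(0.07 * fs)] = 0.3               # early reflection at 70 ms
late = np.arange(int(0.1 * fs), fs)    # late response after 100 ms
ir[late] = 0.05 * rng.standard_normal(late.size) * np.exp(-3.0 * late / fs)

dry = rng.standard_normal(fs)          # 1 second of anechoic (dry) source audio
wet = np.convolve(dry, ir)             # signal heard at the receiver
```

Since the first IR tap is the unit direct-response spike, the convolved ("wet") signal begins with exactly the dry signal's first sample.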
For such an IR, the DR is the direct sound from the source to the receiver, the ER is typically the sound reaching the receiver within the first 100 ms, and the LR is the sound reaching the receiver more than 100 ms after being emitted from the source.

However, the LTI assumption is only valid for SS-SR scenarios. For MS-MR scenarios, sound propagation cannot be modeled as an LTI system. Therefore, there is no well-defined IR between the source and the receiver. Fortunately, propagation for MS-MR scenarios can be modeled using time-varying IRs. We shall use this observation in Section 4 to develop an auralization framework for MS-MR scenarios.

Figure 2: Example of a typical IR. It is composed of the direct response (DR), early response (ER), and late response (LR). The DR is the direct sound from the source to the receiver, the ER is typically the sound arriving at the receiver in the first 100 ms, and the LR is the sound arriving at the receiver after 100 ms.

3.2 Auralization Pipeline

There are two key components of an auralization pipeline: (a) the sound propagation engine, and (b) the signal processing pipeline. Figure 1 shows an overview of auralization for interactive multimedia applications. The simple convolution framework described in Section 3.1 is sufficient for acoustical analysis of a space, where the sound sources and receiver are fixed and the user is not actively interacting with the system. But it is not sufficient for interactive multimedia applications, which pose many challenges. Firstly, due to the interactive nature of such applications, audio must be streamed to the user in real-time. For example, voice chat among players in MMOs must be played back to the players in real-time for effective in-game communication. Therefore, the audio played at the source is sent to the signal processing pipeline in small chunks, called audio frames. Sound propagation and binaural audio effects are applied to each audio frame.
The size of the audio frames is chosen by the application, typically based on the allowable latency in the system and the time required to apply propagation effects to a given audio frame. The size of an audio frame could be anywhere from 10 ms to 100 ms depending on the application, and therefore tens of audio frames need to be processed per second. Thus, a signal processing pipeline
for auralization in interactive applications needs to handle streaming audio frames in real-time.

Figure 3: Sound propagation for MS-MR. (a) Paths for a moving source and moving receiver. Source and receiver positions are updated during each frame. S_k and R_k denote the positions of the source and receiver during the k-th frame. (b) The sound received at the receiver R_k comes from the source positions {S_k, S_{k-1}, ..., S_{k-L+1}}, whose corresponding audio frames {s_k(t), s_{k-1}(t), ..., s_{k-L+1}(t)} are modified by the direct and early response for the latest source positions {S_k, S_{k-1}} and by the late response for the earlier source positions {S_{k-2}, ..., S_{k-L+1}}.

Secondly, these interactive applications could involve complex scenes, and current algorithms may not be able to perform sound propagation at the required frame rate. Therefore, sound propagation is performed asynchronously with respect to signal processing, as shown in Figure 1. For interactive applications, a modest latency in updating the IRs in a scene is acceptable [?], and therefore the sound propagation engine only needs to update the IRs asynchronously at a few frames per second. Thirdly, as these interactive applications involve dynamic MS-MR scenes, it is important to reduce any artifacts due to the dynamic scenarios, and the final audio signal must be smooth. Due to the movement of the sources and receiver, the IRs used for two subsequent audio frames may be different, as these IRs are updated asynchronously. This could lead to a discontinuity in the final audio signal at the boundary between two audio frames. Hence, interpolation of IRs or smoothing of the audio at the frame boundaries is performed to ensure an artifact-free final audio signal. Finally, as the application may have a large number of sound sources, the signal processing pipeline must be efficient and should be able to handle the convolution of IRs for a reasonable number of sound sources in real-time.

4.
AURALIZATION FOR MS-MR

In this section, we present our auralization technique for MS-MR scenarios. Moreover, our technique can also be specialized for MS-SR and SS-MR scenarios. Table 1 presents the notation used throughout this section and the rest of the paper.

Table 1: Definitions of symbols and other notation used in this paper.

  ·              multiplication operator
  *              convolution operator
  t              time in seconds
  T              length of an audio frame in seconds
  N              frame rate (= 1/T)
  k              frame number (ranging from 0 upwards)
  S_k            position of the source in frame k
  R_k            position of the receiver in frame k
  w(t)           window function (non-zero over a period of length T)
  w_k(t)         window function used to select frame k, w_k(t) = w(t - kT)
  s(t)           signal emitted by the source
  s_k(t)         signal emitted by the source in frame k, s_k(t) = s(t) · w_k(t)
  r_k(t)         signal received by the receiver in frame k
  r(t)           signal received by the receiver, r(t) = Σ_{k=0}^{∞} r_k(t)
  g_{k1,k2}(t)   impulse response between R_{k1} and S_{k2}
  l              length of an impulse response in seconds; g_{k1,k2}(t) = 0 for t > l
  L              length of an impulse response in frames, L = ⌈l/T⌉

Note that the audio emitted in different frames could overlap in time, depending on the window function chosen. A windowing function limits the support of a signal. For clarity of exposition, we use a square window function such that w(t) = 1 for 0 ≤ t < T, and w(t) = 0 elsewhere. However, to compute a smooth audio signal, an overlapping windowing function such as the Hamming window is used.

Next, we present a mathematical formulation to compute the audio signal heard at the receiver in a given frame, r_k(t), for MS-MR scenarios (see Figure 3 and Figure 4). In a given MS-MR scenario, sound reaching R_k arrives from many previous source positions {S_k, S_{k-1}, ..., S_{k-L+1}}, as the sound emitted from previous source positions (up to L audio frames ago) propagates through the scene before arriving at R_k.
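The framing notation of Table 1 can be made concrete with a small sketch. With a square window, s_k(t) = s(t) · w_k(t) simply extracts the k-th frame, and the frames sum back to the original signal; an overlapping window such as the Hamming window would instead require overlap-add with complementary weights. All sizes here are illustrative:

```python
import numpy as np

fs = 1000                        # samples per second (illustrative)
T = 0.1                          # frame length in seconds
n = int(fs * T)                  # samples per frame
s = np.random.default_rng(1).standard_normal(5 * n)   # 0.5 s of source audio

def frame(s, k):
    """s_k(t) = s(t) * w_k(t) with a square window w(t) = 1 on [0, T)."""
    w_k = np.zeros_like(s)
    w_k[k * n:(k + 1) * n] = 1.0
    return s * w_k

frames = [frame(s, k) for k in range(5)]
# With a square window, the frames tile the signal exactly:
assert np.allclose(sum(frames), s)
```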
For any previous source position S_{k-i}, the sound reaching the listener can be obtained by convolving s_{k-i}(t), the sound emitted from the source in frame k-i, with g_{k,k-i}(t), the IR between S_{k-i} and R_k. Since the listener is at R_k only during audio frame k, we isolate the relevant portion of the convolved signal by applying a square window w_k(t). This results in the following equation:

    r_k(t) = Σ_{i=0}^{L-1} [g_{k,k-i}(t) * s_{k-i}(t)] · w_k(t)    (1)

Equation 1 correctly models the audio signal that will
reach the receiver R_k from the prior source positions. Note that none of the IRs {g_{k,k}(t), g_{k,k-1}(t), ..., g_{k,k-L+1}(t)} were computed in previous frames, and therefore L new IRs need to be computed during every frame, between each of the prior L source positions {S_k, S_{k-1}, ..., S_{k-L+1}} and the current receiver position R_k.

Figure 4: Signal processing pipeline for MS-MR. The input audio signal s(t) is multiplied with the window function w(t) to generate the audio frames {s_k(t), s_{k-1}(t), ..., s_{k-L+1}(t)} for source positions {S_k, S_{k-1}, ..., S_{k-L+1}}. These audio frames are convolved with the corresponding IRs {g_{k,k}(t), g_{k,k-1}(t), ..., g_{k,k-L+1}(t)} and multiplied with the window function w_k(t) to generate the audio frame for the receiver position R_k.

A naïve auralization technique would compute L new IRs during sound propagation per frame and perform L full convolutions of the IRs with the current audio frame. For example, for an IR of length 1 second and audio frames of size 100 ms, sound propagation for 10 different IRs would be performed per frame, and 10 full convolutions between the IRs and the current audio frame would be computed per frame. This may not fit within the per-frame convolution budget or the per-frame sound propagation budget, and may lead to glitches in the final audio signal or high latency in updating the IRs.

Key Observations: Our formulation is based on the following property:

Lemma 1: For a given receiver position R_k and source position S_{k-i}, only the interval [(i-1)T, (i+1)T] of the IR g_{k,k-i}(t) contributes to r_k(t).

Intuitively, the sound emitted at source position S_{k-i} can arrive at R_k only with a delay in [(i-1)T, (i+1)T], assuming that a square window w_k(t) is applied to compute the final audio signal at R_k. Thus, only an interval of the IR g_{k,k-i}(t) needs to be computed to generate the final audio signal at R_k.
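Lemma 1 can be checked numerically: convolving an audio frame with the full IR and keeping only the samples that land in the receiver's frame gives the same result as convolving with just the IR block on [(i-1)T, (i+1)T]. A minimal sketch (frame size, IR length, and lag are arbitrary illustrative values):

```python
import numpy as np

n = 100                          # samples per audio frame (T times sample rate)
L = 8                            # IR length in frames
i = 3                            # lag: frame emitted at k-i, heard at k
rng = np.random.default_rng(2)

g = rng.standard_normal(L * n)   # full IR g_{k,k-i}(t)
s_frame = rng.standard_normal(n) # audio frame s_{k-i}(t)

# Full convolution; keep only the samples landing in receiver frame k,
# i.e. at delays [iT, (i+1)T) relative to the frame's emission time.
full = np.convolve(g, s_frame)[i * n:(i + 1) * n]

# Lemma 1: only the IR block on [(i-1)T, (i+1)T] contributes.
block = g[(i - 1) * n:(i + 1) * n]
partial = np.convolve(block, s_frame)[n:2 * n]

assert np.allclose(full, partial)
```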
Furthermore, a block convolution framework is well-suited for signal processing in MS-MR scenarios, as different blocks of the IRs can be convolved with the corresponding audio blocks. To compute the final audio signal at R_k, the block of the IR g_{k,k-i}(t) in the interval [(i-1)T, (i+1)T] is convolved with the input audio frame s_{k-i}(t). This minimizes the computation in the signal processing pipeline, and we will show in Section 6 that it leads to efficient auralization for MS-MR scenarios.

4.1 Auralization for SS-MR and MS-SR

Equation 1 can be specialized to derive similar equations for SS-MR and MS-SR scenarios. In SS-MR scenarios, the source is static, i.e., S_1 = S_2 = ... = S_k. Therefore, g_{k,k-i}(t) = g_{k,k}(t), and the IR can be moved out of the summation in Equation 1, yielding the following simplified equation:

    r_k(t) = [g_{k,k}(t) * Σ_{i=0}^{L-1} s_{k-i}(t)] · w_k(t)    (2)

Hence, SS-MR scenarios are easier to model, as they require the computation of only one IR, g_{k,k}(t), per frame. However, the past audio frames {s_k(t), s_{k-1}(t), ..., s_{k-L+1}(t)} need to be buffered in memory.

In MS-SR scenarios, the receiver is static, i.e., R_1 = R_2 = ... = R_k. Therefore, g_{k,k-i}(t) = g_{k-i,k-i}(t); that is, of the L IRs needed in every frame, L-1 would have been computed in previous frames and can be re-used. This observation results in the following simplified equations:

    r_k(t) = [Σ_{i=0}^{L-1} r_{k,k-i}(t)] · w_k(t)    (3)

    r_{k,k-i}(t) = g_{k,k-i}(t) * s_{k-i}(t)    (4)
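Equations 3 and 4 suggest a simple caching scheme for MS-SR: each frame performs one new convolution g_{k,k} * s_k, while the contributions of older frames are read from previously convolved and cached results. A minimal sketch, with a fixed-size deque standing in for the cache buffer (names and sizes are illustrative):

```python
import numpy as np
from collections import deque

n, L = 100, 4                    # samples per frame, IR length in frames
rng = np.random.default_rng(3)
cache = deque(maxlen=L)          # holds g_{k-i,k-i} * s_{k-i} for the last L frames

def process_frame(g_kk, s_k):
    """MS-SR frame update: one new IR and one new convolution per frame."""
    cache.appendleft(np.convolve(g_kk, s_k))   # newest result at index 0
    out = np.zeros(n)
    for i, r in enumerate(cache):              # i = frame lag, 0 .. L-1
        out += r[i * n:(i + 1) * n]            # portion arriving in frame k
    return out

for k in range(6):               # stream a few frames through the pipeline
    out = process_frame(rng.standard_normal(L * n), rng.standard_normal(n))
```

The deque's `maxlen` automatically evicts convolution results older than L frames, mirroring the bounded buffer described in the text.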
Thus, MS-SR scenarios are also easier to model, as they require the computation of only one IR, g_{k,k}(t), per frame. Moreover, the convolved outputs from the previous L-1 frames, {r_{k,k-1}(t), r_{k,k-2}(t), ..., r_{k,k-L+1}(t)}, can be cached in a buffer and used to compute r_k(t).

4.2 Signal Processing for Auralization

In this section, we present our signal processing pipeline. The pipeline uses block convolution to compute the final audio signal for MS-MR scenarios. During block convolution for receiver R_k, the block [(i-1)T, (i+1)T] of the IR g_{k,k-i}(t) is convolved with the source signal block s_{k-i}(t). The results of these block convolutions for the past L source positions are added together and played back at the receiver after applying the window function w_k(t).

In our formulation, block convolution is implemented by computing a short-time Fourier transform (STFT) of each frame of input audio and of the IRs. Let the STFT of s_k(t) be s_k(ω) and the STFT of g_{k,k-i}(t) over the interval [(i-1)T, (i+1)T] be g_{k,k-i}(ω). Then the final audio r_k(ω) can be computed as follows:

    r_k(ω) = Σ_{i=0}^{L-1} g_{k,k-i}(ω) · s_{k-i}(ω)    (5)

Finally, we compute an inverse STFT to obtain r_k(t) from r_k(ω). Note that as the IRs change from one frame to another, there may be discontinuities in the final audio signal at the boundaries between consecutive frames. Therefore windowing functions, such as the Hamming window, are used instead of a square window to minimize such artifacts.

5. SOUND PROPAGATION FOR MS-MR

In this section, we present two efficient modifications of sound propagation algorithms for modeling MS-MR scenarios. First, based on Lemma 1, we can efficiently compute the L IRs from the prior source positions {S_k, S_{k-1}, ..., S_{k-L+1}} using the image-source method. Second, we extend a precomputation-based sound propagation algorithm [?]
that stores responses from the sources at surface samples, by observing that the responses at the surface samples from the past source locations {S_{k-1}, ..., S_{k-L+1}} would have been computed in previous frames and can be stored to efficiently generate the L IRs corresponding to the new receiver position.

5.1 MS-MR Image-Source Method

Figure 5 gives a short overview of the image-source method. This method is used to compute specular reflections and can be extended to handle edge diffraction. The image-source method can be divided into two main steps: (a) image tree construction, based on the geometric representation of the environment, and (b) path validation, to compute valid specular paths in the image tree.

In the first step, we construct an image tree from the receiver R_k up to a user-specified m orders of reflection. It is constructed by recursively reflecting the receiver, or an image of the receiver, over the scene primitives and storing the image-sources as nodes in the tree. Each node in the tree thus corresponds to a single specular propagation path. We denote this image tree by T(R_k, m). We construct the image tree with the receiver position at the root, as this leads to a more efficient implementation than constructing an image tree from the source position. Figure 5 shows an example of an image tree.

To compute the final specular paths from the image tree, the source position is required. Hence, in the second step, given a source position S_k, we traverse the tree, and for each node in T(R_k, m), we determine whether the corresponding propagation path is valid. Some of the paths may be invalid for the following reasons: (a) a path from the source S_k to an image-source, or between two image-sources, might be obstructed by other objects in the scene, or (b) a path may not lie within the scene primitive which induced the image-source. In such cases, a specular path does not exist for the corresponding node in the image tree.
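A minimal two-dimensional sketch of the image-source construction, using a rectangular room and first-order images only. The geometry and the helper `mirror()` are hypothetical, and the occlusion and primitive-extent checks of the full path-validation step are omitted:

```python
import numpy as np

def mirror(p, wall):
    """Mirror point p across a wall given as (point_on_wall, unit_normal)."""
    q, nvec = wall
    return p - 2.0 * np.dot(p - q, nvec) * nvec

# A 4 m x 3 m rectangular room: each wall as (point on wall, inward normal).
walls = [
    (np.array([0.0, 0.0]), np.array([1.0, 0.0])),   # left wall,  x = 0
    (np.array([4.0, 0.0]), np.array([-1.0, 0.0])),  # right wall, x = 4
    (np.array([0.0, 0.0]), np.array([0.0, 1.0])),   # floor,      y = 0
    (np.array([0.0, 3.0]), np.array([0.0, -1.0])),  # ceiling,    y = 3
]

R = np.array([1.0, 1.0])         # receiver position
S = np.array([3.0, 2.0])         # source position

# First-order image tree T(R, 1): one image of the receiver per wall.
images = [mirror(R, w) for w in walls]

# Path validation (simplified): the specular path via a first-order image
# has length |S - image|, giving an arrival delay of length / c.
c = 343.0                        # speed of sound in m/s
delays = [np.linalg.norm(S - img) / c for img in images]
```

In a full implementation each surviving path would still be tested for occlusion and for lying within the inducing primitive, as described above.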
In MS-MR scenarios, path validation needs to be performed for all the source positions {S_k, S_{k-1}, ..., S_{k-L+1}}. However, we require only the portion of the IR in the interval [(i-1)T, (i+1)T] for a given receiver position R_k and source position S_{k-i}. Therefore, we convert this time interval into the distance interval [(i-1)cT, (i+1)cT], where c is the speed of sound, and use it to eliminate paths during the path validation step: a path is not considered valid if its length lies outside this distance interval. We will show in Section 6 that our formulation substantially reduces the overhead of computing image-source IRs for MS-MR scenarios.

5.2 MS-MR Direct-to-Indirect Acoustic Radiance Transfer

Direct-to-indirect (D2I) acoustic radiance transfer is a recently proposed precomputation-based approach to compute high orders (up to 100 or more) of diffuse reflections interactively [?]. The key idea is to sample the scene surfaces and precompute a complex-valued transfer matrix, which encodes higher-order diffuse reflections between the surface samples. At run-time, the IR is computed as follows: (a) given the source position, compute a direct response at each sample point; (b) given the direct responses at the sample points, compute indirect responses at these sample points by performing a matrix-vector multiplication with the transfer matrix; (c) given the direct and indirect responses, compute the final IR at the receiver by tracing rays from the receiver (or "gathering").

In MS-MR scenarios, we need to compute IRs for the L previous source locations {S_k, S_{k-1}, ..., S_{k-L+1}}. A naïve implementation would perform all three steps above L times. This is highly inefficient, and would lead to an L-fold slow-down over SS-SR scenarios.
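The run-time steps (a)-(c) can be sketched as follows. The transfer matrix here is random placeholder data, a single frequency band is used for brevity (the actual method stores complex-valued transfer data per band), and the cache dictionary illustrates that the indirect responses at the surface samples are receiver-independent and can therefore be reused across frames:

```python
import numpy as np

m = 64                            # number of surface samples (illustrative)
rng = np.random.default_rng(4)

# Precomputed transfer matrix mapping direct responses at the surface
# samples to higher-order indirect responses (placeholder random data).
Tmat = 0.01 * (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m)))

indirect_cache = {}               # frame index -> indirect response vector

def indirect_response(k, direct_k):
    """Steps (a)-(b): one matrix-vector product per new source position."""
    if k not in indirect_cache:
        indirect_cache[k] = Tmat @ direct_k
    return indirect_cache[k]

for k in range(3):                # each frame adds only one new transfer
    direct_k = rng.standard_normal(m) + 0j        # step (a): direct response
    indirect_response(k, direct_k)                # step (b), cached
# Step (c) would gather the final IR at the receiver from the cached
# indirect responses of the last L source positions.
```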
We observe that, in this case, all the indirect responses from the previous L-1 source locations {S_{k-1}, ..., S_{k-L+1}} can be stored at the surface samples (as they are independent of the receiver position), and therefore only one indirect response, from source position S_k, needs to be computed for each surface sample per frame. In the final step, the final IR can be gathered from the L different indirect responses. This is much more efficient than the naïve implementation, as can be seen from the overall performance of direct-to-indirect acoustic radiance transfer for MS-MR in Section 6.

6. RESULTS

In this section, we present the results of performance evaluation and benchmarking experiments carried out on our proposed framework. All tests were carried out on an Intel Core 2 Quad desktop with 4GB RAM running Windows 7.
Figure 5: Our modified image-source method. (a) An image tree is constructed by reflecting the receiver recursively across the surfaces in the scene. Here, the receiver R is reflected across the surfaces {A, B, C, D, E, F} to construct first-order images. Image R_A is reflected across surface B to construct the higher-order image R_AB. An image tree, T(R, m), is constructed from these image-sources. (b) Path validation is performed by attaching the source S to nodes in the image tree to compute valid specular paths from the receiver R to the source S. A path is invalid if it is obstructed by objects in the scene (e.g., image R_A) or does not lie within the scene primitive (e.g., image R_B). For MS-MR scenarios, we also test that the delay of the path lies within the appropriate delay range [(i-1)T, (i+1)T] for receiver R_k and source S_{k-i}.

Figure 6: An overview of our modified direct-to-indirect acoustic radiance transfer method. (a) Direct responses are computed at the sample points from the source. (b) Indirect responses are computed at these sample points by performing a matrix-vector multiplication with the transfer matrix; the transfer matrix is computed in a pre-processing step. (c) The final IR is computed at the receiver by tracing rays (or "gathering") from the receiver. For MS-MR scenarios, the responses from the past sources are stored at the surface samples and gathered at run time.

Figure 7: Benchmark scenes. (a) Room (876 triangles), (b) Hall (180 triangles), (c) Sigyn (566 triangles).

Figure 7 shows the scenes we used for benchmarking. We first summarize the performance of our signal processing pipeline (as described in Section 4.2). All timings are reported for a single CPU core. As Table 2 shows, a signal processing pipeline based on block convolution can efficiently handle a large number of sound sources per audio frame.
Table 3 quantifies the computational cost of modeling MS-MR scenarios in the image-source method for sound propagation (see Section 5). In our implementation, the image tree is computed using a single CPU core, whereas path validation is performed using all 4 CPU cores. Column 3 shows the time taken to compute the image tree for 2 orders of reflection. Columns 4 and 6 show the time taken to perform path validation, for SS-SR scenarios and MS-MR scenarios respectively. The computational overhead is shown in Column 8. Table 4 quantifies the computational cost of modeling MS-MR scenarios using direct-to-indirect acoustic radiance transfer (see Section 5). Our implementation of this algorithm uses all 4 CPU cores. Column 3 shows the number of surface samples used for each scene. Column 4 shows the time taken to compute the direct response at each surface sample and to apply the transfer matrix to compute the indirect response at each surface sample. Columns 5 and 7 show the time taken to gather the final IR at the receiver, for SS-SR scenarios and MS-MR scenarios respectively. The computational overhead is shown in Column 9.
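The extra MS-MR work in the image-source method is the per-path delay-range test described in Figure 5. A minimal sketch of that test is shown below; the occlusion check is omitted, and the function names, the single-reflection setup, and the constant `C` are illustrative assumptions rather than the paper's code.

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s (assumed constant)

def reflect(point, plane_point, plane_normal):
    """Mirror a point across a plane (unit normal): the basic operation
    used to build image sources/receivers in the image tree."""
    p = np.asarray(point, float)
    n = np.asarray(plane_normal, float)
    d = np.dot(p - plane_point, n)
    return p - 2.0 * d * n

def delay_in_range(image_receiver, source, i, dT):
    """For a specular chain, the path length equals the straight-line
    distance from the mirrored receiver image to the source. Keep the
    path only if its delay lies in [(i-1)*dT, (i+1)*dT], i.e. it matches
    the i-th past source position."""
    delay = np.linalg.norm(np.asarray(image_receiver, float) - source) / C
    return (i - 1) * dT <= delay <= (i + 1) * dT
```

This shows why the MS-MR overhead in Table 3 is small: per validated path, the extra cost over SS-SR is a single distance computation and interval test.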
Frame size           10 ms      20 ms      50 ms      100 ms
IR length 1 sec      50.6 ms    18.3 ms    10.4 ms    7.9 ms
IR length 2 sec      97.8 ms    36.9 ms    19.1 ms    14.8 ms
IR length 3 sec          ms     54.4 ms    27.0 ms    21.2 ms

Table 2: Timing results for the signal processing pipeline based on STFT. The table shows the time needed to compute 1 second of final audio output for different frame sizes and IR lengths.

Thus, for an IR of length 2 sec and audio frames of size 50 ms, our signal processing pipeline takes about 19 ms to compute 1 second of output audio, and can therefore efficiently handle up to 50 sound sources. Since higher-order reflections are precomputed and stored in a transfer matrix, the overhead percentages shown are constant regardless of the number of orders of reflection.

7. CONCLUSION AND FUTURE WORK

Real-time auralization for dynamic scenes is a challenging problem. We have presented a framework for performing real-time auralization in scenes where both the sound sources and the receiver may move. A key component of this framework is an equation describing how time-dependent impulse responses can be convolved with streaming audio emitted from a sound source to determine the sound signal arriving at the receiver. This equation can be used to derive simpler equations which describe auralization in situations where either the sources, the receiver, or both are static. Another key component is the insight that, for the receiver position in a given audio frame, we do not need full impulse responses from all previous source positions. This insight allows us to suitably modify sound propagation algorithms (such as the image-source method or direct-to-indirect acoustic radiance transfer) to handle moving sources and a moving receiver without incurring a significant computational cost, as compared to the naïve approach of computing full impulse responses from all previous source positions. Correctly performing auralization for scenes with dynamic geometry remains an interesting avenue for future work.
Such scenes commonly occur in interactive virtual environments and video games. Another important question that remains to be addressed is the perceptual impact of correctly modeling auralization for dynamic scenes.

Scene   Triangles   Tree constr.   SS-SR valid.   SS-SR total   MS-MR valid.   MS-MR total   Overhead
Hall    180              s         1.49 ms            s         3.91 ms            s         1.3 %
Room    876              s              ms            s              ms            s         0.3 %
Sigyn   566              s              ms            s              ms            s         0.8 %

Table 3: Computational overhead due to modeling MS-MR scenarios using the image-source method.

Scene   Triangles   Surface samples   Direct+Indirect   SS-SR gather   SS-SR total   MS-MR gather   MS-MR total   Overhead
Hall    180                                ms           31.2 ms            ms        52.8 ms            ms        14.6 %
Room    876                                ms           36.4 ms            ms        76.9 ms            ms        24.8 %
Sigyn   566                                ms                ms            ms             ms            ms        22.7 %

Table 4: Computational overhead due to modeling MS-MR scenarios using direct-to-indirect acoustic radiance transfer.
EURASIP Journal on Applied Signal Processing 2002:11, 1296 1300 c 2002 Hindawi Publishing Corporation Loudspeaker Equalization with Post-Processing Wee Ser Email: ewser@ntuedusg Peng Wang Email: ewangp@ntuedusg
3D sound in the telepresence project BEAMING Olesen, Søren Krarup; Markovic, Milos; Madsen, Esben; Hoffmann, Pablo Francisco F.; Hammershøi, Dorte
Aalborg Universitet 3D sound in the telepresence project BEAMING Olesen, Søren Krarup; Markovic, Milos; Madsen, Esben; Hoffmann, Pablo Francisco F.; Hammershøi, Dorte Published in: Proceedings of BNAM2012
Remote Graphical Visualization of Large Interactive Spatial Data
Remote Graphical Visualization of Large Interactive Spatial Data ComplexHPC Spring School 2011 International ComplexHPC Challenge Cristinel Mihai Mocan Computer Science Department Technical University
Three Effective Top-Down Clustering Algorithms for Location Database Systems
Three Effective Top-Down Clustering Algorithms for Location Database Systems Kwang-Jo Lee and Sung-Bong Yang Department of Computer Science, Yonsei University, Seoul, Republic of Korea {kjlee5435, yang}@cs.yonsei.ac.kr
