Streaming Compressed 3D Data on the Web using JavaScript and WebGL
Guillaume Lavoué, Université de Lyon, CNRS, LIRIS, INSA-Lyon
Laurent Chevalier, VELVET
Florent Dupont, Université de Lyon, CNRS, LIRIS, Université Lyon 1

Abstract

With the development of Web3D technologies, the delivery and visualization of 3D models on the web is now possible and is bound to increase, both in industry and for the general public. However, the interactive remote visualization of 3D graphic data in a web browser remains a challenging issue. Indeed, most existing systems suffer from latency (due to the data downloading time) and a lack of adaptation to heterogeneous networks and client devices (i.e. a lack of levels of details); these drawbacks seriously affect the quality of user experience. This paper presents a technical solution for the streaming and visualization of compressed 3D data on the web. Our approach leans upon three strong features: (1) a dedicated progressive compression algorithm for 3D graphic data with colors, producing a binary compressed format which allows a progressive decompression with several levels of details; (2) the introduction of a JavaScript halfedge data structure allowing complex geometrical and topological operations on a 3D mesh; (3) a multi-thread JavaScript / WebGL implementation of the decompression scheme allowing 3D data streaming in a web browser. Experiments and comparisons with existing solutions show promising results in terms of latency, adaptability and quality of user experience.

CR Categories: I.3.2 [Computer Graphics]: Graphics Systems - Remote systems; I.3.6 [Computer Graphics]: Methodology and Techniques - Graphics data structures and data types

Keywords: 3D graphics, Web3D, WebGL, progressive compression, level-of-details, JavaScript.
1 Introduction

Technological advances in the fields of telecommunication, computer graphics, and hardware design during the last two decades have contributed to the development of a new type of multimedia: three-dimensional (3D) graphic data. This growing 3D activity has been made possible by the development of hardware and software both for professionals (especially 3D modeling tools for creation and manipulation) and for end users (3D graphics accelerated hardware, a new generation of mobile phones able to visualize 3D models). Moreover, the visualization of 3D content through the web is now possible thanks to specific formats like X3D, technologies like the very recent WebGL specification, which makes the GPU controllable by JavaScript, and norms like HTML5. The Web3D concept (i.e. communicating 3D content on the web) is seen as the future of the Web 2.0, and is supported by many organizations such as the W3C and the Web3D Consortium. Numerous application domains are directly concerned by 3D data (some of them are illustrated in figure 1): mechanical engineering, scientific visualization, digital entertainment (video games, serious games, 3D movies), medical imaging, architecture, and cultural heritage (e.g. 3D scanning of ancient statues). In most of these applications, 3D data are represented by polygonal meshes, which model the surface of the 3D objects with a set of vertices and facets (see the zoomed part on the right in figure 1). This type of data is more complex to handle than other media such as audio signals, images or videos, and has thus brought new challenges to the scientific community. In particular, the interactive remote visualization of 3D graphic data in a web browser remains a challenging issue. As observed by Di Benedetto et al.
[2010], the delivery and visualization of 3D content through the web came with a considerable delay with respect to other digital media such as images and videos, mainly because of the higher requirements of 3D graphics in terms of computational power. The first systems used Java applets or ActiveX controls to expose 3D data in a web browser; recently, however, the WebGL specification has been introduced [Khronos 2009] and will probably boost the use of 3D data on the web. Many industries have an interest in providing 3D content through the web, including online video games (to represent virtual worlds), 3D design and e-business companies. Moreover, like the existing huge repositories of pictures (e.g. Flickr) or videos (e.g. YouTube), community web 3D model repositories are now appearing, such as Google 3D Warehouse. As stated in the recent study by Mouton et al. [2011], web applications have major benefits compared to desktop applications: firstly, web browsers are available for all mainstream platforms including mobile devices, and secondly the deployment of web applications is straightforward and does not require the user to install or update any software or libraries other than the browser. All these reasons point to a strong increase in the use of remote 3D graphics on the web in the near future. An efficient system for interactive remote visualization of large 3D datasets needs to tackle the following technical issues:

1. Removing the latency: in most existing systems, 3D data are fully loaded in an uncompressed form, hence there is latency before visualization. This latency is particularly critical for web applications.

2. Allowing the adaptation of the levels of details to different transmission networks and client hardware, in order to maintain a good frame rate even on low-power devices such as smartphones.

These issues can be resolved by the use of progressive compression techniques [Peng et al. 2005].
Indeed, progressive compression makes it possible to achieve high compression ratios (and thus fast transmission) and also to produce different levels of details (LoD), allowing the complexity of the data to be adapted to the remote device by stopping the transmission when a sufficient LoD is reached. Moreover, users
are able to quickly visualize a coarse version of the 3D data first, instead of waiting for the full objects to be downloaded before they can be displayed. Figure 2 illustrates different levels of details for the power plant tank 3D model. These functionalities are able to hide the latency even for huge data and make real-time interaction (i.e. a high frame rate) possible even on mobile devices.

Figure 1: Several 3D graphical models illustrating different application domains. From left to right: Neptune (250k vertices - cultural heritage), Venus (100k vertices - cultural heritage), Casting (5k vertices - mechanical engineering), Monkey (50k vertices - digital entertainment) and Tank (160k vertices - scientific visualization), which represents a nuclear power plant tank with temperature information (provided by the R&D division of EDF).

We introduce a technical solution for web-based remote 3D streaming and visualization which tackles the two issues mentioned above. Our system runs natively in a web browser without any plug-in installation and leans upon three strong features: (1) a dedicated progressive compression algorithm for 3D graphic data with colors, producing a binary compressed .p3dw file which allows a progressive decompression with several levels of details; (2) the introduction of Polyhedron HalfEdge.js, a JavaScript halfedge data structure allowing geometrical and topological operations on a 3D mesh; (3) a multi-thread JavaScript implementation of the associated decompression scheme, using Polyhedron HalfEdge.js and WebGL, allowing 3D data streaming in a web browser. The next section details the state of the art on 3D object compression and web-based 3D data delivery and visualization; then section 3 details our progressive compression algorithm while section 4 presents our JavaScript halfedge data structure and the implementation of the decompression.
Finally, section 5 illustrates several compression and streaming experiments and comparisons with the state of the art, while section 6 concludes the paper.

2 State of the art

2.1 Progressive compression

The main idea of progressive (or multi-resolution) compression is to represent the 3D data by a simple coarse model (low resolution) followed by a refinement sequence permitting an incremental refinement of the 3D model up to the highest resolution (see figure 2, from left to right). This functionality is particularly useful in the case of remote visualization since it allows adapting the level of details to the capacity of the visualization device, the network bandwidth and the user's needs. Most existing approaches consist of decimating the 3D mesh (vertex/edge removals) while storing the information necessary to invert the process, i.e. the refinement (vertex/edge insertions during decoding). The existing approaches differ in the way they decimate the mesh and store the refinement information (where and how to refine?). Since the pioneering work of Hoppe [1996], many methods have been introduced [Taubin and Rossignac 1998; Pajarola and Rossignac 2000; Alliez and Desbrun 2001; Gandoin and Devillers 2002; Peng and Kuo 2005; Valette et al. 2009; Peng et al. 2010; Lee et al. 2012]; however, most of them are not suited to remote visualization, since some critical issues have been almost totally ignored by the scientific community. Most existing progressive compression techniques have concentrated their efforts on optimizing the compression ratio; however, in a remote visualization scenario, improving the quality of the levels of details (see figure 2) is more important than gaining a few bits on the size of the whole compressed stream. Only some very recent methods have tried to focus on the quality of the levels of details [Tian and AlRegib 2008; Peng et al. 2010; Lee et al. 2012]. Only a few existing techniques [Tian and AlRegib 2008; Lee et al.
2012] allow the progressive compression of attributes such as color, texture or other information attached to the 3D data. The level-of-details management of these attributes is particularly important given their influence on the perceptual quality of the visualized object. One of the main objectives of progressive compression is to speed up the transmission of the 3D data by decreasing the size of the content to transmit. However, if the decompression time is too long, the user loses all the benefit of the compression: even if the transmission is fast, a long decompression time will induce latency for the user. Therefore
a critical issue for a compression scheme is to optimize the decompression time by relying on simple yet efficient schemes. However, at present very few progressive compression algorithms have focused on optimizing and simplifying this step. Our objective is to focus on that point to take full advantage of the gain in transmission time provided by the small size of the compressed stream. Such a simplified decoding algorithm also makes possible its transcription into JavaScript for a full web integration using WebGL. This time issue is of major importance for a realistic industrial use of progressive compression.

Figure 2: Progressive decoding of the compressed P3DW file corresponding to the Tank model (with and without wireframe). From left to right: 12%, 28%, 56% and 100% of the stream are decoded, respectively. Such progressive decoding allows streaming the data in order to obtain very quickly a good approximation of the model. Moreover, it allows an adaptation to the client device hardware (on a high-performance workstation the full-resolution model can be loaded and visualized interactively, but on a smartphone a low-resolution version is to be preferred). The original ASCII OFF file size is 12,762 kb.

In our system, we propose an adaptation of the recent progressive compression algorithm of Lee et al. [2012] which fulfills these requirements (quality of the LoD, color handling, decompression simplicity). Our algorithm produces a binary compressed file (.P3DW) that allows a fast and simple progressive decompression. We have selected the algorithm of Lee et al. [2012] since it produces among the best state-of-the-art results regarding rate-distortion performance and it is publicly available in the MEsh Processing Platform (MEPP) [Lavoue et al.
2012].

2.2 Remote 3D visualization on the web

As stated in the introduction, the delivery and visualization of 3D content through the web came with a huge delay with respect to other digital media, mainly because of the higher requirements of 3D graphics in terms of computational power. Some web solutions exist, like ParaViewWeb [Jomier et al. 2011], that compute the 3D rendering on the server side and then transmit only 2D images to the client. However, we focus this state of the art on client-side rendering solutions where the full 3D data are transmitted. Much work on web-based remote visualization of 3D data has been conducted for remote collaborative scientific visualization, for which an overview has recently been given by Mouton et al. [2011]. ShareX3D [Jourdain et al. 2008] provides a web-based remote visualization of 3D data; its web rendering is based on Java. The COVISE (COllaborative VIsualization and Simulation Environment) platform, in its latest version [Niebling and Kopecki 2010], offers a web client implemented using JavaScript and WebGL. However these systems, like most existing ones, use XML-based ASCII formats such as VRML or X3D [Jourdain et al. 2008] or JavaScript vertex arrays [Niebling and Kopecki 2010] to exchange the 3D data, which involves a significant latency due to the large file size. To resolve this latency issue, some compression methods have been proposed. A simple binary encoder (X3Db) has been introduced for the X3D format; however, it produces poor compression rates (around 1:5 with respect to the original ASCII X3D size). Isenburg and Snoeyink [2003] propose a compression method integrated into the VRML/X3D format; however, the decompression needs a dedicated Java client.
Recently, several interesting compression methods allowing decompression in the web browser have been proposed. Google introduced webgl-loader [Chun 2012], a WebGL-based compression algorithm for 3D meshes developed in the context of the Google Body project [Blume et al. 2011]; it is based on UTF-8 coding, delta prediction and GZIP, and produces compression ratios of around 5 bytes/triangle (for encoding coordinates, connectivity and normals). Behr et al. [2012] propose to use images as binary containers for mesh geometry within the X3DOM framework;
they obtain compression ratios of around 6 bytes/triangle in the best configuration. These two approaches produce interesting compression ratios and fast decoding, mostly on the GPU. However, they are single-resolution, hence they do not allow streaming or level-of-details selection. Some authors have proposed solutions for 3D data streaming. Di Benedetto et al. [2010], in their JavaScript library SpiderGL (which leans upon WebGL), propose a structure for managing levels of details; however, it is only suited for adaptive rendering since no streaming or compressed format is associated with it. In the context of remote scientific visualization, Maglo et al. [2010] propose a remote 3D data streaming framework based on a progressive compression algorithm; however, they need a dedicated desktop client (Java and C++) to decompress the binary stream. A similar early version of this latter work was proposed by Chen and Nishita [2002], who consider an older and less efficient progressive compression scheme. Some work on 3D streaming has also been conducted in the context of online gaming and virtual worlds: Marvie et al. [2011] present a streaming scheme for 3D data based on an X3D extension. Their work is based on the progressive mesh representation introduced by Hoppe [1996] and thus allows streaming and level-of-details selection. However, this progressive representation is not really compressed; moreover, once again a dedicated desktop client is needed to visualize the data (no web integration). Finally, very recently, Gobbetti et al. [2012] proposed a nice multi-resolution structure dedicated to the rapid visualization of large 3D objects on the web; however, it requires complex preprocessing steps (parameterization and remeshing) and cannot handle arbitrary meshes.
In our system, we consider a compressed representation of the 3D data (in the form of a P3DW file) which provides a good compression ratio: around 1:10, equivalent to roughly 3 bytes/triangle when encoding coordinates and connectivity. This compressed format allows progressive decompression and streaming. The whole decompression process is implemented in JavaScript and WebGL, hence the 3D streaming works directly in a web browser without any plug-in.

3 Web-based progressive compression

As stated above, we need a simple decompression scheme to make a JavaScript/WebGL implementation with reasonable processing time possible. For this purpose we have made a web-based adaptation of the progressive algorithm of Lee et al. [2012], which is based on the valence-driven progressive connectivity encoding proposed by Alliez and Desbrun [2001]. During the encoding, the mesh is iteratively simplified into several levels of details until a base mesh (around a hundred vertices) is reached. At each simplification iteration, the information necessary for the refinement is recorded; it contains connectivity, geometry and color data. The encoding process and data are presented below.

3.1 Iterative simplification

The encoding process is based on the iterative simplification algorithm introduced by Alliez and Desbrun [2001]. At each iteration, the algorithm decimates a set of vertices by combining decimation and cleansing conquests to obtain the different levels of details (these two steps are illustrated in figure 3). The decimation conquest consists of removing a set of independent vertices using a patch-based traversal, and then retriangulating the holes left (see fig. 3.b). Then, the cleansing conquest removes vertices of valence 3 in order to regularize the simplified mesh (see fig. 3.c). For regular meshes, the combination of these two conquests performs the inverse √3 subdivision.
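The selection step of the decimation conquest can be illustrated on a plain adjacency graph. The following is a toy greedy sketch of picking an independent set of vertices to remove, not the patch-based halfedge traversal used by the actual algorithm; the function and variable names are ours:

```javascript
// Toy sketch: pick an independent set of vertices to decimate, in the spirit
// of the decimation conquest described above. The mesh connectivity is given
// as an adjacency list: vertex id -> array of neighbor ids. Once a vertex is
// selected for removal, its whole 1-ring is protected, so no two removed
// vertices are ever adjacent.
function pickIndependentSet(adjacency) {
  const selected = new Set();
  const blocked = new Set();
  for (const v of Object.keys(adjacency)) {
    if (blocked.has(v)) continue;          // a neighbor was already selected
    selected.add(v);                       // decimate this vertex...
    for (const n of adjacency[v]) blocked.add(String(n)); // ...protect its 1-ring
  }
  return selected;
}
```

Every unselected vertex ends up adjacent to a selected one, so the set is maximal, which is what lets each conquest remove a large fraction of the vertices per iteration.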
For non-regular meshes, the retriangulation follows a deterministic rule so that the mesh connectivity is kept as regular as possible during the simplification process. These iterative simplification steps are applied until a coarse base mesh is reached.

Figure 3: One iteration of the progressive encoding algorithm. (a) Original mesh, (b) intermediate result after decimation (red vertices are removed) and (c) final result after cleansing (blue vertices are removed).

3.2 Encoded information

As presented above, during the encoding the mesh is iteratively simplified (decimation + cleansing). At each simplification step, the connectivity, geometry and color information of each removed vertex is written to the compressed stream, to allow the refinement at decoding time. For the connectivity, as proposed in [Alliez and Desbrun 2001], our algorithm only encodes the valences of the removed vertices, plus some null patch codes for vertices that could not be removed because of irregular connectivity. This valence information is sufficient for the connectivity refinement at decoding time. In [Alliez and Desbrun 2001] the valence values are fed to an arithmetic coder. In our algorithm, we wish to avoid a complex arithmetic decoding in JavaScript, hence we consider a straightforward binary encoding. We use 3 bits per valence value and null patch code, which gives an average of 10 bits/vertex for the connectivity. For the geometry, Alliez and Desbrun [2001] first apply a global and uniform quantization to the mesh vertex coordinates. When a vertex is removed, its position is predicted from the average position of its 1-ring neighboring vertices and only the prediction residual is encoded (once again using arithmetic coding). Lee et al. [2012] improve this geometry encoding by introducing an optimal adaptation of the quantization precision for each level of details.
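The plain binary coding described above (3 bits per valence or null patch code, uniform quantization of the coordinates) can be sketched as follows. This is an illustrative fragment with names of our own choosing, not the actual layout of the P3DW stream:

```javascript
// Hypothetical sketch of plain binary coding in place of an arithmetic coder:
// small symbols (e.g. valence codes) are packed `bits` bits each into bytes.

function packBits(values, bits) {
  const out = [];
  let acc = 0, nbits = 0;
  for (const v of values) {
    acc = (acc << bits) | v;
    nbits += bits;
    while (nbits >= 8) {
      nbits -= 8;
      out.push((acc >>> nbits) & 0xff);
      acc &= (1 << nbits) - 1;            // keep only the bits not yet emitted
    }
  }
  if (nbits > 0) out.push((acc << (8 - nbits)) & 0xff); // pad the last byte
  return Uint8Array.from(out);
}

function unpackBits(bytes, bits, count) {
  const out = [];
  let acc = 0, nbits = 0, i = 0;
  while (out.length < count) {
    while (nbits < bits) { acc = (acc << 8) | bytes[i++]; nbits += 8; }
    nbits -= bits;
    out.push((acc >>> nbits) & ((1 << bits) - 1));
    acc &= (1 << nbits) - 1;              // drop the consumed bits
  }
  return out;
}

// Uniform Q-bit quantization of a coordinate inside a bounding interval,
// as applied globally to the (x, y, z) coordinates.
function quantize(x, min, max, Q) {
  const levels = (1 << Q) - 1;
  return Math.round(((x - min) / (max - min)) * levels);
}
```

Decoding such a stream only needs shifts and masks, which is the kind of simplicity the JavaScript decoder relies on.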
In our web-based algorithm, we consider a global uniform quantization (on Q bits) of the (x,y,z) coordinates and then simply binary encode them (without any prediction or entropy coding). For the color, Lee et al. [2012] first transform the RGB color components into the CIE L*a*b* representation, which is more decorrelated than the RGB space and thus more appropriate for data compression. In [Lee et al. 2012], the L*a*b* components are then quantized adaptively according to the level of details, and predicted using a color-specific rule. The authors also introduce a color metric to prevent the removal of visually important vertices during the simplification. In our algorithm we also use this metric. However, as for connectivity and geometry, the color encoding is simplified: we apply an 8-bit quantization and a simple binary encoding of the L*a*b* components (no prediction or entropy
coding). In practice, for Q = 12 bits of geometric quantization and without color information, a 3D model is compressed using 46 bits/vertex, i.e. about 2.9 bytes/triangle (3 × 12 = 36 bits of geometry plus about 10 bits of connectivity per vertex, and a mesh contains roughly twice as many triangles as vertices, giving 23 bits, i.e. about 2.9 bytes, per triangle). Basically, we have chosen to lose part of the compression performance in order to decrease the complexity and the decompression time. Figure 4 presents the compressed stream. This stream is naturally decomposed into several parts, each standing for a certain level of details and each containing connectivity (C), geometry (G) and color (Cl) information. The first part is the base mesh (usually around a hundred vertices), which is encoded in a simple binary form. Then each part of the stream, together with the already decompressed level, allows building the next level of details. At the end of the stream decoding, the original object is retrieved (lossless compression).

Figure 4: Format of the encoded stream. Each part of the stream contains geometry (G), connectivity (C) and color (Cl) data needed for mesh refinement.

4 Web integration using WebGL and JavaScript

4.1 A halfedge data structure in JavaScript

The decompression mechanism is basically the following: first the base mesh is decoded, and then each layer of the compressed stream provides connectivity, geometry and color information to construct the next level of details. This mesh refinement corresponds to steps (c)→(b) and (b)→(a) of figure 3 and thus needs quite complex topological operations on the current 3D mesh, like vertex insertion, face merging, etc. These topological operations need an efficient data structure for representing the mesh and allowing fast adjacency queries. In classical compiled C++ computer graphics applications, the most widespread structure is the halfedge data structure (see figure 5), such as the one used in the CGAL library. It is an edge-centered data structure capable of maintaining incidence information of vertices, edges and faces.
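Such an edge-centered structure can be sketched in a few lines of JavaScript. This is a deliberately minimal illustration (a single triangular face, no opposite linking), whereas the Polyhedron HalfEdge.js library described below is a complete implementation:

```javascript
// Minimal halfedge sketch (illustration only). Each halfedge stores the
// vertex it points to, its incident face, and its next / opposite halfedges.
function makeHalfedge(vertex, face) {
  return { vertex, face, next: null, opposite: null };
}

// Build the three halfedges of a triangular face and link them in a
// circular list around the face.
function makeTriangle(v0, v1, v2) {
  const face = {};
  const h0 = makeHalfedge(v1, face);
  const h1 = makeHalfedge(v2, face);
  const h2 = makeHalfedge(v0, face);
  h0.next = h1; h1.next = h2; h2.next = h0;
  face.halfedge = h0;            // each face stores one incident halfedge
  return face;
}

// Adjacency query: collect the vertices bordering a face by walking the
// circular next-pointers; this costs time proportional to the face degree.
function faceVertices(face) {
  const verts = [];
  let h = face.halfedge;
  do { verts.push(h.vertex); h = h.next; } while (h !== face.halfedge);
  return verts;
}
```

Because every query follows a constant number of pointers per step, local adjacency questions are answered in constant time per incident element.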
As illustrated in figure 5, each edge is decomposed into two halfedges with opposite orientations. The halfedges that border a face form a circular linked list around it. Each halfedge stores pointers to its incident face, its incident vertex and its previous, next and opposite halfedges. Every face and vertex stores a pointer to one incident halfedge. This data structure is able to answer local adjacency queries in constant time.

Figure 5: The halfedge data structure. Reprinted from [Kettner 1999].

We have implemented the Polyhedron HalfEdge.js library, which provides a complete halfedge data structure in JavaScript. This library represents vertices, edges, faces and color attributes, and it provides access to all incidence relations of these primitives (e.g. all faces incident to a given vertex). Moreover, some complex topological operations have been implemented, like create center vertex, which adds a vertex at the barycenter of a face and connects it to its neighbors (see step (c)→(b) in figure 3), join facets, which merges two facets into a single one of higher degree, split face, split vertex, fill hole, etc. Our library relies on the Three.js library for the representation of basic geometric primitives (vertices, faces, 3D positions); Three.js is currently one of the most popular JavaScript libraries for WebGL.

4.2 Implementation of the decompression

Basically, five main components have been implemented in JavaScript to allow the 3D data streaming. A binary reader decodes the connectivity, geometry and color information from the binary P3DW file in a streamed way (i.e. level by level). For the streaming, nothing is implemented on the server side; we just make one single XMLHttpRequest and use the responseText property of the XHR to read the stream progressively. The maximum size of a level (see figure 4) is estimated using the number of already decompressed vertices.
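The level-by-level streaming logic described above (accumulate the bytes received so far and launch a refinement as soon as the estimated size of the next level is available) can be sketched independently of the XHR transport. Here `estimateLevelSize` and `onLevel` are hypothetical names introduced for this illustration, not the paper's actual API:

```javascript
// Transport-independent sketch of level-by-level streaming: chunks arrive
// (e.g. from XMLHttpRequest's responseText as it grows on progress events),
// are buffered, and a refinement is launched as soon as the estimated size
// of the next level has been received.
function makeLevelStreamer(estimateLevelSize, onLevel) {
  let buffer = [];              // accumulated bytes not yet consumed
  let level = 0;
  return {
    push(chunk) {               // called on every network progress event
      buffer = buffer.concat(chunk);
      // consume as many complete levels as the buffer allows
      let need = estimateLevelSize(level);
      while (buffer.length >= need) {
        onLevel(level, buffer.slice(0, need));  // launch the refinement
        buffer = buffer.slice(need);
        level += 1;
        need = estimateLevelSize(level);
      }
    }
  };
}
```

With this shape, the refinement of an early level can start while later levels are still in flight, which is what hides the download latency from the user.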
The corresponding refinement is launched as soon as enough data are available. In future implementations, we plan to add a header at the beginning of the compressed stream with the sizes of all levels, so as to read exactly the necessary number of bytes. A minimal implementation on the server side could also bring some interesting features; for instance, we could imagine authorizing the transmission of the first levels of details on a free basis, and authorizing the next ones after payment (as in online image libraries). The base mesh initializer constructs the base mesh (usually several dozen vertices). The LoD decompressor constructs the next level of details, starting from an already decompressed level of details and using the decoded connectivity, geometry and color information. This component performs the necessary geometrical and topological operations (steps (c)→(b) and (b)→(a) of figure 3); it is mostly based on our Polyhedron HalfEdge.js library presented above. The rendering and user interaction management are mostly based on functions from the Three.js library. Finally, an efficient multi-thread implementation enables user interaction while the levels of details are being decompressed, hence yielding an improved quality of user experience. JavaScript has the important limitation of running in one single thread. HTML5 has very recently provided a solution, the Web Workers, which make it possible to run scripts in background threads. The problem is that these Workers do not have access to the DOM (Document Object Model), hence they have to constantly communicate their data to the main
thread. Fortunately, ArrayBuffers were introduced very recently (September 2011) and allow a zero-copy transfer between threads (like a pass-by-reference). In our implementation the decompression runs as a background thread and we use this brand new ArrayBuffer technology to quickly send the decoded information to the main thread. Note that an important implementation effort was dedicated to minimizing garbage collection. Indeed, the garbage collector is particularly harmful in JavaScript applications: it may induce very visible pauses (half a second or more). This is particularly true for our refinement algorithms which, with a naive implementation, allocate and destroy a lot of elements (vertices, faces, halfedges) during the topological operations. Therefore we have optimized object recycling; no object is destroyed in our current implementation. All the features presented above are integrated into a web platform illustrated in figure 6. Once the user has chosen a remote P3DW file, the levels of details are streamed and visualized interactively, as illustrated in the accompanying video, which shows a live recording of the streaming.

Figure 6: Illustration of our web page for streaming compressed 3D data.

5 Experiments and comparisons

5.1 Compression ratio

We have conducted experiments using the objects presented in figure 1. Table 1 presents the original sizes of the 3D models (OFF ASCII file format) and the sizes of the compressed streams corresponding to a lossless compression (all levels of details). The geometry quantization precision Q was fixed between 11 bits and 13 bits according to the model, so as to obtain no perceptible difference with the uncompressed version.

Name (#vertices)   OFF      ZIP           P3DW
Neptune (250k)     19,184   6,788 (2.8)   1,509 (12.7)
Tank (160k)        12,762   2,980 (4.3)   1,390 (9.2)
Venus (100k)       7,182    2,701 (2.7)   609 (11.8)
Monkey (50k)       5,105    1,856 (2.8)   430 (11.9)
Casting (5k)                (3)           28 (11.9)

Table 1: Baseline evaluation of the compression rates: file size (kb) and associated reduction factor (in parentheses) of our compressed format (P3DW) against the standard ASCII format (OFF) and ZIP compression (ZIP), for the test models from figure 1.

We can observe that the compression ratios are very good (around 1:10), which is between 2 and 3 times better than a ZIP compression. Table 2 shows a comparison with concurrent state-of-the-art methods: Google webgl-loader [Chun 2012] (UTF8 codec) and X3DOM's image geometry approach [Behr et al. 2012] (SIG codec). For a fair comparison, we have taken two models considered by these methods and reproduced exactly their parameters: 16-bit quantization (for positions and normals) for the Bunny (as in the X3DOM approach), and 11 bits for positions and 8 bits for normals for the Happy Buddha (as in the webgl-loader approach). We usually do not consider the compression of normals in our approach, but we have included them for these comparisons. For the X3DOM SIG approach we have selected the best setting (SIG with PNG compression). We also include results from the X3Db codec provided in [Behr et al. 2012]. We can observe that our compression rate is quite similar to those of these two recent concurrent approaches. Such a compression factor will obviously shorten the transmission time and thus reduce the latency for remote visualization. However, the main feature of our compressed P3DW format is that it allows progressive decoding and therefore makes it possible to very quickly visualize a coarse version of the 3D data (then progressively refined) instead of waiting for the full object to be downloaded before it can be displayed.

5.2 Quality of the levels of details

Figure 7 illustrates the geometric error associated with the different levels of details according to the percentage of decoded vertices/facets, for the Venus model. We can easily see that the visual appearance of the decoded model becomes correct very quickly. Indeed, after decoding only 10% of the elements (this corresponds basically to decoding 10% of the P3DW file, which represents around 1% of the original ASCII file size) we already have a nice approximation of the object.

Figure 7: Maximum Root Mean Squared error vs. percentage of decoded elements for the Venus model.
Table 2: Comparison with X3Db, Google webgl-loader [Chun 2012] (UTF8 codec) and the X3DOM image geometry approach [Behr et al. 2012] (SIG PNG codec). File sizes are given in kB.

Name (#vert.)        OFF     ZIP     P3DW   X3Db  X3DOM  UTF8  GZIP UTF8
Bunny (35k)          2,...   ...     ...    ...   ...    NA    NA
Happy Buddha (540k)  41,623  10,132  4,523  ...   ...    NA    NA

5.3 Decompression time

As stated in the introduction, the decompression time is of great importance for the usability of the method. Table 3 gives the decompression times for our web platform using the Mozilla Firefox browser on a 2 GHz laptop. The data are downloaded/streamed from a remote server over a high-speed connection, so that the downloading time is negligible. The table shows that the decoding time is linear in the number of decoded elements. On average, our platform decodes between 20k and 30k vertices per second. This timing is very good for a JavaScript implementation and brings a very significant gain, in terms of quality of experience, in a remote visualization scenario. Even if the whole decompression may take several seconds for large objects, it is interesting to see that, whatever the original size of the data, we very quickly obtain several thousand vertices, hence a very good approximation. Note that these results correspond to the multi-thread implementation; the mono-thread version is around 25% faster, but the user cannot interact with the object until it is fully decompressed.

Table 3: Decompression time (in seconds, for a 2 GHz laptop) and associated number of vertices (in parentheses) according to the percentage of decoded elements, for the test models.

Name     10%         20%         50%         100%
Neptune  1.0 (25k)   1.8 (50k)   4.6 (125k)  11.2 (250k)
Tank     1.1 (16k)   1.6 (32k)   4.0 (80k)   8.8 (160k)
Venus    0.4 (10k)   0.8 (20k)   1.5 (50k)   3.2 (100k)
Monkey   0.3 (5k)    0.4 (10k)   0.9 (25k)   1.8 (50k)
Casting  0.1 (0.5k)  0.1 (1k)    0.1 (2.5k)  0.2 (5k)

5.4 Remote 3D streaming results

We have conducted experiments and comparisons in order to evaluate the gain, in terms of quality of experience, of our web 3D streaming solution compared with concurrent approaches.

5.4.1 Baseline results

In this first experiment, we considered the remote visualization of the Neptune and Venus 3D models through a good ADSL Internet access (10 Mbit/s), and compared our results with transmission in uncompressed form and transmission after ZIP compression. The latency (i.e. the time the user waits before seeing anything) in the case of transmission/visualization in uncompressed form is respectively 6.7 seconds for Venus (5.9 s for transmission and 0.8 s for loading the .obj file) and 18.5 seconds for Neptune (15.7 s for transmission and 2.8 s for loading the .obj file). If we consider the transmission of ZIP files, the latency is then 3 s and 8.3 s for Venus and Neptune respectively. In comparison, with our system, the user immediately (0.3 s) starts to see coarse versions of the models. After 0.5 s the user visualizes the Neptune and Venus models with respectively 5% and 10% of elements (around 10k vertices), which constitute already very good approximations. Figures 8 and 9 illustrate the percentage of decoded elements as a function of the time the user waits; the levels of detail corresponding to 10% of elements are illustrated. The curves corresponding to the uncompressed representation and ZIP compression are also shown. The accompanying video shows the live results for these two models.

Figure 8: Percentage of decoded elements according to the time latency, starting from the selection of the Venus model on the web page. The dotted lines represent the scenarios of transmission/visualization in uncompressed ASCII form (red) and ZIP format (green).

Figure 9: Percentage of decoded elements according to the time latency, starting from the selection of the Neptune model on the web page. The dotted lines represent the scenarios of transmission/visualization in uncompressed ASCII form (red) and ZIP format (green).

5.4.2 Comparison with concurrent state of the art

We have tested our web platform for the remote visualization of the Happy Buddha model (500k vertices) using a 5 Mbit/s Internet access (a typical 3G+ bandwidth). We have compared the results
against the Google webgl-loader (Happy Buddha available here 1) and the X3DOM image geometry approach (Happy Buddha available here 2), under the same conditions (same PC, same bandwidth, same browser). Of course the servers are not the same, since we used the servers of the owners of each solution; however, the server has little influence on the results, which mostly depend on the PC and the bandwidth. Figure 10 shows some screenshots of the visualization for these three approaches, taken respectively 800 ms, 1.5 s, 3 s and 6 s after launching the loading of the web pages. The live results are available in the accompanying video. We can easily see the benefit of our streaming approach in this low-bandwidth case. Indeed, after 800 ms, we already have a coarse but illustrative model (1.5k vertices) which is then refined progressively: 16k vertices after 1.5 s, 38k vertices after 3 s and 92k vertices after 6 s. On the other hand, for the single-rate approaches, the user has to wait around 10 s to see the whole 3D model. Our approach is thus particularly suited for medium- and low-bandwidth channels; in the case of very high speed connections (50 Mbit/s), the Google webgl-loader and X3DOM approaches are very efficient. We have not tested our platform on a mobile device; however, the management of levels of details is a very interesting feature for this kind of lightweight device. For instance, we could decide to interrupt the stream when a certain number of vertices is reached or when the frame rate drops below a threshold.

6 Conclusion

We have presented a technical solution for the remote streaming and visualization of compressed 3D content on the web. Our approach relies on a dedicated progressive compression algorithm, a new halfedge JavaScript data structure, and a fast multi-thread JavaScript implementation of the streaming and decompression into levels of details.
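The progressive refinement loop and the level-of-detail interruption policy suggested in section 5.4.2 can be sketched as follows. This is a schematic, browser-free illustration: the batch structure, function names and thresholds are our own assumptions, not the actual API of the platform (which decodes inside a Web Worker).

```javascript
// Schematic sketch: the progressive decoder is modeled as a generator that
// emits the current vertex count after each decoded refinement batch, and a
// policy decides whether refinement should continue on a lightweight device.
// All names and numbers here are illustrative, not the platform's real API.
function* progressiveDecode(batches) {
  let vertexCount = 0;
  for (const batch of batches) {
    vertexCount += batch.vertices; // decode one refinement batch
    yield vertexCount;             // expose the current level of detail
  }
}

// Stop refining once a vertex budget is reached or the measured
// frame rate drops below a threshold.
function shouldContinueRefinement(vertexCount, fps, limits) {
  return vertexCount < limits.maxVertices && fps >= limits.minFps;
}

// Usage: refine until the policy says stop (counts mirror the Happy Buddha
// refinement reported above: 1.5k, 16k, 38k and 92k vertices).
const batches = [
  { vertices: 1500 }, { vertices: 14500 }, { vertices: 22000 }, { vertices: 54000 },
];
const limits = { maxVertices: 40000, minFps: 25 };
const reached = [];
for (const count of progressiveDecode(batches)) {
  reached.push(count);
  if (!shouldContinueRefinement(count, 30 /* measured fps */, limits)) break;
}
console.log(reached); // 1500, 16000, 38000, 92000 (stops once the budget is exceeded)
```

With a 40k-vertex budget, the loop accepts the batch that crosses the budget and then stops, which is the kind of simple cutoff a mobile client could apply without changing the decoder itself.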
Our approach brings a clear gain in terms of quality of user experience by removing the latency and providing very quickly a good approximation of the 3D model, even for huge data. Our approach is one of the first attempts to implement complex geometry processing operations directly in JavaScript; hence it provides useful insights on the benefits and limitations of this scripting language. JavaScript has shown unexpectedly impressive performance in our case. Of course, the main weakness of our approach is its intensive use of the CPU. We plan to investigate parallel decoding algorithms in order to overcome this limitation.

One remaining critical issue for the practical industrial use of web 3D data streaming is the question of intellectual property protection. Indeed, during its transmission or visualization, the 3D content can be duplicated and redistributed by a pirate. This issue can be addressed with watermarking techniques, which hide secret information in the functional part of the cover content (usually the geometry in the case of 3D data). We plan to integrate such a watermarking algorithm in the next version of our web platform; however, this algorithm has to be embedded within the compression, which constitutes a quite complex issue.

Acknowledgment

We thank the anonymous reviewers for helping us to improve this paper. This work is supported by Lyon Science Transfert through the project Web 3D Streaming.

References

ALLIEZ, P., AND DESBRUN, M. 2001. Progressive encoding for lossless transmission of 3D meshes. In ACM Siggraph.

BEHR, J., JUNG, Y., FRANKE, T., AND STURM, T. 2012. Using images and explicit binary container for efficient and incremental delivery of declarative 3D scenes on the web. In ACM Web3D.

BLUME, A., CHUN, W., KOGAN, D., KOKKEVIS, V., WEBER, N., PETTERSON, R. W., AND ZEIGER, R. 2011. Google Body: 3D human anatomy in the browser. In ACM Siggraph Talks.
CHEN, B., AND NISHITA, T. Multiresolution streaming mesh with shape preserving and QoS-like controlling. In ACM Web3D.

CHUN, W. 2012. WebGL Models: End-to-End. In OpenGL Insights, P. Cozzi and C. Riccio, Eds. CRC Press.

DI BENEDETTO, M., PONCHIO, F., GANOVELLI, F., AND SCOPIGNO, R. 2010. SpiderGL: a JavaScript 3D graphics library for next-generation WWW. In ACM Web3D.

GANDOIN, P.-M., AND DEVILLERS, O. 2002. Progressive lossless compression of arbitrary simplicial complexes. In ACM Siggraph.

GOBBETTI, E., MARTON, F., RODRIGUEZ, M. B., GANOVELLI, F., AND DI BENEDETTO, M. 2012. Adaptive quad patches. In ACM Web3D.

HOPPE, H. 1996. Progressive meshes. In ACM Siggraph.

ISENBURG, M., AND SNOEYINK, J. 2003. Binary compression rates for ASCII formats. In ACM Web3D.

JOMIER, J., JOURDAIN, S., AND MARION, C. 2011. Remote Visualization of Large Datasets with MIDAS and ParaViewWeb. In ACM Web3D.

JOURDAIN, S., FOREST, J., MOUTON, C., NOUAILHAS, B., MONIOT, G., KOLB, F., CHABRIDON, S., SIMATIC, M., ABID, Z., AND MALLET, L. 2008. ShareX3D, a scientific collaborative 3D viewer over HTTP. In ACM Web3D.

KETTNER, L. 1999. Using generic programming for designing a data structure for polyhedral surfaces. Computational Geometry 13.

KHRONOS. WebGL - OpenGL ES 2.0 for the Web.

LAVOUÉ, G., TOLA, M., AND DUPONT, F. 2012. MEPP - 3D Mesh Processing Platform. In International Conference on Computer Graphics Theory and Applications.

LEE, H., LAVOUÉ, G., AND DUPONT, F. 2012. Rate-distortion optimization for progressive compression of 3D mesh with color attributes. The Visual Computer 28, 2 (May).

MAGLO, A., LEE, H., LAVOUÉ, G., MOUTON, C., HUDELOT, C., AND DUPONT, F. 2010. Remote scientific visualization of progressive 3D meshes with X3D. In ACM Web3D.

MARVIE, J.-E., GAUTRON, P., LECOCQ, P., MOCQUARD, O., AND GÉRARD, F. Streaming and Synchronization of Multi-User Worlds Through HTTP/1.1. In ACM Web3D.

MOUTON, C., SONS, K., AND GRIMSTEAD, I. 2011. Collaborative visualization: current systems and future trends. In ACM Web3D.
NIEBLING, F., AND KOPECKI, A. Collaborative steering and post-processing of simulations on HPC resources: Everyone, anytime, anywhere. In ACM Web3D.

PAJAROLA, R., AND ROSSIGNAC, J. 2000. Compressed progressive meshes. IEEE Transactions on Visualization and Computer Graphics 6, 1.

PENG, J., AND KUO, C.-C. J. 2005. Geometry-guided progressive lossless 3D mesh coding with octree (OT) decomposition. ACM Transactions on Graphics 24, 3.

PENG, J., KIM, C.-S., AND KUO, C.-C. J. 2005. Technologies for 3D mesh compression: A survey. Journal of Visual Communication and Image Representation 16, 6.

PENG, J., KUO, Y., ECKSTEIN, I., AND GOPI, M. 2010. Feature Oriented Progressive Lossless Mesh Coding. Computer Graphics Forum 29, 7.

TAUBIN, G., AND ROSSIGNAC, J. 1998. Geometric compression through topological surgery. ACM Transactions on Graphics 17, 2.

TIAN, D., AND ALREGIB, G. 2008. BaTex3: Bit allocation for progressive transmission of textured 3-D models. IEEE Transactions on Circuits and Systems for Video Technology 18, 1.

VALETTE, S., CHAINE, R., AND PROST, R. 2009. Progressive lossless mesh compression via incremental parametric refinement. Computer Graphics Forum 28, 5 (July).

Figure 10: Some screenshots illustrating the remote visualization of the Happy Buddha (1 million triangles) on a 5 Mbit/s Internet access, using our approach (left column), webgl-loader [Chun 2012] (middle column) and X3DOM's image geometry [Behr et al. 2012] (right column). From top to bottom, screenshots are taken respectively at 800 ms, 1.5 s, 3 s and 6 s after loading the web page.
SCALABILITY OF CONTEXTUAL GENERALIZATION PROCESSING USING PARTITIONING AND PARALLELIZATION Abstract Marc-Olivier Briat, Jean-Luc Monnot, Edith M. Punt Esri, Redlands, California, USA [email protected], [email protected],
Cost Effective Deployment of VoIP Recording
Cost Effective Deployment of VoIP Recording Purpose This white paper discusses and explains recording of Voice over IP (VoIP) telephony traffic. How can a company deploy VoIP recording with ease and at
Load testing with. WAPT Cloud. Quick Start Guide
Load testing with WAPT Cloud Quick Start Guide This document describes step by step how to create a simple typical test for a web application, execute it and interpret the results. 2007-2015 SoftLogica
