White paper sponsored by: Next-Generation Data Center Architecture for Advanced Compute and Visualization in Upstream Oil and Gas


TABLE OF CONTENTS

1 EXECUTIVE SUMMARY
2 INTRODUCTION
3 TRADITIONAL APPROACH TO INFRASTRUCTURE: CHALLENGES AND OPPORTUNITIES
4 BRINGING GPUS INSIDE THE DATA CENTER
5 REMOTE VISUALIZATION
6 INTEGRATION OF VIRTUALIZATION AND CLOUD TECHNOLOGIES
7 CONCLUSION
8 REFERENCES

September 2012

1 EXECUTIVE SUMMARY

Computing infrastructure for upstream oil and gas has evolved within siloed IT environments, shaped by unique requirements and by boundaries between groups that arose from factors such as a lack of integration between scientific disciplines and immature software. Visualization has typically been separate from data center environments, with visualization systems located close to the end users who need them.

Today, upstream workflows are becoming more integrated, revealing weaknesses in the traditional model. For instance, seismic processing and interpretation are no longer separate steps; they are performed in a loop. Interpreters visualize intermediate results to direct seismic processing jobs. Where a project might have consisted of two or three processing jobs in the past, it now may require 20 to 30 jobs. At the same time, graphics processing unit (GPU) computing is increasingly important for the acceleration of various upstream computational kernels. Recent developments, including the release of the NVIDIA Kepler GPU architecture and the NVIDIA VGX platform for remote visualization and GPU virtualization, will further accelerate this trend. As a result, visualization and computing infrastructure (GPUs and CPUs) need to be closer to each other than in the past. Data growth, which in some cases is exponential, is making it almost essential for visualization, compute, and data resources to be colocated.

Bringing GPUs inside the data center.
The adoption of GPU computing, in conjunction with the emergence of flexible server and storage virtualization and rapidly maturing cloud computing technologies, makes a strong case for reconsidering the infrastructure supporting high-performance computing and visualization: an integrated, next-generation data center design in which GPUs are brought inside the upstream oil and gas data center. Colocating GPUs, central processing units (CPUs), and data avoids many of the data bandwidth, data management, and other challenges associated with dispersed visualization systems, while allowing computing resources to be fully used at all times for a combination of visual applications and high-performance computing (HPC) applications. The ability to virtualize GPU resources, analogous to the way server virtualization solutions virtualize CPU resources, will make it possible to provide a flexible pool of GPUs that are available for both visualization and computation needs. With these new capabilities, virtualization can be extended to all types of applications.

Remote visualization. For oil and gas companies, mobility is much more than a nice-to-have capability; it is the nature of the business. Remote, interactive 3D visualization technologies have been pursued for many years and today are finally able to deliver a solid combination of high resolution, high performance, and low bandwidth requirements to support the geographically dispersed nature of oil and gas. Advances, including the ability to orchestrate and manage the use of hybrid CPU/GPU clusters and GPU integration into hypervisors, will allow a wider range of end users to take advantage of real-time visualization using less specialized displays, supporting geographically dispersed collaboration and speeding up decision making while allowing datasets to grow and remain centralized.
Remote visualization will not immediately replace all high-end desktop systems, but it will complement them, enabling remote workers to have full access to the same IT resources they can access from inside the company. It will also enable specialized cloud providers to further augment a company's own IT infrastructure.

Integrating the upstream oil and gas data center with the cloud. Virtualization and cloud technology will make it possible to integrate and dynamically share all the compute, GPU, networking, and storage resources necessary for computation, visualization, and interpretation as part of a next-generation upstream oil and gas data center. These new data centers will deliver greater flexibility, efficiency, and economies of scale while improving computational performance, optimizing data management, and facilitating the use of critical visualization capabilities by experts and decision makers. Ultimately, this approach will facilitate a seamless connection between local and remote facilities, allowing an oil and gas data center to use remote IT resources, either at secondary locations or in public clouds, as necessary to satisfy compute and visualization needs. This new IT infrastructure will also lead to further changes in workflows and practices for better collaboration, faster deployment of new joint venture projects, and provisioning of advanced IT capabilities to remote operations.

This white paper explores these emerging trends in detail, incorporating the latest ideas from industry leaders such as Paradigm, NVIDIA, NetApp, Cisco, and NICE.

2 INTRODUCTION

The rapid growth in dataset sizes and the integration of large numbers of diverse, multidisciplinary data types to more accurately understand the earth's subsurface have contributed to the continued evolution of seismic processing, reservoir simulation, interpretation, visualization, digital oil field, and petroleum economics technologies. Emerging trends in the IT industry are creating an opportunity to rethink traditional approaches to upstream oil and gas data centers to better manage the data explosion, facilitate global collaboration, and speed interpretation by using cloud computing technologies. Through in-depth discussions with experts from both the computer and oil and gas industries, the authors of this paper sought to understand:

- Limitations of the current infrastructure
- Pressures that are driving change in upstream data centers
- Potential impact of IT trends, including GPU-based processing, advances in remote visualization, and the rapid development of virtualization and cloud technologies

This white paper discusses these issues, details the current state of the art as practiced by several industry leaders, and provides insight into the architecture of next-generation oil and gas data centers.

3 TRADITIONAL APPROACH TO INFRASTRUCTURE: CHALLENGES AND OPPORTUNITIES

The computing and data storage infrastructure used in upstream oil and gas data centers has typically been centered around discrete, siloed architectures designed to meet the particular needs of seismic processing, reservoir simulation, and interpretation.
Advanced visualization needs have been addressed by high-powered graphics workstations or other visualization systems located outside the data center. As the scale and complexity of oil and gas projects continue to grow, the limitations are increasingly obvious. Specialized silos of IT infrastructure lack flexibility. Expensive resources (cores, network bandwidth, storage capacity, and I/O) may sit idle in one silo while another environment is hamstrung and suffers delays. Provisioning each environment separately to meet peak demand is both expensive and inefficient.

With dataset sizes growing exponentially, feeding data to visualization workstations outside the data center is increasingly difficult. Network connections lack the necessary bandwidth for real-time operations, and copying data to local storage is prohibitive in terms of both the time needed for the copy and the additional storage required. Having multiple copies of valuable datasets (especially copies outside the data center) also creates significant security concerns and data management and governance issues. Changes in the way that upstream work is performed have created a need for visualization capabilities to facilitate quick visual inspection of results during computation and to meet the needs of a distributed workforce that can include collaborators (who may also be competitors) and people working at home and in the field. At the same time, new hybrid computing systems that perform computation and visualization by using CPU and GPU resources together are becoming mainstream, creating a need for GPU resources across numerous application environments.

Future upstream data center architectures will address these challenges by using a combination of approaches:

- GPUs will move into the upstream data center, providing computation and visualization capabilities.
- Remote visualization technology will change the nature of visualization workstations. High-resolution displays will still be required, but the need for CPU and GPU resources on the workstation will be reduced.
- Workstations will be augmented and integrated with other devices such as tablets. Handheld devices will access centralized server applications via applets.
- Virtualization and cloud technology will allow upstream data centers to create private clouds in which resources can be dynamically allocated.
- Upstream data centers will supplement dedicated resources with compute resources in the public cloud.

These initiatives are already under way. Leading-edge data centers are employing some or all of these technologies and are poised to increase adoption as the necessary technologies mature.

4 BRINGING GPUS INSIDE THE DATA CENTER

The next-generation oil and gas data center will harness GPUs for two purposes: computing and remote visualization. This section focuses on computing uses; section 5 discusses visualization. By 2003, computer scientists and researchers began to recognize that the parallel processing capabilities of the GPU could be leveraged to accelerate computation for a wider range of problems. This approach is known as GPU computing or general-purpose computing on GPU (GPGPU). In many cases, the speedup observed can be 10-fold or greater with respect to specific computational kernels. (Pre- and postprocessing around these kernels continues to be done by CPUs.) GPUs have an inherently parallel design, with higher data bandwidth and hundreds of small cores capable of executing concurrent threads. These cores can be used to process arrays of data elements, such as those in a seismic survey, in the same way that arrays of pixels are manipulated. GPU computing can be used to accelerate almost any application that can be parallelized. In 2007, NVIDIA introduced the Compute Unified Device Architecture (CUDA) to simplify programming parallel applications on its GPUs.
CUDA provides C, C++, and Fortran extensions that eliminate the need to program GPUs by using unfamiliar graphics processing languages. This key development in the evolution of GPU computing is making much wider adoption possible, and GPU computing is now used in a variety of industries, including oil and gas, where NVIDIA is the dominant GPU vendor. In 2011, NVIDIA announced the OpenACC initiative, which enables compiler vendors such as PGI, CAPS, and Cray to use compiler directives to automate the GPU parallelization of applications, extending the applicability of GPU computing to existing applications. Even applications such as MATLAB and Microsoft Excel now have GPU extensions.

In 2012, NVIDIA announced the NVIDIA Kepler computing architecture, optimized to address the GPU computing needs of seismic processing algorithms such as reverse time migration, full waveform inversion, and Kirchhoff time/depth migration, and to enable the use of GPU computing for reservoir simulation. New capabilities like Dynamic Parallelism allow threads executing on a GPU to dynamically spawn new threads without CPU involvement, while Hyper-Q allows multiple CPU cores to simultaneously use the cores on a single Kepler GPU. Early adopters report that seismic applications run up to 1.8 times faster on Kepler than on the previous-generation NVIDIA GPU with the same power consumption. In addition to raw performance and energy efficiency, Kepler is also designed for virtualization and low-latency remote display. A number of papers that discuss the use of GPU computing in seismic processing and interpretation are included in the reference section of this paper.

GPU Computing in Oil and Gas. In 2011, there were 14 new GPU supercomputers on the Top500 list for oil and gas alone. For example, WesternGeco, Petrobras, Hess, Total, and Chevron are using GPUs for seismic processing. WesternGeco recently disclosed that it has more than 15,000 NVIDIA GPUs in production in its data centers. It has taken some time for oil and gas companies to fully invest in GPU technology because of GPU memory limitations and barriers created by programming difficulties. The OpenACC specification for directive-based programming of GPUs helps resolve the programmability issue and further promotes hybrid compute environments that contain both CPUs and GPUs.
NVIDIA and other GPU providers are moving toward colocation of GPUs and CPUs on a processor board, taking full advantage of evolving PCI technology and exploiting data streaming techniques. In the future, CPUs and GPUs are likely to be built into a single chip. GPUs are also being used in reservoir simulation and modeling, as well as in petroleum economics and energy trading. The significant power reduction per GFLOP (a factor of 10) for GPUs versus CPUs makes them attractive to deploy for these applications.

GPU Computing: An ISV Perspective. Industry trends, such as the increased use of prestack seismic data analysis in interpretation workflows and the larger dataset sizes associated with modern seismic acquisition programs, continue to create a compute-intensive environment in which GPU-based computing can be applied. In the HPC arena, seismic processing and imaging algorithms such as full wave migration, reverse time migration, and surface multiple elimination are very compute intensive and can benefit from the additional compute capabilities provided by GPU computing. The appearance in the marketplace of compute servers specifically built to accommodate GPUs for HPC was clear evidence of demand for this type of compute capability and has helped to create a marketplace for HPC software that is GPU compute capable. For desktop applications, adoption of GPU computing has focused on improving performance for compute-intensive tasks to improve interactivity. For example, in the SKUA subsurface modeling program, GPU-based computing has been used to speed up the matrix inversion used in the innovative UVT Transform. GPU-based computing has also been used to improve the quality of the volume rendering display in VoxelGeo, helping users refine their understanding of the variations in the seismic images of the subsurface.
These focused implementations of GPU computing have an immediate impact on users' ability to locate or rank prospects, and they help to improve productivity while working under tight deadlines. The availability of professional-series graphics cards in many E&P desktop and blade workstations facilitates the use of OpenGL-based 3D graphics but requires branching code in order to support GPU-based computing in E&P desktop applications. Any investment in branching code must consider the value of that effort to the work being performed by the end user of the software. The availability and maturity of the software development tools and libraries that support accelerator-based computing are also important factors in deciding when and how to commit to a particular implementation.

5 REMOTE VISUALIZATION

The second advantage of bringing GPUs inside the data center is the ability to use remote visualization technologies to supplement or replace the capabilities of specialized visualization hardware used for high-end 3D and 4D processing, delivering graphical output to remote clients. For oil and gas, mobility is the nature of the business. Remote, interactive 3D visualization technologies for high-end interpretation have been pursued for many years and today are maturing to deliver an excellent level of performance, even for high-end 3D visualization tasks. Remote visualization will improve the use of visualization in oil and gas exploration and production in a number of ways:

- Make visualization available when and where it's needed
- Improve collaboration
- Facilitate workflows that cross organizational boundaries
- Overcome problems created by dataset size and data sensitivity

By centralizing GPU resources in the upstream data center and using remote visualization capabilities, the results of visualization can be made available where they're needed, whether that's near the data center, at a remote office, or in the field.
Collaborators in different locations view and manipulate the same images, eliminating potential points of confusion and miscommunication while saving valuable time. Data stays in the data center, where it's protected, without the need for time-consuming copies. In cases where a dataset cannot legally leave the country, remote visualization facilitates both analysis and collaboration that might not be possible otherwise. It's important to understand that remote visualization doesn't always mean that long distances are involved. The visualization application is remote from the user, but in the majority of cases the user may be relatively close to the data center, connected via an intranet with good bandwidth and low latency. Quick collaboration no longer requires multiple users to gather around the same workstation. Intermediate analysis, or computational steering, in which a user visualizes data during processing to guide the result, is also accelerated.

Figure 1) Next-generation data center with shared infrastructure to meet the needs of multiple types of users. (Monitor images courtesy of Paradigm)

Remote Visualization Quality. The quality of remote visualization comes down to three factors: resolution (pixels), bandwidth, and latency. A screen resolution of 1920x1200 typically requires from 8 to 32 Mbps of bandwidth. Latency of less than 30 milliseconds typically results in no compromise in user experience, and less than 80 milliseconds yields a good experience. Because low latency is essential to the quality of a remote visualization experience, NVIDIA has made a significant effort to squeeze the latency out of its graphics products. The recently announced NVIDIA Kepler architecture and VGX platform deliver fully compressed frames from an NVIDIA Kepler GPU to an IP stack faster than to a local monitor, ensuring that total latency is affected only by network conditions. Because the technology works at the driver level, visualization applications are able to take advantage of it with no modifications. An ISV can test an application locally and be guaranteed that it will work remotely (assuming that bandwidth and latency minimums are met).

Table 1) Remote visualization software that supports NVIDIA technology.

    Vendor          Remote Visualization Software
    NICE            Desktop Cloud Visualization (DCV)
    HP              RGS
    Citrix          XenDesktop HDX 3D Pro
    ThinAnywhere    ThinAnywhere (TAw)

Because latency is directly related to distance, regional hubs capable of providing remote visualization services with good latency to workers in each region may be necessary in larger oil and gas companies. Visualization in the oil and gas field poses some unique quality requirements. Both color and crispness of edges are important, and images must be free of artifacts. NICE Desktop Cloud Visualization (DCV), for example, has been optimized to meet these requirements. DCV offers dynamic quality adjustment. During mouse movements, compression rates increase to provide smoother motion; when movement stops, DCV can deliver high-quality or lossless still images for maximum resolution without artifacts.

Figure 2) VGX Remote Display reduces latency by bypassing much of the typical graphics stack.

Remote Visualization at Eni. Eni is an integrated energy company that recognizes the value of remote visualization and has created a strategy to consolidate both batch and interactive applications on a centralized infrastructure. The Eni technical cloud has a hub in Houston that focuses on visualization, while the main data center in Milan handles the bulk of European visualization and HPC worldwide. Eni has been using its remote visualization approach since 2008 and has seen major advantages, including significant cost savings resulting from the elimination of expensive graphics workstations in favor of lower-cost desktops serving as thin clients. The typical desktop is now a PC with dual monitors and no specialized software installed locally, significantly reducing the amount of desktop IT support needed. Overall efficiency in terms of hardware and software utilization has gone up. Remote visualization facilitates work by virtual teams and makes it easier to bring the most appropriate skill sets to bear on complex problems.

All user sessions begin with the user connecting to a portal created by using NICE EnginFrame. This portal lets users see available datasets and select the application services they want. Individual interactive services are accessed via multiple remote desktop protocols, with NICE DCV handling the majority of technical 3D sessions. When a session is requested, the scheduler identifies the appropriate resources based on the type of job (batch or interactive), the particular application, and the user's specific profile. With EnginFrame, users are able to see and manage their active sessions, including the ability to easily share a session for collaboration. Another advantage is that the user can be immediately productive from anywhere the portal can be accessed.
Rather than maintain a homogeneous cluster of servers with the same CPU, GPU, and memory resources, Eni maintains pools of various server configurations with differing capabilities, tuned to meet the requirements of a wide variety of oil and gas applications.

Remote Visualization: An ISV Perspective. The availability of remote visualization solutions that can effectively support the complex OpenGL-based 3D graphics displays used extensively in E&P software is a critical element of the next-generation data center. To be accepted by end users as a replacement for a local workstation, a data center based solution must meet their usability requirements. It must support the display of complex 3D graphics without loss of fidelity, it must support a variety of monitor configurations, including multiple 30-inch 2560x1600 monitors, and it must do so without negatively impacting the user's interactivity with the software. The efforts to reduce latency in remote visualization offerings, which build on demand from many industries, will be extremely beneficial to the E&P user community.

Some of the display techniques used for visualizing seismic data, such as opacity-based rendering, are quite demanding, and not only in terms of the 3D graphics operations themselves. They also result in extensive changes to the images being displayed, which in turn may stress remote visualization software where image compression/decompression and encryption/decryption are used. The network to the thin clients in offices must be able to support the traffic associated with the remote visualization solution.

There are a number of benefits associated with moving E&P workstations back into the data center, such as a reduction in the cooling and power requirements in offices, the increased data I/O rates available in the computer center for loading data into systems, and improved facilities management, such as reducing the time required to set up new users or to move users into new offices. Standard PC users who use 2D graphics have already benefited from data center consolidation and virtualization, which is enabling a reduction in the ratio of PCs to users. The forthcoming virtualization of GPUs will offer similar benefits of resource pooling for the E&P 3D graphics user community. Many remote visualization solutions offer additional capabilities, such as an improved collaboration environment. Multidisciplinary professionals can share the same visualization, leading to demand for software that integrates the display of diverse data types, including data sampled at different scales: seismic scale, core scale, borehole scale, and microseismic scale.

6 INTEGRATION OF VIRTUALIZATION AND CLOUD TECHNOLOGIES

The Eni example discussed in the previous section can be considered a private cloud deployment.
Users in this environment don't need to concern themselves with the underlying hardware; resources are identified and assigned to each job automatically, based on requirements. The infrastructure is designed to meet particular needs while increasing efficiency and providing greater flexibility than traditional approaches. Although the technologies used to create this private cloud are far from those used in mainstream IT, over time more mainstream technologies will converge with the needs of oil and gas to become a part of the upstream data center. Many upstream data centers have already moved to incorporate standard server virtualization technologies. For example, some applications rely on a set of services that provide security and other functions; those services need to be available for applications to run. Many users have found that virtualizing those services improves uptime and allows them to be migrated without downtime. Virtualization of core visualization applications, however, has been hampered by the inability to virtualize GPU resources.

Figure 3) GPU virtualization in NVIDIA VGX.

GPU Virtualization. The Eni scenario works at the granularity of physical servers or server blades (and their associated GPU resources). With servers capable of supporting up to 16 GPUs already in existence, the ability to virtualize and share GPU resources in a manner analogous to CPU resources is becoming increasingly desirable, both for applications that use modest GPU computing resources (like reservoir simulation codes) and for remote visualization. GPU computing applications such as seismic processing that require hundreds or thousands of GPUs will continue to be best served by physical hardware. VMware, a server virtualization leader, has demonstrated GPU computing at 98% of native performance when using a single GPU from within a virtual machine (VM) with both CUDA and OpenCL.
VMware and others plan to incorporate the NVIDIA VGX platform into their products to enable virtual desktop infrastructure to support remote 3D visualization. The NVIDIA VGX platform is designed to enable GPU virtualization by using three key technologies:

- VGX boards provide multiple Kepler GPUs in a standard server PCI Express slot.
- The VGX hypervisor integrates into commercial hypervisors to enable GPU virtualization.
- User-selectable machines allow graphics capabilities to be preconfigured to support the needs of different types of users.

Citrix XenDesktop will support VGX capabilities as they become available. XenDesktop HDX 3D Pro supports VGX performance acceleration now and is ready to support virtualization capabilities as they are released. This will allow users who require occasional access to graphics-intensive applications to access them from any device. NICE DCV today provides GPU sharing for both Windows and Linux applications, and it can even allow the use of GPU appliances (network-attached GPU servers) that enable VMs to accelerate OpenGL applications over a standard gigabit network.

Evolving Next-Generation Private Cloud Infrastructure. NVIDIA and Cisco have partnered to architect a next-generation infrastructure capable of supporting both physical and virtualized hardware resources. The work they are doing will leverage the FlexPod data center platform developed by Cisco and NetApp to help customers accelerate the deployment of shared infrastructure and the cloud. FlexPod combines NetApp storage with Cisco network and compute capabilities into a converged infrastructure that accelerates and simplifies infrastructure deployment and management. The idea is to create an easy-to-deploy infrastructure that is capable of flexibly supporting a range of geological and geophysical applications, including those from Paradigm, Schlumberger, and Halliburton. Users will be able to request a session of a particular type, and the appropriate resources, either physical or virtual, will be provisioned automatically. A single rack system will include:

- Cisco UCS C-Series rack mount servers
- NextIO PCI Express enclosure
- NVIDIA VGX boards (4 GPUs per board)
- Cisco Nexus and Cisco UCS infrastructure (full 10 Gigabit Ethernet fabric)
- NetApp FAS storage

Cisco C-Series servers connect via host interface cards to GPUs in external PCI Express enclosures. The optimal number of GPUs per server is still to be determined; available hardware currently supports up to 16 GPUs per server. The network infrastructure supports a full 10 Gigabit Ethernet internal fabric for server-to-storage and server-to-server communication. Remote direct memory access provides the low-latency connectivity required between clustered servers. External connections to clients can use either Gigabit or 10 Gigabit Ethernet.

Figure 4) Validated solutions reduce risk and expense and speed deployment while providing the flexibility to adapt to changing needs.

NetApp FAS storage stores user datasets and provides SAN and NAS access, depending on application requirements.
NetApp delivers unified storage performance along with proven storage efficiency technologies, including deduplication and compression, and integrated data protection to accelerate both backup and recovery in oil and gas application environments. Cisco Intelligent Automation for Cloud will provide orchestration for the infrastructure. The software will automatically provision both clusters of bare-metal servers (with attached GPU resources) for HPC and multiple virtual machines, each with a single GPU, for interactive sessions via remote visualization. This will allow the solution to provide, for example, bare-metal server clusters running Linux for reservoir simulation and VMs with GPU resources running Windows to support interactive seismic interpretation sessions with remote visualization. At night, when interactive use declines or stops, the associated resources can be dynamically reprovisioned for GPU computing jobs, making very efficient use of all the available resources.

Transitioning to Cloud. Some companies are skeptical of cloud technology because of concerns that the resources to accomplish a particular task won't be available when needed. Although virtualization and cloud technologies allow you to more flexibly allocate and reallocate resources as needs change, the changes that you make on a day-to-day basis don't need to be extensive. You can allocate resources for important tasks for as long as necessary, even more or less permanently. The difference is that the building blocks you use will be more standardized, making procurement and deployment easier, and you'll be able to make better use of resources. Rather than having unused resources allocated in each application silo, a single pool of resources can meet unexpected growth or peaks in demand in different areas.

Where Does Public Cloud Make Sense? Although most oil and gas data centers focus on the potential benefits of private cloud, public cloud options may make sense for a variety of scenarios.
A number of public cloud providers already offer GPU and other HPC resources, including Amazon EC2, Nimbix, Peer1, and Penguin Computing. The availability of both GPU and CPU resources in the cloud opens the possibility of using public cloud resources as an adjunct to or instead of internal infrastructure. The biggest limitation in this regard is data. Many companies would have considerable concern about uploading critical data to a public cloud, and bandwidth limitations would probably make it impossible to use public cloud resources with data stored internally. Also, simply getting data into the public cloud could be prohibitively time consuming. Companies that offer HPC cloud services recognize these concerns and do as much as possible to mitigate them, offering services such as private clouds within their infrastructure that are walled off from other users, as well as high-speed networking infrastructure for moving data. Not all upstream projects are massive in size, and some projects need to be done in geographic locations where size, local skills,

latency, and connectivity present serious obstacles. In such cases, using public cloud resources (if available) may be the best option. Public cloud also may make business sense for quick-hit projects: small, shared projects that last 30 to 60 days, especially those that involve outside players.

7 CONCLUSION

The incorporation of GPUs in the oil and gas data center must address three scenarios:

- Compute clusters with tens to hundreds or thousands of GPUs
- More modest GPU computing with 2 to 10 GPUs
- Remote visualization with 1 GPU per visualization session

In the near term, upstream data centers will be able to begin moving away from silos of infrastructure by deploying standardized, multi-GPU servers suitable for both computation and visualization and incorporating them into daily operations by using technologies similar to those used by Eni or under development by technology leaders such as NVIDIA, NICE, Cisco, and NetApp. In the longer term, mainstream virtualization technologies will better incorporate GPUs, resulting in more options to address GPU computing and remote visualization scenarios and to achieve increased automation via private cloud. Converged infrastructure solutions such as the FlexPod data center platform will incorporate GPU resources to address high-end 3D remote visualization and more modest GPU computing requirements. Such solutions integrate necessary infrastructure elements and provide uniform building blocks to simplify and accelerate infrastructure deployment.

8 REFERENCES

HPC in Oil and Gas Exploration. Anthony Lichnewsky. Presentation at PRACE 2011 Industry Workshop: /pdf/schlumberger_alichnewsky_is_2011.pdf

Leveraging Graphics Processing Units (GPUs) for Real-time Seismic Interpretation. Benjamin J. Kadlek and Geoffrey A. Dorn. The Leading Edge, January 2010.

Maximizing Throughput for High Performance TTI-RTM: From CPU-RTM to GPU-RTM. Xinyi Sun and Sang Suh. SEG San Antonio 2011 Annual Meeting.

Expanding Domain Methods in GPU Based TTI Reverse Migration. Sang Suh and Bin Wang. SEG San Antonio 2011 Annual Meeting.

Giga-Cell Simulation. Dr. Ali H. Dogru. Saudi Aramco Journal of Technology, Spring 2011: Spring2011/GigaCell.pdf

Energy efficient resource allocation strategy for cloud data centres. Domenico Sannelli, Federico Mezza, Eni. International Symposium on Computer and Information Sciences, September 2011.

Santos sponsors Open Source software for better reservoir visualization. Featured article in Finding Petroleum, March 2011: Source_software_for_better_reservoir_visualization/69e8c2de.aspx


More information

Desktop virtualization for all

Desktop virtualization for all Desktop virtualization for all 2 Desktop virtualization for all Today s organizations encompass a diverse range of users, from road warriors using laptops and mobile devices as well as power users working

More information

HPC Wales Skills Academy Course Catalogue 2015

HPC Wales Skills Academy Course Catalogue 2015 HPC Wales Skills Academy Course Catalogue 2015 Overview The HPC Wales Skills Academy provides a variety of courses and workshops aimed at building skills in High Performance Computing (HPC). Our courses

More information

Cisco Application Networking for Citrix Presentation Server

Cisco Application Networking for Citrix Presentation Server Cisco Application Networking for Citrix Presentation Server Faster Site Navigation, Less Bandwidth and Server Processing, and Greater Availability for Global Deployments What You Will Learn To address

More information

Windows Server 2008 R2 Hyper-V Live Migration

Windows Server 2008 R2 Hyper-V Live Migration Windows Server 2008 R2 Hyper-V Live Migration White Paper Published: August 09 This is a preliminary document and may be changed substantially prior to final commercial release of the software described

More information

Desktop Virtualization. The back-end

Desktop Virtualization. The back-end Desktop Virtualization The back-end Will desktop virtualization really fit every user? Cost? Scalability? User Experience? Beyond VDI with FlexCast Mobile users Guest workers Office workers Remote workers

More information

Accelerating Simulation & Analysis with Hybrid GPU Parallelization and Cloud Computing

Accelerating Simulation & Analysis with Hybrid GPU Parallelization and Cloud Computing Accelerating Simulation & Analysis with Hybrid GPU Parallelization and Cloud Computing Innovation Intelligence Devin Jensen August 2012 Altair Knows HPC Altair is the only company that: makes HPC tools

More information

IBM Global Technology Services September 2007. NAS systems scale out to meet growing storage demand.

IBM Global Technology Services September 2007. NAS systems scale out to meet growing storage demand. IBM Global Technology Services September 2007 NAS systems scale out to meet Page 2 Contents 2 Introduction 2 Understanding the traditional NAS role 3 Gaining NAS benefits 4 NAS shortcomings in enterprise

More information

Software-Defined Networks Powered by VellOS

Software-Defined Networks Powered by VellOS WHITE PAPER Software-Defined Networks Powered by VellOS Agile, Flexible Networking for Distributed Applications Vello s SDN enables a low-latency, programmable solution resulting in a faster and more flexible

More information

Virtual Desktop Infrastructure Planning Overview

Virtual Desktop Infrastructure Planning Overview WHITEPAPER Virtual Desktop Infrastructure Planning Overview Contents What is Virtual Desktop Infrastructure?...2 Physical Corporate PCs. Where s the Beef?...3 The Benefits of VDI...4 Planning for VDI...5

More information

TOP FIVE REASONS WHY CUSTOMERS USE EMC AND VMWARE TO VIRTUALIZE ORACLE ENVIRONMENTS

TOP FIVE REASONS WHY CUSTOMERS USE EMC AND VMWARE TO VIRTUALIZE ORACLE ENVIRONMENTS TOP FIVE REASONS WHY CUSTOMERS USE EMC AND VMWARE TO VIRTUALIZE ORACLE ENVIRONMENTS Leverage EMC and VMware To Improve The Return On Your Oracle Investment ESSENTIALS Better Performance At Lower Cost Run

More information

The Road to Convergence

The Road to Convergence A UBM TECHWEB WHITE PAPER SEPTEMBER 2012 The Road to Convergence Six keys to getting there with the most confidence and the least risk. Brought to you by The Road to Convergence Six keys to getting there

More information

Networking for Caribbean Development

Networking for Caribbean Development Networking for Caribbean Development BELIZE NOV 2 NOV 6, 2015 w w w. c a r i b n o g. o r g Virtualization: Architectural Considerations and Implementation Options Virtualization Virtualization is the

More information

Using GPUs in the Cloud for Scalable HPC in Engineering and Manufacturing March 26, 2014

Using GPUs in the Cloud for Scalable HPC in Engineering and Manufacturing March 26, 2014 Using GPUs in the Cloud for Scalable HPC in Engineering and Manufacturing March 26, 2014 David Pellerin, Business Development Principal Amazon Web Services David Hinz, Director Cloud and HPC Solutions

More information

Introduction to grid technologies, parallel and cloud computing. Alaa Osama Allam Saida Saad Mohamed Mohamed Ibrahim Gaber

Introduction to grid technologies, parallel and cloud computing. Alaa Osama Allam Saida Saad Mohamed Mohamed Ibrahim Gaber Introduction to grid technologies, parallel and cloud computing Alaa Osama Allam Saida Saad Mohamed Mohamed Ibrahim Gaber OUTLINES Grid Computing Parallel programming technologies (MPI- Open MP-Cuda )

More information

Solving I/O Bottlenecks to Enable Superior Cloud Efficiency

Solving I/O Bottlenecks to Enable Superior Cloud Efficiency WHITE PAPER Solving I/O Bottlenecks to Enable Superior Cloud Efficiency Overview...1 Mellanox I/O Virtualization Features and Benefits...2 Summary...6 Overview We already have 8 or even 16 cores on one

More information

How To Make A Virtual Machine Aware Of A Network On A Physical Server

How To Make A Virtual Machine Aware Of A Network On A Physical Server VMready Virtual Machine-Aware Networking White Paper Table of Contents Executive Summary... 2 Current Server Virtualization Environments... 3 Hypervisors... 3 Virtual Switches... 3 Leading Server Virtualization

More information

Data Deduplication: An Essential Component of your Data Protection Strategy

Data Deduplication: An Essential Component of your Data Protection Strategy WHITE PAPER: THE EVOLUTION OF DATA DEDUPLICATION Data Deduplication: An Essential Component of your Data Protection Strategy JULY 2010 Andy Brewerton CA TECHNOLOGIES RECOVERY MANAGEMENT AND DATA MODELLING

More information