Memory-Shielded RAM Disk Support in RedHawk Linux 6.0
A Concurrent Real-Time White Paper
2881 Gateway Drive, Pompano Beach, FL
(800) real-time.ccur.com

By: John Blackwood
Concurrent Real-Time Linux Development Team
March 23, 2011
Abstract

This white paper describes a feature of RedHawk Linux that provides the ability to control NUMA node page placement of RAM disk data. Placing RAM disk memory in the same NUMA node where the RAM disk applications execute can result in higher performance and reduced system-wide contention. The paper describes several test case scenarios that demonstrate the benefits of proper RAM disk page placement.

Overview

In Non-Uniform Memory Architecture (NUMA) computer systems, memory access time depends upon the location of the memory relative to the accessing processor. NUMA multiprocessor systems are made up of two or more NUMA nodes, where each node contains one or more processors and memory. In NUMA systems, all of memory is accessible to all processors. Memory accesses can be either local or remote: local accesses go to memory that resides on the same node as the processor, and remote accesses go to memory that resides in a different node than where the processor resides. Remote accesses can be more expensive in terms of latency and increased bus contention, especially where the remote memory resides several NUMA node bus interconnections away from the accessing processor.

Historically, RedHawk Linux has provided the ability to memory-shield a NUMA node. Only user-space memory belonging to applications that are biased to execute on the memory-shielded NUMA node will reside in that node's memory. Memory-shielding also moves the user-space memory of tasks that are executing on other NUMA nodes out of the memory-shielded node's memory. RedHawk memory-shielding support greatly increases the amount of local memory accesses that a memory-shielded application makes and reduces the amount of cross-node memory bus contention.

A RAM disk is a portion of RAM/memory that is used as if it were a disk drive. RAM disks have fixed sizes and act like regular disk partitions. Access time is much faster for a RAM disk than for a physical disk. Data stored on a RAM disk is lost when the system is powered down; therefore RAM disks provide a good method for storing temporary data that require fast access.

In RedHawk 6.0, support has been added that gives the user the ability to specify the NUMA node(s) where the pages of a RAM disk will reside. A user may control, on a per-RAM disk basis, the NUMA nodes that will hold the data of the RAM disk. This new feature can further enhance the performance of memory-shielded applications.
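As an aside, the local-versus-remote page placement described above can be observed directly from an application on any recent Linux kernel. The following minimal sketch uses the standard move_pages(2) query (from libnuma's numaif.h) to report the NUMA node on which each page of a buffer resides; the program name, page count, and build line are illustrative assumptions, and this is not part of the RedHawk RAM disk interface described next.

    /*
     * page_nodes.c - report which NUMA node each page of a buffer resides on.
     * A minimal sketch using the standard Linux move_pages(2) query; it is an
     * illustration of page placement only, not part of the RedHawk interface.
     *
     * Build (assumed): gcc page_nodes.c -o page_nodes -lnuma
     */
    #include <numaif.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        long page_size = sysconf(_SC_PAGESIZE);
        size_t npages = 8;
        void *buf;

        /* Allocate page-aligned memory and touch it so pages are faulted in. */
        if (posix_memalign(&buf, page_size, npages * page_size) != 0) {
            perror("posix_memalign");
            return 1;
        }
        memset(buf, 0, npages * page_size);

        void *pages[8];
        int status[8];
        for (size_t i = 0; i < npages; i++)
            pages[i] = (char *)buf + i * page_size;

        /* With a NULL node list, move_pages() only queries current placement. */
        if (move_pages(0, npages, pages, NULL, status, 0) < 0) {
            perror("move_pages");
            return 1;
        }
        for (size_t i = 0; i < npages; i++)
            printf("page %zu resides on NUMA node %d\n", i, status[i]);

        free(buf);
        return 0;
    }

On a memory-shielded node, a check of this kind would be expected to show all of a biased application's pages residing in that node's memory.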
RAM Disk NUMA Node Interface

A user can specify the NUMA node page placement of a RAM disk through a /proc file interface. Additionally, the per-node page count values for a RAM disk may also be viewed.

The /proc/driver/ram/ram[n]/nodemask file may be used to view and to set the NUMA nodes where the RAM disk pages should reside, where the nodemask is a hexadecimal bitmask value. By default, RAM disk pages may reside on any NUMA node. For example, the default nodemask for RAM disk 2 on an 8 NUMA node system would be:

    # cat /proc/driver/ram/ram2/nodemask
    00000000,000000ff

In order to set up RAM disk 2 so that its pages are allocated in NUMA node 3, the following commands are issued to set and check the nodemask value:

    # /bin/echo 8 > /proc/driver/ram/ram2/nodemask
    # cat /proc/driver/ram/ram2/nodemask
    00000000,00000008

After pages have been allocated for a RAM disk, the /proc/driver/ram/ram[n]/tat file may be used to view the per-node page counts. For example:

    # cat /proc/driver/ram/ram2/tat
    Preferred node 3 : 2048
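The same nodemask setting can also be driven from a program rather than from the shell. The following is a minimal sketch that assumes only the /proc paths shown above; the helper name, its error handling, and the build step are illustrative and not part of the RedHawk interface itself.

    /*
     * set_ram_nodemask.c - a minimal sketch of writing the RedHawk
     * /proc/driver/ram/ram[n]/nodemask file from C instead of the shell.
     * The helper name and error handling are illustrative only.
     */
    #include <stdio.h>

    /* Write a hexadecimal node bitmask (e.g. "8" for node 3) to the RAM
     * disk's nodemask file, then read it back for verification. */
    static int set_ramdisk_nodemask(int ramdisk, const char *hexmask)
    {
        char path[128], readback[64];
        FILE *fp;

        snprintf(path, sizeof(path), "/proc/driver/ram/ram%d/nodemask", ramdisk);

        fp = fopen(path, "w");
        if (!fp) { perror(path); return -1; }
        fprintf(fp, "%s\n", hexmask);
        if (fclose(fp) != 0) { perror(path); return -1; }

        fp = fopen(path, "r");
        if (!fp) { perror(path); return -1; }
        if (fgets(readback, sizeof(readback), fp))
            printf("ram%d nodemask is now %s", ramdisk, readback);
        fclose(fp);
        return 0;
    }

    int main(void)
    {
        /* Bias RAM disk 2 to NUMA node 3 (bit 3 == 0x8), as in the shell example. */
        return set_ramdisk_nodemask(2, "8") ? 1 : 0;
    }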
Performance Measurements

This paper discusses two test scenarios that demonstrate the potential performance and determinism advantages of using RAM disk pages that are local to the processes accessing the RAM disk, compared to RAM disk pages that reside in a remote NUMA node. The RedHawk Linux memory-shielded RAM disk support was used in these tests to control the NUMA node placement of the RAM disk pages. Although the test scenarios are not necessarily typical of the memory and RAM disk usage patterns in an actual application, some portion of the differences observed in these test scenarios would be seen in actual real-world applications.

The tests were executed on a quad-processor Opteron system with four NUMA nodes and four cores per node. The tests described below utilized two of the four NUMA nodes in the system.

Testing Methodology

A file write/read program was used in both of the two test scenarios. This program opens either a /dev/ram[n] file directly, or opens a file that resides within the already mounted RAM disk file system. The O_DIRECT open(2) flag is used so that the RAM disk accesses bypass the file system page cache and cause direct file data copies between the user's buffers in user-space and the RAM disk pages in kernel-space.

The program first obtains a starting time using clock_gettime(2), and then executes either a loop of lseek(2) and write(2) calls, or a loop of lseek(2) and read(2) calls on the open file. After executing this set of system service calls, clock_gettime(2) is again used to obtain the ending time. The pseudo-code for the lseek and write loop is shown below:

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (loop = 0; loop < loopcount; loop++) {
        lseek(fd, 0, SEEK_SET);
        write(fd, buffer, size);
    }
    clock_gettime(CLOCK_MONOTONIC, &end);

In addition to this RAM disk write/read loop test, a CPU write/read test that continually writes and reads to/from user-space memory buffers was used in the first test scenario below in order to generate CPU memory access traffic. This test was written in a way where these CPU memory accesses tend to cause CPU cache misses, thus resulting in more actual physical (non-cached) memory accesses.
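The paper does not reproduce the full test program, but a minimal sketch of a comparable O_DIRECT write-timing loop is shown below. The device path, loop count, and build line are placeholder assumptions rather than the values used in the paper's tests, and O_DIRECT requires a suitably aligned user buffer (page alignment is assumed here).

    /*
     * ramdisk_write_timing.c - a minimal sketch of an O_DIRECT write-timing
     * loop comparable to the file write/read program described above.
     *
     * Build (assumed): gcc ramdisk_write_timing.c -o ramdisk_write_timing
     * (add -lrt for clock_gettime on older glibc)
     */
    #define _GNU_SOURCE             /* for O_DIRECT */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <unistd.h>

    #define XFER_SIZE   (8 * 1024 * 1024)   /* 8 MB per write, as in scenario 1 */
    #define LOOPCOUNT   100                 /* placeholder loop count */

    int main(int argc, char **argv)
    {
        const char *path = (argc > 1) ? argv[1] : "/dev/ram0";
        struct timespec start, end;
        void *buffer;
        int fd, loop;

        /* O_DIRECT transfers require an aligned user buffer; page alignment
         * is a safe choice on typical configurations. */
        if (posix_memalign(&buffer, sysconf(_SC_PAGESIZE), XFER_SIZE) != 0) {
            perror("posix_memalign");
            return 1;
        }

        fd = open(path, O_RDWR | O_DIRECT);
        if (fd < 0) {
            perror(path);
            return 1;
        }

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (loop = 0; loop < LOOPCOUNT; loop++) {
            lseek(fd, 0, SEEK_SET);
            if (write(fd, buffer, XFER_SIZE) != XFER_SIZE) {
                perror("write");
                return 1;
            }
        }
        clock_gettime(CLOCK_MONOTONIC, &end);

        printf("%d writes of %d bytes took %.3f milliseconds\n",
               LOOPCOUNT, XFER_SIZE,
               (end.tv_sec - start.tv_sec) * 1000.0 +
               (end.tv_nsec - start.tv_nsec) / 1000000.0);

        close(fd);
        free(buffer);
        return 0;
    }

The read-timing variant simply substitutes read(2) for write(2) in the loop, as described in the methodology above.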
Test Scenario 1

In this scenario, one file write/read program was executed on the first NUMA node, where the program opened /dev/ram0 and issued 8 MB writes and reads directly to the raw RAM disk device.

Using Remote Memory RAM Disk Pages

[Figure: the file w/r program (1) runs on the first NUMA node; the CPU w/r programs (4) and the RAM disk memory reside on the second NUMA node.]

Initially, the RAM disk pages were set up to reside on a second (remote) NUMA node. With no other activity on the remote node, the timings for accessing the RAM disk were:

    Writes:  milliseconds
    Reads:   milliseconds

Then when four copies (1 copy per CPU) of the CPU write/read program were executed on the remote NUMA node where the RAM disk memory resided, the timings for accessing the RAM disk increased to:

    Writes:  milliseconds
    Reads:   milliseconds

Thus, contention for accessing the remote RAM disk memory caused an increase of 53% for write calls and 49% for read calls in this test scenario.

Using Local Memory RAM Disk Pages

[Figure: the file w/r program (1) and the RAM disk memory reside on the first NUMA node; the CPU w/r programs (4) run on the second NUMA node.]

The RAM disk pages were then placed in the local NUMA node. The same single file write/read program was executed on the first NUMA node. When there was no activity on the second NUMA node, the timings were:

    Writes:  milliseconds
    Reads:   milliseconds

Then when the same four copies of the CPU write/read program were executed on the second NUMA node, the timings increased to:

    Writes:  milliseconds
    Reads:   milliseconds

When the RAM disk page accesses were to the local NUMA node, intense memory activity on the second NUMA node only increased the RAM disk file write and read access times by 5%. This test scenario demonstrates the benefits of localizing RAM disk memory accesses on a NUMA system when memory activity is occurring on other NUMA nodes.
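The paper does not show the exact mechanism used to bias the test programs to a particular NUMA node; RedHawk provides its own shielding and run-bias tools for this. As a rough illustration only, the sketch below uses the standard libnuma API to bind a process's execution and memory allocations to one node; this is an assumption for illustration and not the RedHawk mechanism used for the measurements above.

    /*
     * bind_to_node.c - a rough illustration of biasing a process's CPUs and
     * memory allocations to one NUMA node using the standard libnuma API.
     * This is illustrative only; it is not the RedHawk shielding mechanism
     * used for the measurements in this paper.
     *
     * Build (assumed): gcc bind_to_node.c -o bind_to_node -lnuma
     */
    #include <numa.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int node = (argc > 1) ? atoi(argv[1]) : 0;

        if (numa_available() < 0) {
            fprintf(stderr, "NUMA is not available on this system\n");
            return 1;
        }

        /* Run only on the CPUs of the chosen node... */
        if (numa_run_on_node(node) != 0) {
            perror("numa_run_on_node");
            return 1;
        }
        /* ...and have new memory allocations satisfied from that node. */
        numa_set_preferred(node);

        printf("bound to NUMA node %d; the timed workload would run here\n", node);
        return 0;
    }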
Test Scenario 2

In this test scenario:

- Two RAM disks were used: one disk's pages resided in the first NUMA node, and the second disk's pages resided in the second NUMA node.
- An ext3 file system was created in each RAM disk and both ext3 file systems were mounted.
- Four copies of the file write/read program were executed on each of the two NUMA nodes, for a total of eight copies simultaneously executing on the system.
- Each of the eight copies of the program wrote and read to its own 2MB file that was located on one of the mounted RAM disk file systems. Each write and read was 2MB in size.

Local Memory RAM Disk Files

[Figure: four file w/r programs run on each of the two NUMA nodes; each group accesses the RAM disk whose pages reside on its own (local) node.]

In this first case, all of the eight file write/read programs' RAM disk file pages resided in local NUMA node memory. The highest timings observed were:

    Writes:  milliseconds
    Reads:   milliseconds

Remote Memory RAM Disk Files

[Figure: four file w/r programs run on each of the two NUMA nodes; each group accesses the RAM disk whose pages reside on the other (remote) node.]
In the second case, all of the eight file write/read programs' RAM disk pages resided in the remote NUMA node's memory. The highest timings observed in this case were:

    Writes:  milliseconds
    Reads:   milliseconds

Performance Difference

When the RAM disk ext3 file system files were being remotely accessed, the writes took 19% longer and the reads took 16% longer.

Summary

The performance measurements discussed in this paper have shown that using the RedHawk Linux memory-shielded RAM disk support to place RAM disk pages in the same NUMA node where the RAM disk is most frequently accessed can result in higher performance and reduced contention.

About Concurrent Computer Corporation

Concurrent (NASDAQ: CCUR) is a global leader in innovative solutions serving the aerospace and defense; automotive; broadcasting; cable and broadband; financial; IPTV; and telecom industries. Concurrent Real-Time is one of the industry's foremost providers of high-performance real-time computer systems, solutions, and software for commercial and government markets, focusing on areas that include hardware-in-the-loop and man-in-the-loop simulation, data acquisition, and industrial systems and software. Operating worldwide, Concurrent provides sales and support from offices throughout North America, Europe, Asia and Australia. For more information, please visit Concurrent Real-Time Linux Solutions at real-time.ccur.com or call (800).

Concurrent Computer Corporation. Concurrent Computer Corporation and its logo are registered trademarks of Concurrent. All other Concurrent product names are trademarks of Concurrent, while all other product names are trademarks or registered trademarks of their respective owners. Linux is used pursuant to a sublicense from the Linux Mark Institute.
