Dialogic PowerMedia XMS
Application Note: Optimizing VMware Host Hardware and Virtual Machine to Reduce Latency
June 2015, Rev 1.0
2 Copyright and Legal Notice Copyright 2015 Dialogic Corporation. All Rights Reserved. You may not reproduce this document in whole or in part without permission in writing from Dialogic Corporation at the address provided below. All contents of this document are furnished for informational use only and are subject to change without notice and do not represent a commitment on the part of Dialogic Corporation and its affiliates or subsidiaries ("Dialogic"). Reasonable effort is made to ensure the accuracy of the information contained in the document. However, Dialogic does not warrant the accuracy of this information and cannot accept responsibility for errors, inaccuracies or omissions that may be contained in this document. INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH DIALOGIC PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN A SIGNED AGREEMENT BETWEEN YOU AND DIALOGIC, DIALOGIC ASSUMES NO LIABILITY WHATSOEVER, AND DIALOGIC DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF DIALOGIC PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY INTELLECTUAL PROPERTY RIGHT OF A THIRD PARTY. Dialogic products are not intended for use in certain safety-affecting situations. Please see for more details. Due to differing national regulations and approval requirements, certain Dialogic products may be suitable for use only in specific countries, and thus may not function properly in other countries. You are responsible for ensuring that your use of such products occurs only in the countries where such use is suitable. 
For information on specific products, contact Dialogic Corporation at the address indicated below or on the web at It is possible that the use or implementation of any one of the concepts, applications, or ideas described in this document, in marketing collateral produced by or on web pages maintained by Dialogic may infringe one or more patents or other intellectual property rights owned by third parties. Dialogic does not provide any intellectual property licenses with the sale of Dialogic products other than a license to use such product in accordance with intellectual property owned or validly licensed by Dialogic and no such licenses are provided except pursuant to a signed agreement with Dialogic. More detailed information about such intellectual property is available from Dialogic's legal department at 6700 de la Cote-de-Liesse Road, Suite 100, Borough of Saint-Laurent, Montreal, Quebec, Canada H4T 2B5. Dialogic encourages all users of its products to procure all necessary intellectual property licenses required to implement any concepts or applications and does not condone or encourage any intellectual property infringement and disclaims any responsibility related thereto. These intellectual property licenses may differ from country to country and it is the responsibility of those who develop the concepts or applications to be aware of and comply with different national license requirements. Dialogic, Dialogic Pro, Dialogic Blue, Veraz, Brooktrout, Diva, BorderNet, PowerMedia, ControlSwitch, I-Gate, Mobile Experience Matters, Network Fuel, Video is the New Voice, Making Innovation Thrive, Diastar, Cantata, TruFax, SwitchKit, Eiconcard, NMS Communications, SIPcontrol, Exnet, EXS, Vision, incloud9, NaturalAccess and Shiva, among others as well as related logos, are either registered trademarks or trademarks of Dialogic Corporation and its affiliates or subsidiaries. Dialogic's trademarks may be used publicly only with permission from Dialogic. 
Such permission may only be granted by Dialogic's legal department at 6700 de la Cote-de-Liesse Road, Suite 100, Borough of Saint-Laurent, Montreal, Quebec, Canada H4T 2B5. Any authorized use of Dialogic's trademarks will be subject to full respect of the trademark guidelines published by Dialogic from time to time and any use of Dialogic's trademarks requires proper acknowledgement. The names of actual companies and products mentioned herein are the trademarks of their respective owners. This document discusses one or more open source products, systems and/or releases. Dialogic is not responsible for your decision to use open source in connection with Dialogic products (including without limitation those referred to herein), nor is Dialogic responsible for any present or future effects such usage might have, including without limitation effects on your products, your business, or your intellectual property rights.
Table of Contents
1. Overview
   Intended Audience
   Recommended Servers
   VMware ESXi Environment
   How to Use This Guide
2. Introduction
   What possible factors introduce latency?
   Can latency be improved to the same level as "bare metal" servers?
   Why is optimization required?
   What are the possible effects on PowerMedia XMS?
   How can latency be improved?
3. Hardware Level Optimization
   BIOS Settings Related to CPU and Memory
   Power Management BIOS Settings
4. VM Level Optimization
   Create Multiple VLANs with Individual Sets of Physical NICs
   Select Virtual NIC VMXNET3 or SR-IOV Passthrough for the VM
   Select SCSI Controller Type as Paravirtual
   Install VMware Tools with Force Updating VMXNET3 and SCSI Driver
   Configure CPU Affinity
   Configure NUMA Node Affinity
   Reserve CPU for VM
   Reserve Memory for VM
   Configure Physical NIC
   Configure Virtual NIC
   Disable Interrupt Coalescing
   Disable Large Receive Offload (LRO)
   Configure VM Latency Sensitivity
5. SR-IOV Passthrough
6. Recommended Reading
   VMware
   PowerMedia XMS
Revision History
Revision 1.0, June 2015 - Initial release of this document.
Last modified: June 2015
Refer to for product updates and for information about support policies, warranty information, and service offerings.
1. Overview
This guide provides information on optimizing VMware ESXi, server settings, and Virtual Machine ("VM") guest machines to reduce latency prior to installing Dialogic PowerMedia Extended Media Server (also referred to herein as "PowerMedia XMS" or "XMS"). The information is intended to provide guidelines for system configuration of virtualized environments and is not specific to a PowerMedia XMS release; however, these guidelines were established during PowerMedia XMS Release 2.3 and 2.4 in-house testing with VMware ESXi 5.5. PowerMedia XMS installation and configuration are out of the scope of this document. For more information, refer to the Dialogic PowerMedia XMS Installation and Configuration Guide available at
Intended Audience
This guide applies to users (e.g., system administrators) who are familiar with IP networks and Virtual Machine (VM) technology, planning of VM resources, and preparing the VM before PowerMedia XMS installation.
Recommended Servers
The underlying physical server can be any Intel Architecture-based server that is compatible with the VMware vSphere Hypervisor; however, the Intel Xeon dual-processor server family is recommended.
Minimum Recommended Server Configuration that can host 2 ESXi VMs:
- Processor: 2-socket, Intel Xeon E at 2.4 GHz (32 logical processors) or higher
- Memory: 32 GB RAM or higher
- Storage: Dual 15k RPM 6 Gbps SAS, 300 GB HDD, RAID 0, RAID controller with 1 GB cache or better
- Ethernet Adapter: Intel I350 Gigabit (total of 4 interfaces, 2 for each VM under a different vSwitch)
Notes:
1. Dialogic does not specifically recommend any particular brand of server.
2. Although the Minimum Recommended Server Configuration has been specified above, it is recommended to opt for a higher available specification at the time of planning, especially if running at maximum density with audio, video, MSRP and/or fax processing.
VMware ESXi Environment
The VMware ESXi must be installed directly on top of the server.
The server must have VMware vSphere ESXi 5.5 or higher installed. VMware vCenter must be installed to manage the hypervisor environment. This is required since a VMware Hardware Profile 10 VM can only be controlled through the vSphere Web Client, which is available with vCenter.
How to Use This Guide
The following sections should be completed in the order presented; they discuss and configure the various optimization options available for an ESXi virtualized environment:
- Section 2: Introduction provides basic information about latency introduced by the ESXi environment and its possible effects on PowerMedia XMS running over it.
- Section 3: Hardware Level Optimization provides guidance on optimizing the vendor-specific hardware on which the ESXi environment will be running.
- Section 4: VM Level Optimization provides procedures for optimizing the VM on which the PowerMedia XMS software will be running.
- Section 5: SR-IOV Passthrough provides information on how the VM can be configured to access system-level hardware directly, bypassing the VMkernel layer.
- Section 6: Recommended Reading provides useful links for information about the ESXi environment.
2. Introduction
Virtualization is a technology that allows multiple operating systems to run on a single server. This is achieved by adding an abstraction layer over the hardware installed on the system, which decouples VMs from the host. In virtualized configurations, resources are allocated to each operating system either as pre-configured assignments or dynamically, shared on demand.
Hypervisors enable virtualization. There are two types: Type 1 hypervisors run directly on the server hardware in what is often referred to as a "bare metal" configuration, while Type 2 hypervisors run on top of an operating system. As of the publication date of this document, PowerMedia XMS only supports running in the Type 1 hypervisor configuration (i.e., "bare metal" configuration).
Virtualization technology provides users with flexibility in deploying solutions in the field and allows for optimized use of hardware resources. However, virtualization generally adds overhead and latency to system performance. As such, configurations should be set to optimize environments for real-time systems and to minimize latency. It is not recommended to run more than two VMs per hypervisor due to hypervisor switching and overhead.
What possible factors introduce latency?
Latency may be introduced in a virtualized environment as a result of factors that include, but are not limited to, the following:
1. Resource contention due to multiple VMs running on a server sharing the same set of critical resources, especially CPU, memory, and network I/O bandwidth.
2. Overhead added by the extra layer of processing done to achieve virtualization through scheduling. By default, the VMware host scheduler is not optimized for applications that require low latency.
3. Power Management related settings, which are available at both the hardware and vSphere levels.
Server manufacturers add power-management-related settings to their BIOS to reduce the power consumption of components such as the CPU, memory, network, and disk. Reducing the power consumption of these components can introduce additional latency, due to the difference between sleep and wake-up times, and impact system performance.
Can latency be improved to the same level as "bare metal" servers?
Latency can be reduced to a certain level, but cannot be matched to that of "bare metal" servers. This document provides guidelines on where latency can be optimized. These suggestions are based on a technical white paper by VMware that describes various options to reduce latency when required by applications, such as PowerMedia XMS, running over the hypervisor. The guidelines provided are based on testing PowerMedia XMS with the VMware ESXi 5.5 environment. These suggestions can help reduce latency and, in turn, improve the overall performance of a PowerMedia XMS system running over a virtualized environment. For current information on tuning the VMware ESXi 5.5 environment, refer to the VMware website at
Why is optimization required?
Several indications can lead a user to decide whether the VM needs optimization. Some of these indications are easily observed, while others are more subtle. The following observations, relative to performance on "bare metal" servers, suggest that optimization might be needed:
1. CPU usage is higher than usual for both system and user processes (e.g., processes consuming as much as 20% CPU).
2. CPU "iowait" time is higher.
3. Network RTD (Round Trip Delay) is higher than usual. It can be measured by sending an ICMP request and timing its response back to the requester.
These can be noticed, respectively, as follows:
1. By comparing the values from the same VM with and without load.
2. By comparing with native server performance.
3. Less latency will typically be observed when running a single VM on the server hardware as compared to running multiple VMs on the same hardware, since there will be no resource contention.
What are the possible effects on PowerMedia XMS?
PowerMedia XMS needs three (3) critical resources: CPU, network, and memory (in that order). If any of these resources are oversubscribed or there is contention for them, PowerMedia XMS performance will be affected. For example, if a bottleneck is found only at the network I/O level due to network resource sharing between multiple VMs, it will likely cause higher memory and CPU usage in each VM due to wait times at the network level. Disk I/O can be another factor that impacts performance to a certain extent. Running PowerMedia XMS in a virtualized environment without any optimization could adversely affect performance. For example, the following might be observed on the system:
1. High network packet round-trip delay, which causes call connect time to be longer than usual. For example, a SIP INVITE message from the PowerMedia XMS system would receive a delayed response.
Similarly, a response to an incoming SIP method would be seen as delayed at the remote system. Overall system delays should be measured rather than looking at a single packet or call; latency might not appear on every call or packet, and one call or message that shows no sign of latency does not mean there is no latency on the system overall.
2. CPU usage for the PowerMedia XMS media engine process "ssp_x86linux_boot" is 20 to 30% higher than expected.
3. Audio and/or video quality degradation due to delayed RTP packet arrival or packet loss at either side.
Note: The effect of latency on a PowerMedia XMS system can also originate from remote endpoints, so it makes sense to measure latency on the systems that interact with PowerMedia XMS as well.
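The round-trip delay mentioned above is best summarized over many ICMP probes rather than a single packet. A minimal Python sketch (the helper name and the sample summary strings are illustrative, not from the original document) that parses the summary line printed by Linux `ping` and compares a loaded VM against the same VM when idle:

```python
import re

def parse_ping_rtt(summary_line):
    """Extract min/avg/max/mdev RTT values (ms) from the summary line
    printed by Linux `ping`, e.g.:
    rtt min/avg/max/mdev = 0.045/0.062/0.091/0.012 ms"""
    m = re.search(r"=\s*([\d./]+)\s*ms", summary_line)
    if not m:
        raise ValueError("no RTT summary found")
    lo, avg, hi, mdev = (float(v) for v in m.group(1).split("/"))
    return {"min": lo, "avg": avg, "max": hi, "mdev": mdev}

# Compare the same VM with and without load: a large jump in average
# RTT is one indication that optimization is needed.
idle = parse_ping_rtt("rtt min/avg/max/mdev = 0.045/0.062/0.091/0.012 ms")
loaded = parse_ping_rtt("rtt min/avg/max/mdev = 0.210/0.480/1.950/0.330 ms")
print(loaded["avg"] / idle["avg"])  # ratio of average RTTs under load vs. idle
```

The sample RTT values above are placeholders; in practice the summary line would come from running `ping` against an endpoint that interacts with PowerMedia XMS.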
How can latency be improved?
Latency can be improved by optimizing hardware settings and tuning the individual VM. Examples of areas that could be optimized include:
- Allocating CPU cores dedicated to a VM rather than shared among many VMs
- Upgrading the VM hardware profile to 10
- Setting CPU Affinity
- Specifying NUMA node affinity
- Using the optimized vNIC VMXNET3 driver
- Allocating dedicated resources such as Ethernet/CPU and reserving memory for each VM
- Using the VMware Latency Sensitivity feature
More details are available in the Hardware Level Optimization and VM Level Optimization sections of this document.
3. Hardware Level Optimization
This section covers BIOS-related settings that can affect VM performance by adding latency. As of the publication date of this document, VMware recommends optimizing Power Management related BIOS settings as described in the document Best Practices for Performance Tuning of Latency-Sensitive Workloads in vSphere VMs, available at
The following settings should be reviewed and, where deemed appropriate, the identified changes should be made. Please note, however, that these settings and their names may vary from vendor to vendor. Please contact the server manufacturer for more information.
BIOS Settings Related to CPU and Memory
The recommended BIOS settings, if available on the server, are as follows:
- Enable "Turbo Boost"
- Enable "Hyper-Threading"
- Enable "NUMA"
- Disable "Node Interleaving"
- Enable "Hardware Assisted Virtualization"
- Disable "C-State"
- Disable "C1E"
Power Management BIOS Settings
Power Management can be controlled through the BIOS or managed by the operating system. Many manufacturers have added BIOS options to control power consumption. However, since power management adds latency, it is recommended to disable all power-management-related options in the BIOS, as they may not be exposed at the operating-system level. Refer to the following examples:
Power Management Options Applicable to HP ProLiant Servers
1. HP Power Profile - Maximum Performance
2. HP Power Regulator - HP Static High Performance Mode
3. Advanced Power Management Options
   3.1. Minimum Processor Idle Power Core State - No C-States
   3.2. Minimum Processor Idle Power Package State - No Package State
   3.3. Energy/Performance Bias - Maximum Performance
   3.4. Intel QPI Link Power Management - Disabled
   3.5. Collaborative Power Control - Disabled
   3.6. Power Capping Support - Disabled
   3.7. DIMM Voltage Preference - Optimized for Performance
   3.8. Memory Power Savings Mode - Maximum Performance
   3.9. Dynamic Power Capping Functionality - Disabled
   Memory Refresh Rate - 1x
Power Management Options Applicable to Dell PowerEdge 12th Generation Servers
System Profile Settings:
1. Power Management Mode - Maximum Performance
2. CPU Power Management - Maximum Performance
3. Memory Frequency - Maximum Performance
4. C1E - Disabled
5. Monitor/Mwait - Disabled
6. Memory Refresh Rate - 1x
Note: Please refer to the server manufacturer's BIOS manual to find the appropriate settings. These settings may vary depending upon the server make and the version of the BIOS used on the system.
4. VM Level Optimization
This section covers configuration of the VM to optimize system performance by reducing latency. These settings are based on the VMware documentation for applications, such as PowerMedia XMS, that require low latency for real-time media handling.
Create Multiple VLANs with Individual Sets of Physical NICs
To improve the performance of each VM, a set of VMs on a VLAN can be assigned to dedicated physical NICs. Some example configurations are as follows:
- An ESXi system that has 4 physical NICs with all of the VMs under a single VLAN is generally a less desirable configuration from a performance point of view.
- If an ESXi system has 4 physical NICs and 4 VMs need to be hosted, it is recommended to configure at least 2 VLANs with 2 physical NICs each, rather than 1 VLAN with all 4 physical NICs, and to create 2 VMs in each VLAN.
Taking these steps generally improves performance due to network routing separation.
Select Virtual NIC VMXNET3 or SR-IOV Passthrough for the VM
While creating the VM, select the virtual NIC adapter type VMXNET3 or SR-IOV Passthrough. The VMXNET3 vNIC driver is optimized to work in a virtualized environment and reduce latency.
Important Note: Before selecting the NIC, see the SR-IOV Passthrough section of this document.
Select SCSI Controller Type as Paravirtual
While creating the VM, make sure to select "SCSI controller type" as "Paravirtual". The Paravirtual SCSI controller is optimized to provide greater throughput, which in turn reduces CPU usage.
Install VMware Tools with Force Updating VMXNET3 and SCSI Driver
PowerMedia XMS requires updating Ethernet driver parameters to tune the operating system driver for better performance. As of the publication date of this document, there is a known issue in the VMXNET3 driver that comes with Red Hat Enterprise Linux 6.4, which causes the operating system to reboot while executing "ethtool" to fine-tune its parameters.
Updating the VMXNET3 driver version to NAPI resolves this issue.
Important Note: If VMware Tools is already installed, use the command below to configure and force updating of the drivers for the Paravirtual VMXNET3 and SCSI adapters:
/usr/bin/vmware-config-tools.pl --clobber-kernel-modules=vmxnet3,pvscsi
If VMware Tools is not installed on the VM, follow the instructions below to install it and force updating of the drivers for the Paravirtual VMXNET3 and SCSI adapters.
Note: The first couple of steps are performed on the host within the Workstation menu.
1. Power on the VM.
2. After the guest operating system has started, prepare your VM to install VMware Tools. Choose VM > Install VMware Tools.
Note: The remaining steps take place inside the VM.
3. Log in as root and mount the VMware Tools virtual CD-ROM image to a folder (e.g., /mnt).
mount /dev/cdrom /mnt
4. Untar the VMware Tools tar file into a folder (e.g., /tmp).
cd /tmp
tar zxpf /mnt/VMwareTools-<xxxx>.tar.gz
umount /dev/cdrom
Note: <xxxx> is the build/revision number of the VMware Workstation release.
5. Run the VMware Tools installer and press Enter when asked questions in order to proceed with the default answer.
cd /tmp/vmware-tools-distrib
./vmware-install.pl --clobber-kernel-modules=vmxnet3,pvscsi
Note: Update any other driver that needs updating (e.g., the Paravirtual SCSI driver) if prompted by VMware Tools.
6. Reboot the server after the installation completes. When the system is back up, check the version of the VMXNET3 driver by executing the command "ethtool -i eth<n>", where n = Ethernet device number. The version of the driver is NAPI, which resolves the known issue observed (as of the publication date of this document) in Red Hat Enterprise Linux 6.4. [The original document includes a screenshot of the ethtool output after the VMXNET3 driver upgrade.]
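The driver check in Step 6 can be automated. A small Python sketch that parses `ethtool -i` output into key/value pairs (the sample output, including the version string, is illustrative; the real version depends on the VMware Tools release that installed the driver):

```python
def parse_ethtool_info(text):
    """Parse `ethtool -i eth<n>` output into a dict of key/value pairs."""
    info = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            info[key.strip()] = value.strip()
    return info

# Illustrative output only; run `ethtool -i eth0` on the VM to get the
# real values after the VMware Tools installation.
sample = """driver: vmxnet3
version: 1.1.30.0-k-NAPI
firmware-version: N/A
bus-info: 0000:03:00.0"""

info = parse_ethtool_info(sample)
print(info["driver"], "NAPI" in info["version"])
```

A check like this can be scripted across all eth<n> devices to confirm that every interface is using the updated NAPI driver.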
Configure CPU Affinity
To run real-time software on VMware ESXi, it is generally recommended to use CPU Affinity. In such a configuration, each virtual processor gets CPU resources directly from one or more of the available host CPUs, which reduces the likelihood of virtual processors being rescheduled to give CPU time to another VM. Such a configuration also isolates each VM more fully, which helps real-time software run as though it were in a physical server environment.
Due to the intensive use of operating system kernel resources by PowerMedia XMS, it is highly recommended to set aside one physical (host) CPU for the VMware ESXi hypervisor. This host CPU should not be part of the affinity setting of any of the VMs. For example, on a dual-processor, quad-core host system without hyper-threading, there will be eight physical CPUs available to VMware ESXi. In this scenario, 2 VMs are configured with 2 virtual processors each. The system administrator could set the first VM's CPU affinity to physical CPUs 0 through 3 (a total of 4) and the second VM's CPU affinity to physical CPUs 4 through 6 (a total of 3); this leaves physical CPU 7 unassigned and available to the VMware ESXi hypervisor.
Note: Be careful not to cross physical processor boundaries when assigning CPU Affinity to a VM; all host CPUs assigned to a VM should belong to the same host physical processor.
To configure CPU Affinity, do the following (see the Configure NUMA Node Affinity section before configuring CPU Affinity):
1. Go to the vSphere Web Client and select the VM that needs configuration.
2. Power off the VM.
3. Select the VM settings that require modification.
4. Right-click Edit Settings > Virtual Hardware tab > CPU.
5. Expand the CPU options and then specify the value in the Scheduling Affinity field. For example, if physical CPUs 0 through 7 are assigned to one VM, the value should be "0-7".
6. Click OK to submit the changes.
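The affinity arithmetic in the example above can be sketched as a small planning helper that splits the host CPUs between VMs while always leaving the last CPU free for the hypervisor (a hypothetical planning aid, not a VMware API):

```python
def plan_affinity(host_cpus, vm_count):
    """Partition host CPU ids 0..host_cpus-1 across vm_count VMs,
    reserving the highest-numbered CPU for the ESXi hypervisor itself."""
    if vm_count < 1 or host_cpus < vm_count + 1:
        raise ValueError("not enough host CPUs")
    usable = host_cpus - 1              # the last CPU stays unassigned
    base, extra = divmod(usable, vm_count)
    plan, start = [], 0
    for i in range(vm_count):
        size = base + (1 if i < extra else 0)
        plan.append(list(range(start, start + size)))
        start += size
    return plan                          # CPU host_cpus-1 is left for ESXi

# The example from the text: 8 physical CPUs, 2 VMs ->
# VM1 gets CPUs 0-3, VM2 gets CPUs 4-6, CPU 7 stays with the hypervisor.
print(plan_affinity(8, 2))  # [[0, 1, 2, 3], [4, 5, 6]]
```

Note that this sketch only divides the CPU ids; the caution above about not crossing physical processor boundaries still has to be checked against the host's socket layout.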
Configure NUMA Node Affinity
In a NUMA (Non-Uniform Memory Access) design, each block of memory segments is assigned to a particular CPU core or processor socket based on the processor's architecture. The suggestion is to allocate CPU cores to a VM that all belong to the same NUMA node. Spanning NUMA nodes can cause performance degradation due to accessing memory segments that are not local and belong to a different CPU core. On a NUMA-supported host system, it is recommended to set the NUMA node affinity (NUMA might need to be enabled in the BIOS if disabled by default).
[The original document includes an illustration here: 2 CPU sockets forming 2 NUMA nodes, each with 4 cores.] If the CPU cores allocated to a VM are distributed among multiple NUMA nodes, performance generally degrades. Hence, for VM#2, which requires 4 CPU cores, it is advisable to allocate CPU cores #4 to #7 rather than #2 to #5.
Configure NUMA Node Affinity from the vSphere Web Client
Processor affinity for vCPUs to be scheduled on specific NUMA nodes, and memory affinity for all VM memory to be allocated from those NUMA nodes, can be set using the vSphere Client as follows:
1. Go to the vSphere Web Client and select the VM that needs configuration.
2. Power off the VM.
3. Select the VM settings that require modification.
4. Right-click Edit Settings > VM Options tab > Advanced.
5. Expand the Advanced options and then click Edit Configurations.
6. Look for the "numa.nodeAffinity" field. If it is already available, edit it. If not, proceed to Step 7.
7. If an entry is not available, click Add Row and add a new entry numa.nodeAffinity = "0, 1, n", where the values 0..n are the processor socket numbers.
8. Click OK to apply the new value.
Note: vNUMA is automatically enabled for VMs configured with more than 8 vCPUs that are wider than the number of cores per physical NUMA node.
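The rule of thumb above, keeping a VM's cores inside one NUMA node, can be expressed as a quick check. A Python sketch (the helper name is hypothetical; 4 cores per node matches the illustration, and node n is assumed to own a contiguous block of core ids):

```python
def crosses_numa_boundary(cores, cores_per_node):
    """Return True if the given core ids span more than one NUMA node,
    assuming node n owns cores [n*cores_per_node, (n+1)*cores_per_node)."""
    nodes = {core // cores_per_node for core in cores}
    return len(nodes) > 1

# VM#2 from the illustration: cores 4-7 stay inside node 1, while
# cores 2-5 straddle nodes 0 and 1 and would degrade performance.
print(crosses_numa_boundary([4, 5, 6, 7], 4))  # False
print(crosses_numa_boundary([2, 3, 4, 5], 4))  # True
```

The contiguous-block assumption holds for the layout shown in the illustration; real hosts should be verified with `numactl --hardware`, since some platforms interleave core ids across sockets.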
Find NUMA Nodes Allocated to a VM on a Linux System
1. Log in through SSH.
2. Run the numactl utility with the command line below:
# numactl --hardware
The output will look similar to the example shown in the original document (screenshot).
Notes:
1. The information above represents guidelines. The system administrator should determine the correct NUMA values according to system resources such as the CPU sockets and memory installed on the system.
2. The NUMA node boundary is calculated depending upon the processor architecture and is vendor-specific, in the following ways:
   a. Available RAM / number of cores on the system.
   b. Available RAM / number of physical processor sockets.
   c. Virtual NUMA is assigned by VMware automatically when the number of vCPUs is set to be greater than 8.
Reserve CPU for VM
Reserve dedicated CPU, in MHz or GHz, for a VM. This requires setting the "Latency Sensitivity" value to "High" (see the Configure VM Latency Sensitivity section of this document). This option results in a specified amount of CPU processing power being supplied to the VM. CPU reservation can be achieved as follows:
1. Go to the vSphere Web Client and select the VM that needs configuration.
2. Power off the VM.
3. Select the VM settings that require modification.
4. Right-click Edit Settings > Virtual Hardware tab > CPU.
5. Enter the amount of processing power in the Reservation text box and select the unit, MHz or GHz.
6. Click OK to submit the changes.
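The `numactl --hardware` output mentioned earlier can be parsed programmatically to recover the node-to-CPU mapping used when choosing affinity values. A sketch (the sample text is an illustrative two-node layout, not the screenshot from the original document):

```python
def parse_numactl(text):
    """Map NUMA node id -> list of CPU ids from `numactl --hardware` output."""
    nodes = {}
    for line in text.splitlines():
        if line.startswith("node") and "cpus:" in line:
            head, _, cpus = line.partition("cpus:")
            node_id = int(head.split()[1])   # "node 0 cpus: ..." -> 0
            nodes[node_id] = [int(c) for c in cpus.split()]
    return nodes

# Illustrative layout: 2 nodes with 4 CPUs each, 16 GB per node.
sample = """available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3
node 0 size: 16383 MB
node 1 cpus: 4 5 6 7
node 1 size: 16384 MB"""

print(parse_numactl(sample))  # {0: [0, 1, 2, 3], 1: [4, 5, 6, 7]}
```

The resulting mapping shows directly which socket numbers to use for the numa.nodeAffinity entry and which core ranges keep a VM inside one node.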
Reserve Memory for VM
Reserving memory for a specific VM gives the VM possession of the specified memory and instructs vSphere not to reclaim the previously allocated memory when the VM frees it. The memory reservation helps improve performance by reducing scheduling and avoiding resource contention at run time. Memory reservation can be achieved as follows:
1. Specify the memory for the VM.
   a. Go to the vSphere Web Client and select the VM that needs configuration.
   b. Power off the VM.
   c. Select the VM settings that require modification.
   d. Right-click Edit Settings > Virtual Hardware tab > Memory.
   e. Enter the amount of memory in the RAM text box and select the unit, MB or GB.
   f. Click OK to submit the changes.
2. Reserve the memory for the VM.
   a. Go to the vSphere Web Client and select the VM that needs configuration.
   b. Power off the VM.
   c. Select the VM settings that require modification.
   d. Right-click Edit Settings > Virtual Hardware tab > Memory.
   e. Select Reservation.
   f. Click OK to submit the changes.
Note: When the Memory option is set to Reservation for a VM, the VM will power on only when the reservation of the specified amount of memory succeeds.
Configure Physical NIC
Interrupt moderation, which delays the delivery of interrupts for received packets to the host, is supported on most Gigabit Ethernet modules; disabling it reduces latency. It can be disabled through "esxcli" by following the steps below:
1. List the NIC module parameters available on the host using these commands:
   a. "esxcli network nic list" - returns a list of the NIC modules available on the host.
   b. "esxcli system module parameters list -m <NIC Driver Module>" - returns the list of parameters available for the NIC module provided as input.
2. Disable interrupt moderation using the following command:
esxcli system module parameters set -m <NIC Driver Module> -p "<parameter name with its value>"
For example, for an Intel 10 Gigabit driver module, use this command:
esxcli system module parameters set -m ixgbe -p "InterruptThrottleRate=0"
Note: Disabling interrupt moderation reduces latency; however, it can come at the cost of some additional CPU usage.
Configure Virtual NIC
As of the publication date of this document, VMware recommends using the VMXNET3 virtual NIC for the VM. The following parameters need to be adjusted for low latency as per the VMware guidelines.
Disable Interrupt Coalescing
1. Go to the vSphere Web Client and select the VM that needs configuration.
2. Power off the VM.
3. Select the VM settings that require modification.
4. Right-click Edit Settings > VM Options tab > Advanced.
5. Expand the Advanced options and then click Edit Configurations.
6. Look for the "ethernet<x>.coalescingScheme" field. If it is already available for every Ethernet device, edit it. If not, proceed to Step 7.
7. If an entry is not available, add a new entry ethernet<x>.coalescingScheme = "disabled", where the values x=0..n are the Ethernet device IDs available to this VM. For example, the following entry disables coalescing for Ethernet device 0:
ethernet0.coalescingScheme = "disabled"
8. Click OK to apply the new value.
Disable Large Receive Offload (LRO)
As of the publication date of this document, VMware recommends disabling LRO if the applications running require low latency and use TCP for packet transmission. Disabling LRO can be done using the "modprobe" utility and, for persistence across reboots, by specifying module parameters in the "/etc/modprobe.conf" file.
1. Add the parameter by adding the line "options vmxnet3 disable_lro=1" to the "/etc/modprobe.conf" file.
2. Reboot the server to have the parameter take effect. If a reboot is not possible, proceed to Step 3.
3. Set the parameter without rebooting the OS.
   a. Unload the VMXNET3 module.
   modprobe -r vmxnet3
   b. Disable the LRO parameter using the command:
   modprobe vmxnet3 disable_lro=1
   c. Reload the driver.
   modprobe vmxnet3
Configure VM Latency Sensitivity
It is suggested to turn on the Latency Sensitivity feature on the VM intended for use with PowerMedia XMS in order to achieve optimized performance.
Note: This setting requires a CPU reservation in MHz or GHz. If CPU is not reserved, a warning message appears on the screen asking to reserve CPU. For more information, see the Reserve CPU for VM section of this document.
To set the VM Latency Sensitivity value to High, follow the instructions below:
1. Go to the vSphere Web Client and select the VM that needs configuration.
2. Power off the VM.
3. Select the VM settings that require modification.
4. Right-click Edit Settings > VM Options tab.
5. In the Latency Sensitivity field, select High from the drop-down list.
6. Click OK to apply the new value.
[The original document includes a reference screenshot here.]
5. SR-IOV Passthrough
As of the publication date of this document, VMware suggests using the "SR-IOV Passthrough" adapter when "bare metal" server-like performance is desired in a virtualized environment. For more information, refer to the technical white paper Deploying Extremely Latency-Sensitive Applications in VMware vSphere 5.5, published by VMware, for a comparison between "Native Server", "VM with SR-IOV + Latency Sensitivity set to High", and "SR-IOV only"; as of the publication date of this document, it is available at
vSphere 5.5 and later releases support Single Root I/O Virtualization (SR-IOV). In vSphere, a VM can use an SR-IOV virtual function for networking. When SR-IOV is chosen as the network adapter, it bypasses the VMkernel for networking, reducing latency and improving CPU efficiency.
SR-IOV is a specification that allows a single PCIe physical device under a single root port to appear as multiple separate physical devices to the hypervisor or the guest operating system. On supported hardware, one or more VMs can be configured to access one or more of the logical devices directly. When SR-IOV is used with the Latency Sensitivity value set to High, VMware testing has shown that performance close to that observed on native servers can be achieved.
Note: SR-IOV can only be used if the following criteria are met:
1. The server must have input/output memory management unit (IOMMU) support, enabled in the BIOS.
2. The NIC must support SR-IOV, enabled in the BIOS.
Refer to the VMware vSphere Networking manual for more information on the SR-IOV Passthrough network adapter.
6. Recommended Reading

The following materials relate to the VMware ESXi environment and are available at their respective web links as of the publication date of this document. If a different hypervisor environment supported by PowerMedia XMS is used, such as Kernel-based Virtual Machine (KVM), XenServer, or Oracle VM, the user will need to study how to optimize that hypervisor environment.

VMware

- Deploying Extremely Latency-Sensitive Applications in VMware vSphere 5.5
- Best Practices for Performance Tuning of Latency-Sensitive Workloads in vSphere VMs
- Power Management and Performance in VMware vSphere 5.1
- Performance Best Practices for VMware vSphere

PowerMedia XMS

- PowerMedia XMS Release 2.x Documentation