Oracle VM 3: 10GbE Network Performance Tuning
An Oracle White Paper, December 2015


Introduction

This technical white paper explains how to tune Oracle VM Server for x86 to make effective use of 10GbE (10 Gigabit Ethernet) networks. This document applies to Oracle VM 3 and is mostly, but not exclusively, focused on multi-socket servers. These servers are popular in today's deployments and have NUMA (Non-Uniform Memory Access) latency effects that affect network performance.

Network performance considerations

Network performance is an essential component of overall system performance, especially in a virtual machine environment. Networks are used in separate roles: for live migration, cluster heartbeats, access to virtual disks on NFS or iSCSI, system management, and for the guest virtual machines' own network traffic. These roles, and other aspects of Oracle VM networking, are explained in Looking "Under the Hood" at Networking in Oracle VM Server for x86.

10GbE network interconnects are increasingly deployed in modern datacenters to enhance performance for network-intensive applications. It can be challenging to achieve 10G line rate or otherwise achieve expected throughput, especially when using a single transmit/receive stream. The difficulty is increased with virtualized systems, which typically add overhead and latency for virtual I/O. This is particularly challenging on systems with many CPUs and large RAM capacity. These systems are very attractive for consolidating multiple smaller physical servers into virtual machines, but they have NUMA (Non-Uniform Memory Access) properties: the memory latency (the time to access memory locations in RAM) can vary widely depending on which CPUs and memory locations are being used. This can reduce performance and scalability for high-speed networks, whose inter-packet times make them sensitive to variation in memory latency. Performance can vary depending on whether interacting processes and kernel components, and the RAM locations they access, are closely aligned on the same CPU node (socket) or core. Affinity to the same CPU nodes reduces memory latency. It is important to know the CPU topology and capabilities of the system to optimally tune for particular application performance needs.

An Example Illustrating NUMA Effects

The following example illustrates 10G network performance on two different hardware architectures. Netperf TCP_STREAM was used for measurement (a sample invocation appears after the results below). The initial results were taken before applying any tuning to improve network performance. A Linux virtual machine with 2 VCPUs and 2GB of memory was used for all tests. Network performance was measured between dom0 instances on two machines (with direct access to the physical network adapters), and between dom0 on one server and a guest virtual machine on the other server.

Non-NUMA systems with adequate system resources:
Two systems, each with 2 sockets, 4 CPU cores per socket, and 2 threads per core (a total of 16 CPU threads), and 24GB of memory; dom0 with 8 VCPUs and 2GB of memory.
dom0 -> dom0: ~9.4Gb/second
dom0(a) -> a VM running on dom0(b): ~7.8Gb/second

NUMA systems with plenty of resources:
Two systems, each with 8 sockets, ten CPU cores per socket, and 2 threads per core (160 CPUs total), and 256GB of memory; dom0 with 160 VCPUs and 5GB of memory.
dom0 -> dom0: ~5.2Gb/second
dom0(a) -> a VM running on dom0(b): ~3.4Gb/second

Results on the untuned large systems are low compared to the non-NUMA systems running the same software on servers with fewer CPUs and less memory. Throughput between dom0 environments was reduced by 45%, and between dom0 and a guest on the adjacent server by 56%. This illustrates poor scalability when tuning is not applied, even though the machines are nominally the same speed.
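For reference, the measurements above can be reproduced with a netperf TCP_STREAM test similar to the following. This is a minimal sketch only: the receiver address 10.0.0.2 and the 60-second run length are placeholders rather than values from the original test setup.

Start the netperf listener on the receiving host (dom0 or a guest):
# netserver

Run a single-stream TCP throughput test from the sending host:
# netperf -H 10.0.0.2 -t TCP_STREAM -l 60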

Performance Tuning

Several important tuning changes were made to improve performance on the larger machines. These changes reduced memory latency by restricting CPU scheduling and interrupt processing to a subset of the available CPUs. Other changes adjusted TCP parameters with values suited to fast networks; those changes are appropriate for both small and large servers.

Note: use these changes as guidelines and adjust for your hardware architecture and resources. They have not been exhaustively tested on a wide range of configurations and workloads. Changes should be individually evaluated in the context of performance requirements, with consideration of their effect on maintainability and management effort. These changes include:

Number of dom0 CPUs

On a large 8-socket x86 system with NUMA properties, the number of dom0 VCPUs was reduced from 160 to 20, which is the number of CPU threads on the first socket of the system. This limits CPU scheduling on dom0 to the first CPU socket, which ensures local memory access and L2 cache affinity. dom0 VCPUs were pinned so that virtual CPUs had strict correspondence to physical CPUs (VCPU0 -> CPU0, and so on).

The number of CPUs on a socket depends on the server's CPU topology, which can be determined by logging into the server and issuing the command xm info. This displays the number of cores_per_socket and threads_per_core. The product of these values is the number of CPU threads per socket to be used in the step below (a small script that computes it is shown at the end of this section).

To pin and change dom0's VCPUs, add

dom0_vcpus_pin dom0_max_vcpus=X

to the Xen kernel command line (in this case, X=20), and reboot. In this example, the complete line from /boot/grub/grub.conf would be

kernel /xen.gz dom0_mem=582M dom0_vcpus_pin dom0_max_vcpus=20

This had the largest effect of any of the tuning procedures described in this document. Starting with Oracle VM Server 3.2.2, a newly installed dom0 is configured with a maximum of 20 VCPUs to provide better out-of-the-box performance for large systems. There is nothing special about that number, though it covers most servers. The precise recommendation is to set the number of dom0 virtual CPUs to at most the number of physical CPUs per socket, and to pin dom0 virtual CPUs to physical CPUs on the first socket. If hyper-threading is being used, count in units of threads rather than cores.
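The per-socket thread count described above can be computed directly from xm info output. This is a hedged sketch: it assumes the cores_per_socket and threads_per_core fields appear in xm info in the usual "name : value" form.

#!/bin/bash
# Hedged sketch: derive the number of CPU threads on one socket from xm info.
# This is the value suggested above for dom0_max_vcpus on a NUMA server.
cores=$(xm info | awk '/cores_per_socket/ {print $3}')
threads=$(xm info | awk '/threads_per_core/ {print $3}')
echo "CPU threads per socket: $((cores * threads))"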

Port Interrupt Affinity

Most 10G ports use MSI-X interrupt vectors. These are spread over the available dom0 VCPUs, and at least one of them is associated with CPU0. Since CPU0 is the default interrupt CPU for many devices, an interrupt can arrive on the 10G port while the CPU is servicing an interrupt from another device and cannot be preempted. This delays handling the network interrupt until the other interrupt service routine finishes. It may be useful to move 10G port interrupt vectors to CPUs that are less busy servicing interrupts. This example shows how to change interrupt affinity.

1. Find the IRQ IDs for the ports by issuing cat /proc/interrupts | grep ethX, replacing ethX with the name of the network ports. The first column has the IRQ ID. An example of what this looks like (for a smaller server that fits on the page) is shown below:

# cat /proc/interrupts | head -1; cat /proc/interrupts | grep eth
          CPU0       CPU1       CPU2       CPU3
 35:   3562070        243    1307766        249   PCI-MSI-edge   eth4
 39:  11828156     108367     101470     108262   PCI-MSI-edge   eth0
 40:  11748516   11991143   11688127   11988451   PCI-MSI-edge   eth1

2. Find which CPU each interrupt vector is associated with. If you have only one such port, just cat /proc/irq/$irqid/smp_affinity, replacing $irqid with the value from step 1. If you have many ports you can use a script like the following:

#!/bin/bash
for irqid in `cat /proc/interrupts | grep eth | awk '{sub(":","",$1); print $1}'`; do
  affinity=`cat /proc/irq/$irqid/smp_affinity`
  echo "irqid $irqid affinity $affinity"
done

This prints the IRQ number and the bit mask representing which CPUs are associated with that interrupt, with the rightmost bit representing CPU0. For example, a value of 1 would indicate CPU 0, and a value of f would indicate CPUs 0 to 3. If you have an IRQ assigned to CPU 0, you can reassign it to a less busy CPU by replacing the contents of smp_affinity with a new bit mask. For example, the following command sets the CPU affinity for IRQ 323 to CPU 2:

# echo 00004 > /proc/irq/323/smp_affinity

This mechanism can be used to statically load-balance interrupt processing, and can be useful if a particular CPU is unevenly loaded. Alternatively, you can use the irqbalance service, which performs a similar function but does not let you assign specific IRQs to specific CPUs; static assignment can be helpful when you want to force interrupts onto a particular CPU core. If using static allocation as described above, turn off the irqbalance service by issuing

# service irqbalance stop

This change should be carefully evaluated since it involves changing Linux kernel settings. Administrators should consider whether the change is necessary to achieve performance objectives, and whether CPUs are heavily loaded due to interrupt processing. Installation standards and comfort level regarding modifying kernel settings must also be observed.
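Building the smp_affinity bit mask by hand is error-prone, so a small helper like the following can be used. This is a hedged sketch: the IRQ 323 and CPU 2 values come from the example above, and the simple arithmetic assumes a server with fewer than 64 CPUs; larger systems need the comma-separated mask format.

#!/bin/bash
# Hedged sketch: compute the smp_affinity bit mask for one CPU and apply it.
CPU=2        # target CPU, as in the example above
IRQ=323      # IRQ ID found in /proc/interrupts
MASK=$(printf "%x" $((2 ** CPU)))
echo "Assigning IRQ $IRQ to CPU $CPU (mask $MASK)"
echo $MASK > /proc/irq/$IRQ/smp_affinity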

Process Affinity

In NUMA systems it is advisable to run interacting processes (in this case Xen's netback and netfront) on the same CPU socket. This helps reduce memory latency for accessing shared data structures. To implement this, pin a VM's virtual CPUs to physical CPUs on the same socket where dom0 runs. This should be done selectively for the most performance-critical VMs to avoid putting excessive load on the first server socket's CPUs.

The recommended method is to use the Oracle VM 3 Utilities ovm_vmcontrol command to obtain and change the virtual CPU settings. The Oracle VM 3 Utilities are provided as an optional add-on to Oracle VM 3. Documentation and download instructions can be found at the Oracle VM Downloads page at http://www.oracle.com/technetwork/server-storage/vm/downloads/index.html

After installing the utilities, log into the server hosting Oracle VM Manager, and then issue commands as follows, replacing the variables for the administrator userid, password, and guest VM name as needed:

# cd /u01/app/oracle/ovm-manager-3/ovm_utils
# ./ovm_vmcontrol -u $ADMIN -p $PASSWORD -h localhost \
  -v $GUEST -c vcpuget

That displays the set of physical CPUs the guest can run on as "Current pinning of virtual CPUs to physical threads". If no CPU pinning has been done, this will be the total list of CPUs on the server. That is generally larger than the number of virtual CPUs in the guest (for example, a server with 8 CPUs hosting guests with as few as one VCPU). For example:

# ./ovm_vmcontrol -u admin -p Welcome1 -h localhost -v myvm -c vcpuget
Oracle VM VM Control utility 2.0.1.
Connected.
Command : vcpuget
Current pinning of virtual CPUs to physical threads : 0,1,2,3,4,5,6,7

To pin the guest domain to the physical CPUs on the first CPU socket, issue the following command, replacing N with one less than the number of CPUs per socket (because CPUs are counted starting from zero):

# ./ovm_vmcontrol -u $ADMIN -p $PASSWORD -h localhost \
  -v $GUEST -c vcpuset -s 0-N

On a server with 20 CPUs per socket the value would be vcpuset -s 0-19. An example is:

# ./ovm_vmcontrol -u admin -p Welcome1 -h localhost \
  -v myvm -c vcpuset -s 0-19
Oracle VM VM Control utility 2.0.1.
Connected.
Command : vcpuset
Pinning virtual CPUs
Pinning of virtual CPUs to physical threads '0-19' 'myvm' completed

In this example we pinned the guest to a number of physical CPUs larger than its number of virtual CPUs. That lets the Oracle VM scheduler dispatch the guest on any of the physical CPUs on the first socket. You can optionally specify physical CPUs to exactly match the number of virtual CPUs. This might improve Level 1 cache hit ratios, but could also lock the guest onto heavily loaded physical CPUs when less utilized CPUs are available.

Another way to specify physical CPUs, used if the Oracle VM 3 Utilities are not installed, is to locate and edit the VM configuration file by hand. First, use the Oracle VM Manager user interface to display the IDs of the repository and the virtual machine. Then log in as root to one of the servers in the pool and edit the file vm.cfg in the directory /OVS/Repositories/$RepositoryID/VirtualMachines/$VMID. Change the cpus line in vm.cfg as follows:

vcpus = 2
cpus = "1,4"

This will pin the VM's 2 VCPUs to CPU1 and CPU4. Note that the command xm vcpu-list can be used to verify that domains have the correct number of CPUs and pinning, including for dom0. The ovm_vmcontrol command is recommended because it is a documented method rather than out-of-band editing of internal file contents.

In these examples we pin guest virtual CPUs to the first socket along with dom0. That prevents NUMA overhead, but also prevents using other sockets. If this is done for all guests it would eliminate the benefit of using a server that scales to multiple sockets. Therefore, this technique should be used selectively for virtual machines with the most critical network performance requirements. Other guests can run on other CPU sockets without CPU pinning and experience better performance than if all were assigned to an overloaded first socket. Pinning CPUs is also an administrative and management overhead that should not be incurred indiscriminately.
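Where several network-critical guests need the same treatment, the vcpuset step can be scripted. The following is a hedged sketch only: the guest names, the administrator credentials, and the 0-19 CPU range are placeholders for a particular installation; the ovm_vmcontrol invocation follows the examples above.

#!/bin/bash
# Hedged sketch: pin a short list of network-critical guests to the first socket.
# Guest names and credentials below are placeholders, not values from this paper.
cd /u01/app/oracle/ovm-manager-3/ovm_utils
for GUEST in webvm1 dbvm1; do
  ./ovm_vmcontrol -u admin -p Welcome1 -h localhost -v $GUEST -c vcpuset -s 0-19
done

Then, on the Oracle VM Server hosting the guests, confirm the resulting pinning (for all domains, including dom0) with:

# xm vcpu-list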

NUMA latency and cache misses within a virtual machine can reduce performance independently of network concerns, so it is important to size a virtual machine so that it does not have an excessive number of CPUs. One best practice is to size for expected CPU loads plus some headroom. Oracle VM lets you set the number of CPUs the guest receives when it starts, and a maximum number you can adjust it to during exceptional loads. That makes it less likely that virtual CPUs straddle CPU sockets and experience NUMA latency, increases reuse of warm cache contents on CPU cores, and reduces internal lock contention for data structures that have to be serialized over multiple CPUs.

Power States

Many servers offer the ability to run with lower power consumption and generate less heat when idle or at low loads. Some servers take a long time to recover from power-saving mode, which can increase the latency of servicing network activity, so it may be worth disabling sleep modes ("C-states") or avoiding low processor power consumption modes ("P-states"). This is a server-dependent option that should be evaluated for each specific server model.

TCP Parameter Settings

Default TCP parameters in most Linux VM distributions are conservative, are tuned to handle 100Mb/s or 1Gb/s port speeds, and result in buffer sizes that are too small for 10Gb networks. Modifying these values can lead to significant performance gains on a 10Gb/s VM network. RTT and BDP are the essential values used to size TCP buffers.

Round Trip Time (RTT) is the total amount of time that a packet takes to reach a destination and get back to the source. In a Gigabit network this value is typically between 1 and 100ms, and it can be measured using ping.

Bandwidth Delay Product (BDP) is the amount of data that can be in transit at any given time. It is the product of the link bandwidth and the RTT value. Assuming a 100ms RTT: BDP = 0.1s * (10 * 10^9) bits/s = 10^9 bits, or 125,000,000 bytes, rounded up here to 134217728 bytes (128 MiB). Buffer sizes should be adjusted to permit the maximum number of bytes in transit and prevent traffic throttling.
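The same arithmetic can be scripted when sizing buffers for a different RTT or link speed. This is a hedged sketch: the 100ms RTT matches the worked example above and would normally be replaced by a value measured with ping, and the script prints the raw product rather than the rounded 128 MiB figure used in this paper.

#!/bin/bash
# Hedged sketch: compute the Bandwidth Delay Product from RTT and link speed.
RTT_MS=100                                    # round-trip time in milliseconds (measure with ping)
LINK_BPS=$((10 * 1000 * 1000 * 1000))         # 10Gb/s link speed, in bits per second
BDP_BYTES=$((LINK_BPS / 8 * RTT_MS / 1000))   # bits/s converted to bytes, scaled by RTT
echo "BDP = $BDP_BYTES bytes"                 # 125000000 here; the paper rounds up to 134217728 (128 MiB)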

The following values can be set on Linux virtual machines as well as dom0.

Maximum receive socket buffer size (size of BDP)
# sysctl -w net.core.rmem_max=134217728
Maximum send socket buffer size (size of BDP)
# sysctl -w net.core.wmem_max=134217728
Minimum, initial, and maximum TCP receive buffer size in bytes
# sysctl -w net.ipv4.tcp_rmem="4096 87380 134217728"
Minimum, initial, and maximum TCP send buffer space allocated
# sysctl -w net.ipv4.tcp_wmem="4096 65536 134217728"
Maximum number of packets queued on the input side
# sysctl -w net.core.netdev_max_backlog=300000
Auto tuning
# sysctl -w net.ipv4.tcp_moderate_rcvbuf=1

Figure 1: Use the above commands to adjust TCP parameters

These settings take effect immediately but do not persist over a reboot. To make these values permanent, add the following to /etc/sysctl.conf:

net.core.rmem_max = 134217728
net.core.wmem_max = 134217728
net.ipv4.tcp_rmem = 4096 87380 134217728
net.ipv4.tcp_wmem = 4096 65536 134217728
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_moderate_rcvbuf = 1

Figure 2: Add these lines to /etc/sysctl.conf

Additionally, for Oracle VM Server for x86 prior to version 3.2, netfilter should be turned off on bridge devices. This is done automatically in version 3.2 and later.

# sysctl -w net.bridge.bridge-nf-call-iptables=0
# sysctl -w net.bridge.bridge-nf-call-arptables=0
# sysctl -w net.bridge.bridge-nf-call-ip6tables=0

To see which settings are in effect, type sysctl followed by the parameter name, for example:

# sysctl net.core.rmem_max

Jumbo Frames

The default MTU value is 1500, and most 10G ports support MTU values up to 64KB. An MTU value of 9000 was adequate to improve performance and make it more consistent. Jumbo frames have been supported in Oracle VM Server for x86 starting with Release 3.1.1. Remember that if you change the MTU, the new value must be set on all devices (such as routers) between the communicating hosts. Note that some switch configurations preclude the use of jumbo frames (for example QoS, trunking, or even VLANs), depending on switch vendor or model.

NIC Offload Features

Offload features supported by the NIC can reduce CPU consumption and lead to significant performance improvements. The settings below show the values used during testing. Note that Large Receive Offload (LRO) must be left in its default state; if turned on, it will automatically be set to off when a port is added as a bridge interface. An example of these settings is shown below:

# ethtool -k ethX
Offload parameters for ethX:
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp-segmentation-offload: on
udp-fragmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off

Figure 3: Illustration of ethtool settings
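For completeness, the following shows one way to apply a 9000-byte MTU and the offload settings listed above on a single port. This is a hedged sketch: ethX is a placeholder device name, the ip and ethtool commands only last until reboot (persistent settings belong in the distribution's interface configuration files), and the switch path must already allow jumbo frames.

Apply a 9000-byte MTU immediately (not persistent across reboot)
# ip link set dev ethX mtu 9000
Enable TCP segmentation offload, generic segmentation offload, and generic receive offload
# ethtool -K ethX tso on
# ethtool -K ethX gso on
# ethtool -K ethX gro on
Re-check the resulting offload settings
# ethtool -k ethX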

Results

The following results were observed after applying the above tuning options.

dom0 -> dom0: ~9.5Gb/s. This is very close to wire speed and to the baseline non-NUMA value, and 83% faster than the baseline NUMA value.

dom0(a) -> a VM running on dom0(b): ~8.2Gb/s. This is 5% faster than the non-NUMA baseline, and about 2.4 times (141% faster than) the baseline NUMA value.

VM on one host -> a VM on a different host: ~8.2Gb/s, showing high bandwidth even when going through two bridged connections.

Depending on your system architecture, network, resources, and application requirements, you may find that not every recommendation in this paper is needed. On NUMA systems, the most important action is reducing the number of dom0 VCPUs so that they all fit on one CPU socket (node).

Summary

Network performance is a critical component of total system performance, and it can vary greatly depending on system configuration and tuning options. Performance on large systems with NUMA behavior can be dramatically improved by applying simple tuning procedures. These tuning methods can be used with Oracle VM Server for x86 3.1.1 and later.

There is no single set of tuning rules that works in all situations. Tuning recommendations must be balanced against other aspects of total system performance, and tested in realistic configurations to determine their actual effectiveness. It may be sufficient to apply a few of the tuning procedures listed and stop when acceptable performance is achieved. Tuning has to be carefully evaluated in the context of installation standards, supportability, and administrative best practices. With these controls in place it is possible to substantially improve performance.

Oracle VM 10GbE Network Performance Tuning
December 2015
Authors: Adnan Misherfi, Jeff Savit

Oracle Corporation
World Headquarters
500 Oracle Parkway
Redwood Shores, CA 94065
U.S.A.

Worldwide Inquiries:
Phone: +1.650.506.7000
Fax: +1.650.506.7200
oracle.com

Copyright 2015, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group. 0615