AIX NFS Client Performance Improvements for Databases on NAS




AIX NFS Client Performance Improvements for Databases on NAS
October 20, 2005
Sanjay Gulabani, Sr. Performance Engineer, Network Appliance, Inc. gulabani@netapp.com
Diane Flemming, Advisory Software Engineer, IBM Corporation dianefl@us.ibm.com
Slide 1

Agenda
- Why Databases on NAS?
- AIX NFS improvements - CIO
- Simple IO Performance Tests
- OLTP Performance Comparison
- Future work
- Conclusion
- Q&A
Slide 2

Databases on SAN
- Databases do block-based I/O and prefer raw blocks
- But most admins still put a volume manager and a file system (JFS2, VxFS, UFS) over HW RAID storage
- Reason: simplicity -- easier backups and provisioning
Slide 3

NAS Myths
- Myth: NFS consumes more CPU for databases
- Reality: most NFS client performance problems with DB apps are due to kernel locking and are fixed in good NFS clients
Slide 4

NAS Myths
- Myth: Ethernet is slower than SAN
- Reality: Ethernet is 1Gb, FC is 2Gb. 4Gb FC is almost here, but so is 10GbE. Ethernet is likely to overtake FC bandwidth soon, and cost-effectively
- It's easy to set up multiple wires to match FC bandwidth with just 1GbE today
- Storage latencies at the database layer are measured in msecs; the differences between SAN and NAS latencies are in usecs: 78 usec more for 8K blocks is a < 0.1 msec difference!
Slide 5
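The latency claim on this slide is simple arithmetic; a quick sketch, where the 5 ms database-layer read latency is an illustrative assumption (the slide only says latencies are "measured in msecs"):

```python
# From the slide: NFS adds roughly 78 microseconds per 8K block versus SAN.
nas_overhead_us = 78

# Assumed typical database-layer read latency (illustrative): 5 ms = 5000 us.
db_latency_us = 5000

overhead_ms = nas_overhead_us / 1000                  # 0.078 ms, i.e. < 0.1 ms
relative_overhead = nas_overhead_us / db_latency_us   # fraction of total latency

print(f"{overhead_ms:.3f} ms extra per 8K block, "
      f"{relative_overhead:.1%} of a {db_latency_us / 1000:.0f} ms read")
```

With these numbers the NAS overhead is well under a tenth of a millisecond, a small fraction of a typical database read.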

Why NAS?
- Networking is simpler
- Networking promotes sharing; sharing = higher utilization (~grid storage)
- NAS solutions are cheaper than SAN
- No one has won against Ethernet! Seriously, even block-based storage is moving to Ethernet (iSCSI)
Slide 6

Why DB on NAS?
- Complex storage management is offloaded from the database servers to storage appliances
- Performance vs. manageability
- Database requirements differ from traditional NFS workloads (home directories, distributed source code): the single-writer lock can be a problem
Slide 7

Industry Support for NAS
- Oracle On-Demand hosts ERP+CRM on 1000+ Linux servers on NetApp NFS
- Yahoo runs databases on NetApp
- Sun's 2003 presentation by McDougall and Colaco promoted NFS
- NetApp is the #1 NAS leader at $1.59b (FY05)
- IBM AIX 5.3 ML-03 improvements now
- No one got fired for buying from IBM! (IBM resells NetApp products)
Slide 8

Oracle's Austin Datacenter
- More than 15,000 x86 servers
- 3.1 Petabytes of NetApp storage
- 100s of mission-critical hosted Oracle apps
Source: Oracle Magazine Mar/Apr 2005
Slide 9

I/O, I/O, It's Off to Disk I Go
- Raw logical volumes: no memory management overhead
- Buffered I/O: variety of kernel services
- Direct I/O: no memory management overhead; defaults to synchronous accesses over NFS
- Concurrent I/O: DIO plus no inode/rnode contention
Slide 10
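On the AIX client these I/O modes are selected per mount; a minimal sketch, assuming an AIX 5.3 ML-03 or later client (the filer name, export path, and mount point are hypothetical):

```shell
# Buffered I/O (the default): data passes through the client page cache.
mount -o vers=3,proto=tcp filer1:/vol/oradata /oradata

# Direct I/O: bypasses the page cache; over NFS, accesses default to synchronous.
mount -o dio,vers=3,proto=tcp filer1:/vol/oradata /oradata

# Concurrent I/O: DIO semantics plus relaxed inode/rnode locking, letting
# multiple threads read and write the same file in parallel (safe only for
# applications, such as databases, that serialize their own writes).
mount -o cio,vers=3,proto=tcp filer1:/vol/oradata /oradata
```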

Metrics: NFS File Transfer
- CPU utilization and throughput in KB/sec
- Options: Default (buffered I/O) and CIO
- AIX Client and Server: p630 1.45GHz 4-way, GbE 1500-byte MTU, AIX 5.3 ML-03
- Demonstrates algorithm behavior: atomic, single-threaded
Slide 11

[Chart: NFS File Transfer, Sequential Reads and Writes -- throughput (KB/sec) and CPU utilization (%) for Buffered I/O vs. CIO on Seq Write and Seq Read]
256MB File, NFS V3, TCP, auth=sys, 32KB RPCs
Slide 12

Metrics: Simple I/O (SIO) Load Generator
- CPU utilization, IOPS, throughput, etc.
- Options: NFS client mount options to specify the I/O mode: Default (buffered I/O), DIO, CIO
- NetApp Filer: FAS880, ONTAP 6.5.1
- AIX Client: Model p630, 1.45 GHz 4-way, AIX 5.3 ML-03, GbE 1500-byte MTU
Slide 13

[Chart: SIO throughput, 50% writers, 256MB file -- KB/sec vs. number of threads (1 to 256) for Buffered I/O, CIO, and DIO]
256MB File, NFS V3, TCP, auth=sys, AIX v5.3 ML-3 client
Slide 14

[Chart: SIO throughput, 100% writers, 256MB file -- KB/sec vs. number of threads (1 to 256) for Buffered I/O, CIO, and DIO]
256MB File, NFS V3, TCP, auth=sys, AIX v5.3 ML-3 client
Slide 15

[Chart: SIO scaled throughput (KBps/CPU) using CIO, 256MB file -- vs. number of threads (1 to 256) for 100% writers and 50% writers]
256MB File, NFS V3, TCP, auth=sys, AIX v5.3 ML-3 client
Slide 16

NFS FT and SIO Summary
- CIO is not appropriate for single-threaded sequential read/write workloads
- CIO is aimed at workloads with higher concurrency, such as databases
- Significant gain in raw throughput using CIO vs. Buffered I/O for workloads with higher concurrency
- CPU utilization illustrates an issue with TCP socket lock contention as concurrency increases
Slide 17

OLTP Performance
- AIX 5.2 ML-5 and AIX 5.3 ML-03 (beta)
- PowerPC pSeries 650 (2 x 1.2 GHz, 4 GB RAM)
- Oracle 10.1
- 1GbE for NFS or iSCSI, or a 2Gb FCP card
- FAS940c, 48 disk spindles (4 x DS14), 144GB 10K RPM, ONTAP 6.5.1
Slide 18

Transactions on AIX 5.2, 5.3, iSCSI, FCP
Chart 1: Transactions (OETs) per protocol
  kernel NFS 5.2 @60u:   4,710
  NFS 5.3* 60u:         11,817
  iSCSI raw 5.3* 60u:   12,171
  FCP raw 5.3* 60u:     14,139
OETs = Order Entry Transactions (an Oracle OLTP benchmark, approx. 2:1 read-to-write ratio)
Slide 19

Host CPU Utilization
Chart 2: Transactions and CPU utilization per protocol
  kernel NFS 5.2 @60u:   4,710 OETs, 49% CPU
  NFS 5.3 60u:          11,817 OETs, 98% CPU
  iSCSI raw 5.3 60u:    12,171 OETs, 90% CPU
  FCP raw 5.3 60u:      14,139 OETs, 88% CPU
Slide 20

Protocol Efficiencies
Computed as (%CPU used / OETs) * K, where K is a constant chosen so that the NFS CPU cost per K transactions is 1.0; the relative cost for iSCSI and FCP is computed using the same K.
Chart 3: Relative CPU cost per protocol for a fixed number of transactions (lower is better)
  NFS 5.3 60u:        1.00
  iSCSI raw 5.3 60u:  0.89
  FCP raw 5.3 60u:    0.75
Slide 21
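The normalization above can be reproduced directly from the transactions and CPU figures in Charts 1 and 2; a short sketch:

```python
# Measured OETs and host CPU utilization (%) from Charts 1 and 2.
results = {
    "NFS 5.3":   {"oets": 11817, "cpu": 98},
    "iSCSI raw": {"oets": 12171, "cpu": 90},
    "FCP raw":   {"oets": 14139, "cpu": 88},
}

# Choose K so that the NFS CPU cost per K transactions is exactly 1.0.
nfs = results["NFS 5.3"]
k = nfs["oets"] / nfs["cpu"]

# Relative CPU cost per protocol for a fixed number of transactions.
relative_cost = {name: (r["cpu"] / r["oets"]) * k for name, r in results.items()}

for name, cost in relative_cost.items():
    print(f"{name}: {cost:.2f}")
# NFS 5.3: 1.00, iSCSI raw: 0.89, FCP raw: 0.75 -- matching Chart 3
```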

Oracle Block Read Response Time
These are not typical response times: a random workload was used. A filer with more cache, or a more sequential read mix, would improve the average block read response time.
Chart 4: Oracle block read response time (lower is better)
  kernel NFS 5.2 @60u:  29
  NFS 5.3 60u:          13
  iSCSI raw 5.3 60u:    14
  FCP raw 5.3 60u:      12
Slide 22

Future Work
- Testing and analysis on systems with higher numbers of CPUs
- Further investigation of the socket lock scaling issue on AIX
Slide 23

Conclusions
- AIX 5.3 ML-03 with the new Concurrent I/O (CIO) mount option delivers OLTP performance on NFS comparable to that of the block-based protocols iSCSI and FCP
- Don't be afraid to deploy databases on AIX NFS; we will support you
- AIX NFS performance has been proven on systems with up to 4 CPUs
Slide 24

References
- AIX Performance with NFS, iSCSI and FCP Using an Oracle Database on NetApp (white paper): http://www.netapp.com/library/tr/3408.pdf
- NetApp Best Practices Paper: http://www.ibm.com/servers/storage/support/nas/
- SIO tool download: http://now.netapp.com/now/download/tools/sio_ntap/index.shtml
- Improving Database Performance with AIX Concurrent I/O: http://www-03.ibm.com/servers/aix/whitepapers/db_perf_aix.pdf
Slide 25

Questions/Answers Slide 26