Russ Fellows, Evaluator Group
SNIA Legal Notice

The material contained in this tutorial is copyrighted by the SNIA unless otherwise noted. Member companies and individual members may use this material in presentations and literature under the following conditions:
- Any slide or slides used must be reproduced in their entirety without modification
- The SNIA must be acknowledged as the source of any material used in the body of any document containing material from these presentations

This presentation is a project of the SNIA Education Committee. Neither the author nor the presenter is an attorney, and nothing in this presentation is intended to be, or should be construed as, legal advice or an opinion of counsel. If you need legal advice or a legal opinion, please contact your attorney. The information presented herein represents the author's personal opinion and current understanding of the relevant issues involved. The author, the presenter, and the SNIA do not assume any responsibility or liability for damages arising out of any reliance on or use of this information. NO WARRANTIES, EXPRESS OR IMPLIED. USE AT YOUR OWN RISK.
Storage is the Heart of VDI
- The network plays a key role, but is typically not the bottleneck in local VDI deployments
VDI Guidance
Vendors provide good information for server and network sizing, but guidance regarding storage for VDI is limited:
- Capacity: varies for pooled / persistent (about 20 GB per persistent desktop)
- Performance: use 8 to 12 IOPS per user as an average

The problem: workload claims are inaccurate
- Claims of 8 IOPS per client are highly generalized
- Read / write ratios shift toward read for heavier use
- A real VDI I/O is different from a synthetic I/O: a real I/O can be up to 2 MB, while synthetic I/O tests often use 512 bytes
- Queuing theory dictates 2x headroom is needed to manage peaks
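The sizing rules of thumb above (8 to 12 IOPS per user, 2x headroom for peaks, roughly 20 GB per persistent desktop) can be combined into a back-of-envelope calculator. This is a minimal sketch using the tutorial's own figures; the defaults are illustrative, not vendor guidance.

```python
def vdi_storage_sizing(users, iops_per_user=10, headroom=2.0,
                       gb_per_desktop=20):
    """Back-of-envelope VDI storage sizing using common rules of
    thumb: 8-12 IOPS per user on average (10 used here as a
    midpoint), 2x headroom for peak loads per queuing theory, and
    ~20 GB per persistent desktop. All defaults are assumptions."""
    avg_iops = users * iops_per_user
    peak_iops = avg_iops * headroom      # provision for peaks, not averages
    capacity_gb = users * gb_per_desktop
    return {"avg_iops": avg_iops,
            "peak_iops": peak_iops,
            "capacity_gb": capacity_gb}

# Example: a 1,000-seat persistent deployment
print(vdi_storage_sizing(1000))
# {'avg_iops': 10000, 'peak_iops': 20000.0, 'capacity_gb': 20000}
```

The key point the calculation makes concrete: sizing for the 10,000 average IOPS rather than the 20,000 peak is exactly the mistake the "workload claims are inaccurate" bullet warns against.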
Barriers to VDI Adoption
- May require active network connectivity (local mode removes this limitation)
- Capital investment: storage costs may exceed expectations; capital costs typically exceed the traditional desktop approach; storage configuration is critical to cost and performance
- Concerns over technology or approach: users dislike the lack of control (particularly with pooled desktops); concerns over performance and flexibility
VDI STORAGE CONSIDERATIONS
The impact of VDI on storage
VDI Architectural Choices
- Persistent desktops: fully personalized; individual application stacks for each user
- Linked clones: could not be personalized (persona support added with View 5); uses a snapshot of a VM as the baseline desktop instance; builds the remainder of the desktop pool from the gold image
- Tradeoffs: space and performance efficiency with linked clones, but users want personalization (some need customized apps)
Persona Layering
[Diagram: desktop images composed from layered customization objects. Read-only layers at the base: an OS pool (Win7 x86, Win XP, Win7 x64) and an application pool (MS Office 2010, MS Office 2003, Adobe CS5, Visual Studio, Firefox, Symantec AV). Writeable per-user layers on top: user apps, user data, and user persona.]
Performance Considerations
- Server HW: sizing guidelines are a good starting point
- Hypervisors & brokers: currently 2 major hypervisors and 2 brokers in the market
- Storage: many issues (covered next)
- Networks:
  - LANs: 1 Gb to the desktop is sufficient in most cases
  - SANs: a dedicated storage network for iSCSI, NFS, FC or FCoE is part of storage considerations
  - WANs: remote VDI use can be problematic without disconnected mode or checkout of instances
STORAGE PERFORMANCE FOR VDI
VDI Storage Performance
Existing deployments show storage is critical for performance.
Issues:
- Difficult to rationalize vendor claims: vendors claim a cost per VDI user, BUT there are no common measurements, configurations or workloads behind those claims
- Existing VDI tools test the entire system: they require extensive server, network and software setup, at considerable expense (100s of servers)
- Existing storage benchmarks do not recreate the VDI workload
Actual VDI Transfer Sizes
[Chart not reproduced in text]
The VDI POC Dilemma
Proof of concept issues:
- A proof of concept can be costly
- Typically requires all server, network and storage components
- Extensive setup time (can be multiple weeks)
Recommendation:
- Full tools: for large projects where network, CPU and memory are issues, use existing tools
- Storage-only VDI testing: when storage is the primary concern, or where time and resources are constrained
VDI Performance Testing

|                    | VDI-IOmark                     | Login VSI       | View Planner         | SPC                       |
|--------------------|--------------------------------|-----------------|----------------------|---------------------------|
| What it tests      | Storage                        | Entire system   | Entire system        | Storage                   |
| Workload           | 100% real VDI                  | 100% real VDI   | 100% real VDI        | Non-VDI                   |
| Cost               | No cost for users; vendors pay | Cost to license | VMware partners only | Cost to license & publish |
| Equipment required | Low                            | High            | High                 | Low                       |
| Setup time         | Low                            | High            | High                 | Low                       |
VDI Storage Benchmark
VDI-IOmark: a storage-specific benchmark for VDI
- Tests storage only: accurate workloads based on actual VDI users; uses I/O replay to simulate storage I/O patterns
- Storage agnostic: supports any storage supported by the hypervisor
- Reduces infrastructure: 10x reduction in capital requirements and test setup; each server (12 CPUs, 96 GB RAM, multiple I/O ports) can test up to 1,000 VDI users
VDI-IOmark Methodology
- Benchmark creation: a real-world VDI environment is used for data capture; actual I/O is captured across VDI configurations and workloads
- Benchmark run: a driver replays the workloads (I/O replay); the applications themselves are not required
- Result reporting: results indicate the number of users supported; benchmark runs and results are audited by EGI for consistency; storage configuration options are included in the report
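The I/O-replay idea behind this methodology can be illustrated with a small sketch: read a captured trace of (operation, offset, size) records and reissue them against a target file or device, reproducing the storage access pattern without running any of the original applications. The trace format here is hypothetical; real capture tools record much more (timestamps, queue depth, per-VM context).

```python
import os

def replay_trace(trace, target_path):
    """Minimal I/O-replay illustration: reissue captured
    (op, offset_bytes, size_bytes) records against a target file.
    'R' performs a read at the offset; anything else performs a
    write of a dummy zero payload. Trace format is an assumption
    for illustration, not the VDI-IOmark format."""
    fd = os.open(target_path, os.O_RDWR)
    try:
        for op, offset, size in trace:
            os.lseek(fd, offset, os.SEEK_SET)
            if op == "R":
                os.read(fd, size)
            else:
                os.write(fd, b"\x00" * size)
    finally:
        os.close(fd)

# Usage: replay a tiny synthetic trace against a 1 MiB scratch file
with open("scratch.img", "wb") as f:
    f.write(b"\x00" * 1_048_576)
replay_trace([("W", 0, 4096), ("R", 0, 4096), ("W", 65536, 8192)],
             "scratch.img")
```

Because only the I/O pattern is replayed, the driver host needs no VDI software stack, which is where the infrastructure reduction on the previous slide comes from.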
REAL WORLD VDI EXAMPLES
Fortune 50 Firm #1
- Currently has 10K seats, wants 50K within 1 year
- Uses the concept of pods as a scalable unit: 1 pod = 1 rack, supporting approximately 3,000 VDI instances; capex = $2.3M, with storage about 33% of total cost
- Uses predominantly persistent VDI images
Goals:
- Image management is an issue; investigating cloning / layering and sees it as promising; wants improved image management
- Lower storage to 25% of total cost
- Wants virus scan offload with minimal storage impact
Fortune 500 Firm #2
- Using VDI for a mixture of employees: both office workers and engineers; office = pooled, engineers = persistent
- Issue: engineering design processes stress storage during compilations
- Solution: overprovision memory, use swap to SSD, and a RAM drive in the guest OS for compile space
Real World Findings
Storage was the major cost and performance driver:
- Products provide unknown price / performance
- Must overprovision spindles to achieve performance
VDI applications:
- Generally consistent apps for office workers (MS Office)
- Understand the system-wide impact of special-purpose apps
Desires:
- Improve price / performance of storage
- Modular, easy-to-deploy building blocks to scale
- Improve POC and performance validations
VDI RECOMMENDATIONS
VDI Networks
- LAN issues: consider a flat, layer-2 network to alleviate the multi-hop and ISL limitations of traditional LANs
- SAN issues: a dedicated SAN is not required, but dedicated QoS is. Storage protocols do matter:
  - FC has excellent latency with low CPU overhead
  - NFS provides good protocol parallelism and access to .vmdk files
  - iSCSI can provide good performance, but configuration is critical
- WAN issues: WAN access is still a factor for VDI; consider alternatives
VDI Memory & Storage
Memory overuse can have a dramatic impact on performance and storage utilization; swapping / paging is almost always bad.
- Use a Windows paging file if the hypervisor doesn't permit memory overcommit or doesn't support swap to SSD
- Do not use Windows paging if the hypervisor supports overcommit and swap to SSD: the hypervisor can use memory more efficiently, and swap to SSD is much faster than OS paging
Hypervisor Memory Mgmt.
- Page sharing: multiple VMs share a single page of memory
- Memory overcommit: uses the guest OS's native memory management indirectly
- Memory compression: on-the-fly compression on a per-page basis
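The page-sharing idea can be made concrete with a sketch: hash every page-sized chunk of each VM's memory image and keep only one physical copy per unique page, which is essentially what transparent page sharing does across VMs. This is an illustration of the concept, not any hypervisor's implementation.

```python
import hashlib

PAGE = 4096  # typical x86 page size

def page_sharing_savings(vm_memory_images):
    """Estimate page-sharing savings: hash every 4 KiB page across
    all VM memory images (given as bytes) and count how many unique
    pages would actually need physical backing.
    Returns (total_pages, unique_pages)."""
    seen = set()
    total = 0
    for image in vm_memory_images:
        for i in range(0, len(image), PAGE):
            total += 1
            seen.add(hashlib.sha256(image[i:i + PAGE]).digest())
    return total, len(seen)

# Two mostly-idle guests share their zeroed / identical OS pages:
vm_a = b"\x00" * (8 * PAGE)                         # 8 zero pages
vm_b = b"\x00" * (6 * PAGE) + b"\x01" * (2 * PAGE)  # 6 zero + 2 other
print(page_sharing_savings([vm_a, vm_b]))
# (16, 2): 16 logical pages collapse to 2 physical pages
```

This is also why pooled desktops booted from the same image share so well: their memory contents start out nearly identical.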
Hypervisor I/O Overhead
- Hypervisor queuing can occur at high loads: instances have been seen where the hypervisor kernel adds 200 to 500 ms of delay to I/O
- SSD storage may highlight inefficiencies: fast storage makes other components look slow
Recommendations:
- Spread the workload over more devices or files
- Storage access protocols have an impact (FC, iSCSI, NFS)
- Use multiple SCSI controllers and multiple paths
- Use raw devices as a last resort (RDM in VMware, direct LVM, pass-through)
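One way to spot stack-added latency like this is to measure per-I/O latency inside the guest and compare it against the array's native figures; a large gap points at queuing in the hypervisor or protocol stack. A minimal sketch (note that ordinary reads may be served from the page cache; on a real test you would use O_DIRECT against a raw device to reach the array):

```python
import os
import random
import statistics
import tempfile
import time

def read_latency_ms(path, io_size=4096, count=100):
    """Sample per-read latency in milliseconds at random offsets of
    a file. Comparing guest-measured latency against the storage
    array's native latency exposes delay added in between."""
    size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)
    samples = []
    try:
        for _ in range(count):
            off = random.randrange(0, max(1, size - io_size))
            t0 = time.perf_counter()
            os.lseek(fd, off, os.SEEK_SET)
            os.read(fd, io_size)
            samples.append((time.perf_counter() - t0) * 1000.0)
    finally:
        os.close(fd)
    return statistics.median(samples), max(samples)

# Demo on a 4 MiB scratch file (cached, so latencies will be tiny)
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\x00" * (4 * 1024 * 1024))
    scratch = f.name
median_ms, worst_ms = read_latency_ms(scratch)
print(f"median {median_ms:.3f} ms, worst {worst_ms:.3f} ms")
```

The median-versus-worst spread is the useful signal: queuing at high load shows up as a long tail, not as a shifted median.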
Performance Requirements
Architect for I/O optimization:
- Persistent vs. non-persistent: understand the storage impact
- Know when to use hypervisor tools vs. storage tools
Requires use of solid state storage:
- On a $ / IOPS basis, SSD and flash are less expensive than spinning disk
- Use of solid state can be limited to the master image; use traditional storage for changes (i.e. writes)
Caching appliances may improve performance. Virtualization- and VDI-specific storage vendors include Whiptail, I/O Turbine, Virsto, Atlantis Computing, Nimble Storage, Tintri, etc.
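The $ / IOPS claim is easy to sanity-check with illustrative numbers. The prices and IOPS figures below are assumptions chosen to be representative of the era, not quotes.

```python
def cost_per_iops(price_usd, iops):
    """Dollars per delivered IOPS for a single device."""
    return price_usd / iops

# Hypothetical but representative device figures (assumptions):
hdd = cost_per_iops(300, 180)       # 15K RPM SAS HDD: ~180 IOPS
ssd = cost_per_iops(1200, 40_000)   # enterprise SSD: ~40,000 IOPS
print(f"HDD: ${hdd:.2f}/IOPS  SSD: ${ssd:.3f}/IOPS")
# Roughly $1.67/IOPS for the HDD vs $0.030/IOPS for the SSD
```

The SSD costs 4x more per device but delivers over 200x the IOPS, which is why overprovisioning spindles to hit a performance target (as the real-world findings above describe) is usually the expensive path.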
VDI Storage Best Practices
- Use clones when possible: either pools or linked clones on the VM, or utilize storage writeable clones
- Separate the master image from other data, and ensure the master image resides on solid state (either VM or storage cloning may work)
- Maintain differences in a separate disk area: swap / page area, user profile data, etc.; this data should be placed on spinning media
VDI Best Practices, Cont.
- Separate OS and user data (good advice generally)
- Utilize some form of writeable clones: layering technologies, hypervisor-based clones, or storage writeable clones
- Utilize VAAI storage APIs if available
- Achieve space efficiency via clones or thin provisioning
- MUST utilize SSD for some portion of data: using clones concentrates read I/Os; use solid state for golden-image read data (10 to 20 GB)
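The tiering advice above translates into a small capacity split: one shared golden image on SSD, with per-user deltas, swap and profile data on spinning media. The golden-image size comes from the 10 to 20 GB figure above; the per-user figures are illustrative assumptions.

```python
def tiered_capacity(users, golden_image_gb=15, delta_gb_per_user=2,
                    swap_gb_per_user=2, profile_gb_per_user=1):
    """Split pooled-VDI capacity across tiers per the best
    practices: a single shared golden image on SSD (read-heavy,
    ~10-20 GB total), with per-user write data (deltas, swap,
    profiles) on spinning media. Per-user sizes are assumptions."""
    ssd_gb = golden_image_gb  # one copy, shared by all clones
    hdd_gb = users * (delta_gb_per_user + swap_gb_per_user
                      + profile_gb_per_user)
    return {"ssd_gb": ssd_gb, "hdd_gb": hdd_gb}

print(tiered_capacity(1000))
# {'ssd_gb': 15, 'hdd_gb': 5000}
```

Under these assumptions, a 1,000-user pool needs only about 15 GB of solid state to absorb the concentrated read I/O, which is what makes the SSD requirement affordable.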
Storage for VDI
Storage performance:
- Solid state as storage
- Wide striping
- VAAI
- Large cache (read & write)
- Auto-tiering
- Do not use hypervisor snapshots
Nice to have:
- Storage snapshots (vs. hypervisor snapshots, which impact performance)
Storage efficiency:
- Layering technologies
- Writeable storage clones
- Thin provisioning
Little impact:
- Deduplication (if using clones or pooled VDI instances)
Real World VDI Conclusions
Ranked impact of choices on performance: 1) architecture, 2) storage, 3) hypervisor & broker, 4) networks, 5) server HW
- First, understand the architectural implications: pools, persistent and layered images, and their storage implications
- Next, optimize storage: investigate methods to reduce storage capacity; optimize for performance (SSD, tiering, caching, etc.)
- Then choose a hypervisor & broker
- Finally, optimize the network to support these choices
Attribution & Feedback
The SNIA Education Committee would like to thank the following individuals for their contributions to this Tutorial.
Authorship history:
- Russ Fellows: original presentation, Fall 2011
- Updates: new layering, hypervisor performance and case study material added
Please send any questions or comments regarding this SNIA Tutorial to tracktutorials@snia.org
For more information contact: Russ Fellows, russ@evaluatorgroup.com
THANK YOU!