Implementing Enterprise Disk Arrays Using Open Source Software Marc Smith Mott Community College - Flint, MI Merit Member Conference 2012

Mott Community College (MCC)
Mott Community College is a mid-sized community college located in Flint, Michigan.
o ~800 faculty and staff
o ~12,000 students
o ~3,500 computers (academic + administrative)

Who Am I?
Marc Smith is the Manager of Computing Services at MCC. He received a B.S. in computer science from the University of Michigan-Flint and, prior to his current management role, served as a systems engineer. He has extensive experience in Linux system administration, virtualization (server and desktop), and Storage Area Network (SAN) technologies.

History
November 2010: Learned about a new VMware View feature that allows replicas to live on a different datastore, preferably an SSD-backed one, to boost VDI performance.
December 2010: Expensive SSD-backed disk arrays drove us to look at other options; discovered an open source project called SCST.
January 2011: Implemented a Dell R710 with (6) SATA SSDs, a PERC H700 controller, (2) Fibre Channel HBAs, and Gentoo Linux + SCST as a pilot disk array for VDI.
May 2011: The SCST disk array pilot was so successful that we decided to implement two additional (24) slot SCST-based disk arrays and put all VDI VMs on SSD-based storage.
December 2011: With the VDI implementation growing rapidly, plans to add additional SSD disk arrays are in the works...

New Solution
The SCST + Gentoo Linux disk arrays were working great, albeit with a few issues:
o Management - All CLI / configuration-file driven, with no UI for provisioning storage; a lack of other personnel @ MCC with the skill set required to maintain the arrays.
o Updates - No good/easy method for controlled updates (`emerge --sync`) and no simple solution for rolling back.
o OS Disk - Wasting (2) precious slots in the disk array chassis for the boot / OS volume (RAID1).
Other options?
o Openfiler - A "Unified Storage" open source software solution; limited block-level (via SCST) storage support, much more focused on NAS (e.g., CIFS, NFS, etc.).

ESOS Is Born!
We didn't have the tool for the job, so we decided to develop one ourselves: ESOS - Enterprise Storage OS
o A quasi Linux distribution that includes all of the software you need to set up a "storage server" (e.g., a disk array).
o Includes the Linux kernel (3.x), SCST, glibc, BusyBox, QLogic FC HBA firmware, RAID controller configuration utilities (e.g., MegaCLI), and many more utilities/tools that are needed/useful for a block-level storage solution.

ESOS Platform Features
ESOS is memory resident -- it boots off a USB flash drive, and everything is loaded into RAM. If the USB flash drive fails, ESOS will send an alert email; you can simply build a new ESOS USB flash drive, replace the failed drive, and sync the configuration.
Kernel crash dump capture support: if the ESOS Linux kernel happens to panic, the system will reboot into a crash dump kernel, capture the /proc/vmcore file to the esos_logs filesystem, and finally reboot back into the production ESOS kernel -- all automatically. ESOS sends an email alert on system start-up and checks for any crash dumps.
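
The crash-capture flow described above follows the standard Linux kdump/kexec pattern. A minimal sketch of how such a flow is typically wired up -- the kernel image names and paths here are illustrative assumptions, not ESOS's actual ones:

    # Reserve memory for the dump kernel via the boot command line: crashkernel=128M
    # Stage a panic (crash dump) kernel so it takes over if the production kernel panics:
    kexec -p /boot/vmlinuz-dump --initrd=/boot/initramfs-dump.img \
        --append="root=/dev/ram0 irqpoll maxcpus=1 reset_devices"
    # Inside the dump kernel, an init script saves the core and reboots:
    cp /proc/vmcore /mnt/esos_logs/vmcore-$(date +%Y%m%d-%H%M%S)
    reboot    # back into the production ESOS kernel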

ESOS Platform Features (cont'd)
Two operating modes: Production (default) & Debug. In "Production" mode, the performance build of SCST (make 2perf) is used. If you find you're having a problem and not getting sufficient diagnostic logs, simply reboot into "Debug" mode (the full SCST debug build, make 2debug) to get additional log data.
Enterprise RAID controller CLI configuration tools: popular RAID controller CLI tools are included by default with ESOS (e.g., LSI MegaRAID, Adaptec AACRAID, etc.), which allows configuration (add/delete/modify) of volumes / logical drives from a running ESOS system.
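
For reference, 2perf and 2debug are make targets in the SCST source tree that switch the checkout between build configurations; roughly (a sketch -- consult the SCST documentation for the exact sequence):

    # From the top of the SCST source tree:
    make 2perf     # switch to the performance configuration (Production mode)
    make && make install

    make 2debug    # switch to the full debug configuration (Debug mode)
    make && make install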

Storage Provisioning Features
o Per-initiator device visibility management (LUN masking) -- see the configuration sketch after this list.
o Thin provisioning.
o Implicit asymmetric logical unit access (ALUA).
o SCSI persistent reservations.
o Data deduplication (via lessfs).
o Several different I/O modes for virtual SCSI devices, including the ability to take advantage of the Linux page cache (vdisk_fileio), or to share an ISO image file as a virtual CDROM device (vcdrom) on your SAN.
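
Because ESOS drives SCST underneath, provisioning like this is typically expressed in SCST's /etc/scst.conf (scstadmin syntax). A minimal sketch combining a vdisk_fileio device with LUN masking -- the device, target, and initiator names are made up for illustration:

    # /etc/scst.conf -- illustrative names throughout
    HANDLER vdisk_fileio {
        DEVICE disk01 {
            # Backing file on a local filesystem; fileio mode goes
            # through the Linux page cache
            filename /mnt/vdi_vol1/disk01.img
        }
    }

    TARGET_DRIVER qla2x00t {
        TARGET 50:01:43:80:06:7b:74:f8 {
            enabled 1
            # LUN masking: only the initiators listed in this security
            # group can see LUN 0
            GROUP esx_hosts {
                LUN 0 disk01
                INITIATOR 21:00:00:24:ff:31:2a:11
            }
        }
    }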

Supported Hardware
ESOS should be compatible with any popular enterprise RAID controller and Tier-1 server hardware. It currently supports the following front-end target types:
o Fibre Channel -- QLogic FC HBAs only; those are the only FC adapters that SCST has target drivers for.
o iSCSI
o SCSI RDMA Protocol (SRP -> InfiniBand)
o Coming soon: Fibre Channel over Ethernet (FCoE) -- already implemented in SCST, but still relatively new.

Cons
Still being developed; the core of ESOS (SCST) is mature and stable, but the whole package of ESOS (Linux kernel, SCST, user-land software, firmware, CLI tools, etc.) is new.
o That being said, we (MCC) have been using Gentoo + SCST for our production VDI datastores for 1.5 years, and have been using ESOS on development VDI datastores for the last several months. We intend to start using ESOS in production this summer.
No high availability / replication.
o We are currently only using SCST/ESOS with our VDI environment; we use floating pools spread across several of these "storage servers", so losing an entire storage server only decreases the number of available machines.

Planned Features / Additions
A text-based user interface (TUI) that will provide an easy-to-use interface with convenient storage provisioning functions.
o Nearly all core SCST functionality will be implemented in the TUI, along with local LSI MegaRAID volume configuration and typical system setup tasks (network configuration, mail, accounts, etc.).
o See the following slides for preview screen shots.
High availability / replication:
o Likely done via DRBD (block device mirroring) + ALUA (to control the I/O path between primary/secondary nodes) + the Linux-HA project; a rough sketch of the DRBD piece follows below.
o The idea still needs to be tested and validated.
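
To illustrate the DRBD piece of that idea, a two-node mirror resource might look like the following (node names, devices, and addresses are hypothetical):

    # /etc/drbd.d/r0.res -- hypothetical two-node mirror
    resource r0 {
        protocol C;                  # synchronous replication
        on esos-node1 {
            device    /dev/drbd0;    # replicated device exported via SCST
            disk      /dev/sdb;      # local RAID volume backing the mirror
            address   10.0.0.1:7789;
            meta-disk internal;
        }
        on esos-node2 {
            device    /dev/drbd0;
            disk      /dev/sdb;
            address   10.0.0.2:7789;
            meta-disk internal;
        }
    }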

Planned Features / Additions (cont'd)
VMware vStorage APIs for Array Integration (VAAI):
o These SCSI primitives are being implemented in SCST; WRITE SAME is already done, and the rest are coming.
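
Once the primitives land, one way to check them from the ESXi side is the standard esxcli VAAI status query (the device identifier below is a placeholder):

    # On an ESXi 5.x host: list which VAAI primitives a device reports
    esxcli storage core device vaai status get -d naa.600508b1001c3f1a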

TUI Preview Screen Shots (1) - (3): screenshot slides; images not reproduced in this transcription.

Our Current Configuration
Each storage server (disk array) base configuration:
o SuperMicro 2U 24x 2.5" bay chassis w/ 900W redundant power supplies
o (2) Intel Xeon E5645 processors (12M cache, 2.40 GHz, 5.86 GT/s Intel QPI) on a 5520-chipset motherboard
o 12 GB 1333 MHz DDR3 ECC memory (6 GB usable with mirroring mode)
o (1) LSI MegaRAID 9280-24i4e SATA/SAS 6Gb/s PCIe 2.0 w/ 512MB cache (with FastPath)
o (2) QLogic 8Gb single-port Fibre Channel HBAs
We currently own (3) of these chassis (one development, two production), and have requisitioned an additional two that we expect to arrive soon. ~$6,000 per 24-slot server/chassis as described above.

Our Current Configuration (cont'd)
(1) global hot spare in each chassis; (4) RAID 5 volumes with five disks each; (1) RAID 5 volume with three disks.
Performance (direct I/O, `fio` tool, sustained I/O per volume; a representative invocation follows below):
o 4K random read IOPS: ~84K
o 4K random write IOPS: ~15K
o 4K mixed random R/W (75/25): ~27K read / ~9K write
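
Numbers like these come from running fio directly against the block device with the page cache bypassed (direct I/O). A representative 4K random read invocation -- the device name, queue depth, and run time are assumptions, not our exact job file:

    # Read-only 4K random read test against one RAID volume
    fio --name=randread --filename=/dev/sdb --direct=1 --rw=randread \
        --bs=4k --iodepth=32 --numjobs=4 --runtime=60 --time_based \
        --group_reporting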

Our Current Configuration (cont'd)
Disks used in the (2) production arrays:
o Crucial RealSSD C300 256GB (CTFDDAC256MAG)
o Purchase price ~$450 each.
o $450 * 24 = $10,800 per array/server for ~6 TB of raw SSD storage.
Disks chosen for the (2) additional new arrays:
o Samsung 830 Series MZ-7PC256B/WW 2.5" 256GB SATA III MLC
o Purchase price ~$275 each.
o $275 * 24 = $6,600 per array/server for ~6 TB of raw SSD storage.
Consumer SSD prices have dropped substantially between last year and now.

Questions / Comments
http://code.google.com/p/enterprise-storage-os/