Report - Datacenter Consolidation & Virtualization Project



Introduction

AgIT stood up its first virtual environment in mid-2007 (Microsoft) and another in late 2007 (VMware). The VMware virtual services were made available to all departments within the College of Agriculture. The overall objective of the AgIT virtualization service was to convert as many physical servers to the virtual environment as possible. Doing so would ultimately lower cost, speed deployment, provide additional disaster recovery options, ease testing and development, and provide unprecedented mobility, flexibility and reliability. A second-order objective was to move all physical servers that couldn't be virtualized into the data center to leverage its security and more controlled environment, and to better utilize the data center facility located in SMTH 114. Server and network equipment were distributed throughout the Ag campus, creating power and cooling challenges at many of these locations. These conditions were leading to an increasing number of infrastructure investments at distributed sites while the data center remained underutilized. Finally, many of the devices serving departmental applications were running on outdated and/or repurposed equipment not designed for use as application servers, and only a few of these servers/desktops were under warranty. These conditions introduced performance issues and increased the risk of downtime and/or data loss.

Landscape Prior to Project

At the time of the call for participation in the Campus Datacenter Virtualization/Consolidation Project (February 2010), there were 112 servers housed in the AgIT facility located in Smith Hall (SMTH). Of those, 102 were delivering college-level services for AgIT, the Genomics Center and the Center for Environmental Regulatory Information Systems; only 10 departmental servers were housed in the datacenter. Similarly, the College of Agriculture (COA) VM environment consisted of 56 virtual servers. Of those, 40 were serving central functions and the remaining 16 were hosting applications for college departments. One department had also stood up its own VM environment (MS Hyper-V) that housed 5 VMs. It was clear in February 2010 that minimal progress had been made toward moving departmental servers to the data center or into the college virtualized environment. The vast majority of departmental server infrastructure was still a mixture of desktop and server devices, many of which were located in the same office as the people supporting them and/or spread across various buildings.

Campus Datacenter Virtualization/Consolidation Project Costs

The Datacenter Virtualization/Consolidation Project provided 75K from the S3 Program. In addition, it purchased our existing VM environment for use in the Physical Facilities datacenter (50K).

    From the Project (S3)                 75K
    From sale of existing VM equipment    50K
    Total                                125K

With this 125K investment, plus a 25K investment from AgIT, the four R815 computers and 60 terabytes of SAN storage were purchased to create the new virtual environment. An additional 47K was invested by AgIT for upgrades to existing VMware licenses, MS Datacenter OS licenses, USB dongle support and 96 terabytes of NAS storage used to back up the VM servers, for a total investment of ~72K from AgIT. Backup for the environment is provided via Microsoft Data Protection Manager or other backup mechanisms within each VM server.

Total project expenses were:

    R815 servers                                        35,723
    60 terabytes SAN storage (mirrored)                114,199
    96 terabytes NAS storage hardware (Tier 2 backup)   33,191
    USB dongle support                                     255
    MS Datacenter OS license                             7,583
    VMware upgrade to Plus for current licenses          6,173
    New VMware licenses (VMware gift)                        0
    Total                                              197,124

It can be observed that the largest expenses of the project were storage and storage backup. The storage requirements were more a function of consolidation than of virtualization, as many of the departmental servers being virtualized had large amounts of attached storage. The storage costs were exacerbated by the fact that COA already had a standing configuration requirement for its existing VM environment that called for mirrored SAN nodes. Though the additional costs associated with this standard were a reality for COA, they will not likely apply to comparable projects. Similarly, other projects won't necessarily require an MS Datacenter OS license. In COA's case, enough departmental servers were being upgraded from older OSes at the same time they were being virtualized that it was more cost effective to purchase the datacenter license.
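As a sanity check, the following minimal sketch (Python) reconciles the expense line items above against the funding sources; all figures are the report's own, and the small gap reflects the report's rounding of funding amounts to whole thousands.

```python
# Minimal sketch: reconciling the project ledger above. All figures are taken
# from the report; funding amounts are rounded to whole thousands in the original.
expenses = {
    "R815 servers": 35_723,
    "60 TB SAN storage (mirrored)": 114_199,
    "96 TB NAS storage hardware (Tier 2 backup)": 33_191,
    "USB dongle support": 255,
    "MS Datacenter OS license": 7_583,
    "VMware upgrade to Plus for current licenses": 6_173,
    "New VMware licenses (VMware gift)": 0,
}
funding = {
    "S3 Program": 75_000,
    "Sale of existing VM equipment": 50_000,
    "AgIT (new environment)": 25_000,
    "AgIT (licenses, NAS, dongle support)": 47_000,
}

total_expenses = sum(expenses.values())  # 197,124
total_funding = sum(funding.values())    # 197,000 (~matches, given rounding)
print(f"expenses {total_expenses:,} vs funding {total_funding:,}")
```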

COA had also implemented an n+1 sizing strategy for the VM server cluster to allow live, real-time management of the environment and to improve fail-over and disaster recovery. Though an accepted best practice, it is not necessarily a requirement for future projects.

Hardware Migration

The next phase of the virtualization/consolidation plan consisted of physically moving all existing server equipment located in departmental areas to the SMTH datacenter. This process began in late February, and within a week 115 devices were moved from departmental locations to the SMTH datacenter, bringing the total number of devices hosted in the facility to 227. For this metric, a device is defined as any device associated with a server, including UPSes, external hard/tape drives, CD towers, etc.

    Physical devices previously hosted in datacenter                112
    Physical devices moved to datacenter from departmental areas    115
    Total                                                           227

Virtualization / Consolidation

Consolidation and virtualization of servers/peripherals was the final phase of the project. Through consolidation, 80 devices were eliminated completely. Another 50 servers were moved from physical devices to the virtual environment. Of the 227 physical devices located in the SMTH datacenter, 131 were eliminated through virtualization and/or consolidation, growing the total number of virtual servers hosted in the environment from 56 to 126 and reducing the total number of physical devices to 96.

    Physical servers consolidated            15
    Physical servers virtualized             65
    Physical servers eliminated              80

    Virtual servers in old environment       56
    Total new virtual servers created (1)    70
    Total # of virtual servers              126

    Physical servers in old environment     126
    Physical servers removed                 80
    Total # of physical servers remaining    46

    Other devices    Eliminated    Remaining
    Storage              26            11
    Tape                  5             3
    UPS                   7            19
    Network               8            12
    Other                 5             5
    Total                51            50

    Total physical devices eliminated       131
    Total physical devices remaining (2)     96

(1) Five virtual servers represent new need that was not directly related to the pilot.
(2) One server & four storage devices for the new environment bring the actual # of devices remaining to 102.
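The device counts above reconcile directly; a minimal sketch (Python, figures from the tables above):

```python
# Minimal sketch: reconciling the device counts in the tables above.
eliminated = {"servers": 80, "storage": 26, "tape": 5, "UPS": 7, "network": 8, "other": 5}
remaining  = {"servers": 46, "storage": 11, "tape": 3, "UPS": 19, "network": 12, "other": 5}

total_eliminated = sum(eliminated.values())  # 131
total_remaining = sum(remaining.values())    # 96
# 112 devices already in the datacenter + 115 moved in = 227 total
assert total_eliminated + total_remaining == 112 + 115
print(total_eliminated, total_remaining)
```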

Cost Savings

The table below contains the cost-savings elements of the project. With an investment of ~197.1K and annual savings of ~53.2K, the ROI is 3.7 years. An approximate total savings of 68.6K will be realized over the 5-year life of the new virtual environment before reinvestment is required. (A worked version of this arithmetic appears at the end of this section.)

Estimated yearly savings and project costs:

    Energy savings (1)             5,230.80
    Capital savings (2)           35,386.00
    Space savings (3)             19,334.29
    Total yearly savings (4)      53,151.09
    Capital invested             197,124.00
    Yearly maintenance             6,800.00
    ROI                            3.7 yrs

(1) Average estimated annual energy savings through 2013 (4,733 FY10; 5,051 FY11; 5,389 FY12; 5,750 FY13). Sixteen Jiang cluster nodes were moved from Lilly to Smith as part of this project, shifting that energy consumption from Lilly to Smith. Additional storage nodes were also added to MATH.
(2) Annual hardware replacement avoidance, based on budgets self-reported by departments divided by the average reported lifecycle. Servers + peripheral devices (130 devices @ 4.5-yr avg. lifecycle): 35,386. Servers alone (65 physical servers @ 1,000/server @ 4.5-yr avg. lifecycle): 14,258.
(3) Space savings are calculated based on local market values of 14 per square foot for recovered office space, or 670 per month for each rack eliminated. The rack-space value of 670 per month was obtained from PRF and is their cost per unpowered standard datacenter rack.
(4) Total energy, capital & space savings, minus annual maintenance. An additional cost not represented in the savings above is the electricity consumed by locating mirrored SAN nodes and backup NAS devices in the MATH datacenter.

2nd-Order Cost Savings

Several 2nd-order savings were neither calculated nor included in the savings totals. Those realized are listed below. It should be noted that FTE savings were not counted as dollars saved, since that FTE will be reinvested in the provision and support of other services.

- ~40% less FTE required to manage virtual servers vs. physical servers
- energy saved in the various buildings from which departmental server equipment was removed
- space saved in buildings from which departmental equipment was removed
- diminished risk to data via fully redundant, off-site mirrored storage
- additional security afforded by hosting servers in the datacenter
- better service responsiveness via the greater flexibility and shorter set-up time afforded by the virtual environment
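For readers who want to trace the ROI arithmetic, the following minimal sketch reproduces the payback and residual-savings figures from the table above; all inputs are the report's own numbers.

```python
# Minimal sketch of the ROI arithmetic from the Cost Savings table above.
energy = 5_230.80       # estimated yearly energy savings
capital = 35_386.00     # yearly hardware-replacement avoidance
space = 19_334.29       # yearly space savings
maintenance = 6_800.00  # yearly maintenance, subtracted per footnote (4)
invested = 197_124.00   # total capital invested
lifecycle_years = 5.0   # planned life of the new environment

net_yearly = energy + capital + space - maintenance        # 53,151.09
payback_years = invested / net_yearly                      # ~3.7 years
residual = (lifecycle_years - payback_years) * net_yearly  # ~68.6K before reinvestment
print(f"net yearly {net_yearly:,.2f}; payback {payback_years:.1f} yrs; residual {residual:,.0f}")
```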

Conclusion

The SMTH datacenter project has demonstrated an approximate 3.7-year return on the project investment, allowing for approximately 1.3 years of savings (68.6K) before we must reinvest in the lifecycle replacement of the environment. While it can be argued that the resulting savings from the pilot project aren't significant, it is clear that a number of 2nd-order savings return significant value to the institution. It's also apparent that we are provisioning server-hosting services in a much more collaborative, efficient and sustainable manner. It should also be noted that AgIT had preexisting specification standards for its virtual environment that may not be necessary in other applications. These standards relate primarily to redundancy, fail-over and disaster recovery requirements for the college, and they necessitated more investment than may be needed elsewhere. Taking this into account, it is still evident from the AgIT pilot that an institutional strategy for consolidating and/or virtualizing campus datacenters should be pursued.

Following are lessons learned during the pilot.

Storage

One of the challenges of virtualization lies in the fact that many application servers, which are themselves easily virtualized, are attached to a myriad of storage devices that vary greatly in type and size. Some departmental servers are attached to very large and inexpensive storage, such as numerous external USB/FireWire drives or inexpensive network-attached storage (NAS) devices. Virtual environments aren't designed to accommodate connections to many varied storage devices. Because a virtual environment supports fewer storage connections, and those connections are typically higher-performance, the storage devices behind them must be of higher capacity. Higher-performance, higher-capacity devices require more sophisticated technology and management tools, making them more expensive.

FREE vs. FEE

Departmental IT staff have historically been tasked with providing services at the lowest possible cost, using any reasonable means available. This has resulted in the common practice of hardware reuse. Many staff obtain free equipment discarded by other groups on campus; in many cases several of these old devices are obtained to build one application server, with the others stored so that parts can be scavenged to keep the device running well past its intended lifecycle. From a local perspective this practice allows a greater number of services to be provided at very low cost to the department. Though the services are perhaps of lower quality because they run on older, slower equipment, departments will argue that a slow service is better than no service. Of course, all of this comes at a fairly high cost to the institution in terms of risk, FTE, energy consumption and the 2nd-order problems caused by this approach. Many departmental IT staff are still concerned that this new approach will lower institutional costs while eventually increasing costs locally.

Hardware Dongles

Some application software requires a hardware license dongle attached to an interface on the physical computer (serial, parallel or USB port). The VSX environment does not support these devices, so some departmental servers could not be virtualized; ESX does support dongles. We were able to accommodate hardware dongles by incorporating an AnywhereUSB device from Digi.

High Performance Computing

There are a number of applications, such as gene sequencing, that require significant computational resources. While there is little advantage to virtualizing such applications, many of them run on very old equipment and represent the top consumers of electricity and cooling capacity. Given the potential energy savings, the question remains whether these devices should be replaced as part of the pilot.

Incompatible Applications

Some applications submitted for possible virtualization run on Apple equipment or on devices running the OpenBSD operating system. These represent ~25 of the 120 hardware devices being considered for virtualization.

Difficult Applications

Oracle databases, GIS applications, hardware clusters, etc., are among the applications that are difficult to virtualize. These types of applications will require more time to move into the virtual environment than the project timetable allowed. There are ~25 such devices that were not part of the initial project but will be moved in a later phase.

Managing Allocation & Growth

The project offered a new virtual environment at greatly reduced investment levels for the College. The College, in turn, offered departments free use of the environment through its lifecycle (4 years). However, funding restrictions necessitated a design that accommodated only the hardware inventoried in April, plus some modest growth. As nice as the new environment is, there are limits to the resources available, storage being the primary limiting factor. This, coupled with shrinking IT budgets, forced us to use the new resources as prudently as possible. We asked that data being moved from existing hardware to the virtual environment be assessed to determine whether any of it could be accommodated on 2nd-tier (NAS) storage or on the Fortress archive service (a sketch of this triage appears at the end of this report). Doing so allowed us to conserve our directly connected SAN resources and accommodate a larger number of VMs in the environment. We also asked that the amount of storage requested for each virtual server be limited to the amount stated at the time the equipment was moved to the datacenter. A policy for accommodating new growth is being developed.

Pent-up Need

We didn't want the process of moving existing hardware to the new environment to hold up critical new needs for servers. We asked that requests for new VMs be limited to those that were mission critical and/or time sensitive until existing hardware had been moved. An additional 20 servers were created between the time we began the project and this report.
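The storage triage described under "Managing Allocation & Growth" can be expressed as a simple decision rule. The sketch below is hypothetical: the function name, thresholds and inputs are illustrative assumptions, not documented project policy; only the three destinations (direct-connected SAN, 2nd-tier NAS, Fortress archive) come from this report.

```python
# Hypothetical sketch of the storage triage described above. Thresholds and
# parameter names are illustrative assumptions, not documented project policy.
def place_data(size_gb: float, actively_used: bool, archival: bool) -> str:
    """Suggest a storage tier for data migrating into the virtual environment."""
    if archival and not actively_used:
        return "Fortress archive"  # cold data leaves the VM environment entirely
    if not actively_used or size_gb > 500:
        return "2nd-tier NAS"      # bulky or infrequently accessed data
    return "SAN"                   # hot data stays on direct-connected SAN

# Example: a 1.2 TB dataset that is rarely read and not yet archival
print(place_data(1_200, actively_used=False, archival=False))  # -> 2nd-tier NAS
```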