Hybrid Clouds: are we there yet? by Johan De Gelas on 10/18/2010 2:05:00 PM Posted in IT Computing. Page 1




Just 9 months ago, almost 1,500 out of a total of 3,146 readers (46%) told us in a poll that they feel that "Cloud Computing is hot air, it will blow over". And that is understandable given the confusing buzz surrounding the "Cloud". Don't worry, this article is not about philosophizing about the "Cloud Computing" hype. That is definitely not the AnandTech way. We will focus on the infrastructure part, the part that we know best. We visited VMworld Europe 2010 in Copenhagen just a few days ago. Talking to a lot of different vendors in a few days made the light shine brighter through the clouds. Think Cloud Computing is crazy rambling? Meet the hybrid cloud with VM teleportation, extremely stretched virtual networks and unified communications... But we should not forget that most AnandTech readers are still hardware enthusiasts. Why should you care about "fluffy" cloud computing? Let us give you a crash course before we dive into the underlying technology. Page 1 Cloud computing for startups: no infrastructure people necessary The simplest form of cloud computing: renting virtual machines to avoid hiring infrastructure people. It is very tempting for developer startups that simply want to develop and offer a software service. Buying hardware and getting support from a third-party integrator is a costly and time-consuming endeavour: you do not know how successful your application will be, so you might over- or undersize the hardware. If you have no infrastructure knowledge in house and you have to call in the help of a third-party integrator for every little problem or upgrade, consulting costs will explode quickly. After a few years, your integrator is the only one with intimate knowledge of your server, networking and OS configuration. So even if the integrator is too expensive or gives you lousy service, you cannot switch quickly to another one.
Developers just want an OS to run their software on, and that is what the public clouds deliver. In Amazon EC2 you simply choose an instance. An instance is a combination of virtual hardware and an OS template. So in a few minutes you have a fully installed Windows or Linux VM in front of you and you can start uploading your software. Getting your application available on the internet could not be simpler. The number of instances on EC2 grows at an incredible pace, so they must be doing something right. According to RightScale, 10,000 instances were launched per day at the end of 2007. At the end of 2009, this number had multiplied by five! In theory, using a public cloud should be very cheap. Despite the fact that the public cloud vendor has to make a healthy profit, it can leverage economies of scale. Examples are bulk buying servers and being able to invest in expensive technologies (cooling, highly efficient UPSes) that only make sense in large deployments. The reality, however, is that renting instances at Amazon EC2 is far from cheap. A quick calculation showed us that, for example, reserving 10 (5 large + 5 small) Amazon EC2 Windows-based instances costs about $19,000 per year (tip: Linux instances cost a lot less!). The one-time fee ($8,750) alone costs more than a fast dual-socket server and the yearly electricity bill. As you can easily run 10 VMs on one server, it is clear that adopting an Amazon-based infrastructure is far from a no-brainer if you have the expertise in house.
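To make that comparison concrete, here is a minimal sketch of the arithmetic. Only the $19,000 yearly total and the $8,750 one-time fee come from our calculation above; the on-premises server price, power draw and electricity rate are illustrative assumptions, not measured figures:

```python
# Back-of-the-envelope: 10 reserved EC2 instances vs. one dual-socket
# server running the same 10 VMs. EC2 totals are from the article's
# calculation; the on-premises numbers are assumptions for illustration.

EC2_ONE_TIME_FEE = 8750      # one-time reservation fee, 10 instances
EC2_YEARLY_TOTAL = 19000     # total first-year cost, 10 instances
ec2_recurring = EC2_YEARLY_TOTAL - EC2_ONE_TIME_FEE  # usage charges, year 1

SERVER_PRICE = 4000          # assumed fast dual-socket server (CAPEX)
POWER_DRAW_W = 350           # assumed average draw under load
KWH_PRICE = 0.10             # assumed electricity price, $/kWh
yearly_power = POWER_DRAW_W / 1000 * 24 * 365 * KWH_PRICE

print(f"EC2, year 1:          ${EC2_YEARLY_TOTAL:,}")
print(f"EC2, following years: ${ec2_recurring:,}")
print(f"Own server, year 1:   ${SERVER_PRICE + yearly_power:,.0f}")
print(f"Own server, later:    ${yearly_power:,.0f}")
```

Even with generous assumptions for support contracts, the gap shows why the savings have to come from personnel, not hardware.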

The cost savings must come from reducing the number of infrastructure professionals or third-party consultants that you hire. Page 2 Public versus private cloud Just a few years ago, getting an application or IT service running took way too long. Once the person in charge of the project got permission to invest in a new server, the IT infrastructure guys ordered a server and it probably took a few weeks before the server arrived. Then, if all went well, the IT infrastructure guys installed the OS and gave the person in charge of the software deployment a static IP to remotely install all the necessary software. In the virtualization age, the person in charge of the application project calls up the infrastructure people and a bit later a virtual machine is provisioned and ready to install. So if you have already invested a lot in a virtualized infrastructure and virtualization experts, it is only logical that you want the flexibility of the public cloud in house. Dynamic Resource Scheduling, as VMware calls it, is the first step. A cluster scheduler that shuts down unnecessary servers, boots them up when necessary and places VMs on the best server is a step forward to a "private cloud". According to VMware, about 70 to 80% of ESX-based datacenters are using DRS. Indeed, virtualization managers such as VMware vCenter, Citrix XenCenter, ConVirt and Microsoft System Center VMM have made virtualized infrastructure a lot more flexible. Some people feel that "private clouds" are an oxymoron: unless you are the size of Amazon or Google, they can never be as elastic as the "real" clouds, cannot leverage the same economies of scale and do not eliminate CAPEX. But that is theoretical nitpicking: public clouds can indeed scale automatically (see Amazon's Auto Scaling) with much greater elasticity, but you will probably only use that with the brakes on. Scaling automatically to meet the traffic requirements of a DoS attack could be pretty expensive.
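That "brakes on" idea amounts to an autoscaling policy with a hard instance cap, so a traffic spike, legitimate or not, can never run the bill past a known ceiling. A minimal sketch (the function name, capacity model and thresholds are made up for illustration, not Amazon's API):

```python
import math

def desired_instances(capacity_per_instance: float, current_load: float,
                      min_instances: int = 2, max_instances: int = 10) -> int:
    """How many instances to run for the current load.

    Scales out as load grows, but never past max_instances -- the
    'brakes' that bound the worst-case bill during a DoS attack.
    """
    needed = math.ceil(current_load / capacity_per_instance)
    return max(min_instances, min(needed, max_instances))

# Normal traffic: the fleet tracks demand.
print(desired_instances(100, 350))    # 4 instances
# DoS-sized spike: the cap limits cost instead of tracking the load.
print(desired_instances(100, 50000))  # 10 instances, not 500
```

The trade-off is explicit: past the cap you accept degraded service rather than an unbounded invoice.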
Terremark's Enterprise Cloud allows virtual machines to "burst" for a while to respond to peak traffic, but the burst is limited, to for example 1 GHz of extra CPU power or 1 GB of memory. It won't let you triple your VM resources in a few minutes, avoiding a sky-high bill afterwards. And the CAPEX elimination? Just go back one page and look at the Amazon EC2 pricing. If you want to run your server 24/7, the "pay only what you use" pricing will cost you way too much. You will prefer to reserve your instances/virtual machines, and pay a "set up" or one-time fee: a capital investment to lower the costs of renting a VM. The Cloud Killer feature The real reason why cloud computing is attractive is not elasticity or economies of scale. If it works out well, those are bonuses, but not the real "killer feature". The killer is instantaneous self-service IT consumption, or in real human language: the fact that you can simply log in and get what you need in a few minutes. This is the feature that most virtualization managers lacked until recently. OpenQRM, an open-source infrastructure management solution, is a good example of how the line between a public and a private cloud is getting more blurred. This datacenter manager does not need a hypervisor pre-installed anymore. It manages physical machines and installs several different hypervisors (Xen, ESX and KVM) on bare metal in the same datacenter. The Cloud plugin and Visual Cloud Designer make it possible to create virtual machines on the fly and attach a "pay as you use" accounting system to them. OpenQRM is more than a virtualization manager: it allows you to build real private clouds. So the real difference between a flexible, intelligent cluster and a private cloud is a simple interface that allows the "IT consumer" to get the resources he/she needs in minutes.
And that is exactly what VMware's new vCloud Director does: it adds a self-service portal that allows users to get the resources they need quickly, all within the boundaries set by the IT policies. So private clouds do have their place. A private cloud is just a public cloud which happens to be operated by an internal infrastructure staff rather than an external one. Or a public cloud is a private cloud that is outsourced. Both are accessible over the internet and on the corporate LAN, and some private clouds might even be larger than some public ones. Maybe one day we will laugh at the "Cloud Computing" name, but Infrastructure as a quick Service (IaaS) is here to stay. We want it all and we want it now. Page 3 The best of two worlds

Most existing companies have already invested quite a bit of money and time in deploying their own infrastructure and building up expertise. Also, thinking smartly out of the box in infrastructure land pays off in most cases. And lastly, few people will place their sensitive IP-related data somewhere in an external datacenter. It will be no surprise that the "hybrid cloud" is the ideal model for most companies out there. Just like in the business world, you outsource some of your processes (HR, facility management, etc.) but things related to your core business stay inside. If you are an engineering company, your engineering data should stay inside the walls of your own datacenter. vSphere 4.1 and vCloud Director, one of the possible building blocks of a hybrid cloud. The hybrid cloud model means you should be able to move VMs from your own datacenter to a public cloud and back. The reality is that it is not that simple to upload a VM to a public cloud service, and that it is pretty hard to import the work that you have done in a public cloud back into your own datacenter. If you want to get an idea of what it really involves, look here and here. Many public cloud vendors, formerly hosting providers, are now adding upload and download capabilities to their self-service portals. Being able to quickly download and upload virtual machines between your own infrastructure and that of a hosting provider is the first step towards the "hybrid cloud". Let it be clear: the fully automated hybrid cloud, where you manage all your VMs through one interface and move VMs easily and quickly from your private to a public cloud, is not here yet. So what do we need besides management software such as vCloud Director? You have probably guessed it already: a storage and networking bridge between datacenters. Page 4 Building a network bridge between two datacenters is pretty easy and mature. In fact many of you do this on a daily basis: you make a VPN connection between your own network and a remote network.
Site-to-site VPNs have been available for quite some time now, and that is exactly what Amazon offers with their VPC product. Howie Xu, R&D Director at VMware, presented VMware's vision concerning the future of networking. The VMware virtual switch was the first step. The vSwitch offered port groups, which allowed you to separate storage, console and VM networking from each other. A vSwitch was always limited to one server, one host. In vSphere 4.0, the distributed vSwitch was born; 4.1 added VLAN and traffic shaping features. Distributed switches can span several servers, making it easier to configure networking for a complete cluster instead of configuring vSwitches on different hosts.
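The port-group idea is easy to model: a distributed switch is essentially one table of port groups (name to VLAN) shared by every host in the cluster, instead of one table per host. A toy illustration, with class and field names made up for the sketch (this is not VMware's API):

```python
class DistributedVSwitch:
    """Toy model: one port-group table shared by all hosts in a cluster."""

    def __init__(self):
        self.port_groups = {}   # port group name -> VLAN id
        self.hosts = set()

    def attach_host(self, hostname: str):
        self.hosts.add(hostname)

    def add_port_group(self, name: str, vlan: int):
        # Configured once; every attached host sees it immediately.
        self.port_groups[name] = vlan

    def vlan_for(self, hostname: str, port_group: str) -> int:
        assert hostname in self.hosts, "host not attached to this switch"
        return self.port_groups[port_group]

dvs = DistributedVSwitch()
for host in ("esx01", "esx02", "esx03"):
    dvs.attach_host(host)
# Separate storage, management and VM traffic in one configuration step.
dvs.add_port_group("storage", vlan=20)
dvs.add_port_group("management", vlan=10)
dvs.add_port_group("vm-traffic", vlan=30)

print(dvs.vlan_for("esx03", "storage"))  # 20 -- no per-host configuration needed
```

The win is purely operational: one definition instead of N per-host copies that can drift apart.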

The next step would be to create a Distributed Virtual Network, a virtual networking chassis. This vChassis would be able to create a distributed network, from layer 2 to 7, across two or more datacenters. Unfortunately, the presentation did not give any insight into how this will be accomplished. Page 5 Hybrid "Storage" Cloud A good network connection between two datacenters is the first step. But things really get interesting when you can move your data quickly and easily to another place. Moving a VM (vMotion, XenMotion, live migration) to a new location is nice, but you are just moving the processing and memory part. Moving a VM to a location that is hundreds of kilometers/miles away behind a firewall and letting it access storage over a long distance is a bad idea. The traditional way to solve this would be a fail-over mirroring setup. One node is the "original", active node. This is the one that is written to: the application will send it writes to handle an OLTP transaction, for example. The other node is the passive node and is synced with the "original" one at the block level. It does not handle any transactions. You could perform a "fake" storage migration by shutting down the active node and letting the passive node take over. Nice, but you do not get any performance scalability. In fact, the "original" storage node is slowed down as it has to sync with the passive one all the time. And you cannot really "move around" the workload: you must first invest a lot of time and effort to get the mirror up and running. Another solution is to simply merge two SANs into one. You place one SAN in one datacenter, and the other one in another datacenter. Since high-end optical fibre channel cables are able to bridge about 10 km, you can build a "stretched SAN". That is fine for connecting your own datacenters locally, but it is nowhere near our holy grail of a "hybrid cloud". Storage vMotion and vMotion are relatively affordable solutions to create a hybrid cloud, at first sight.
But on second thought, you'll understand that moving a VM between your private cloud and a public cloud without downtime will turn out to be pretty challenging. From the vSphere admin guide: "VMotion requires a Gigabit Ethernet (GigE) network between all VMotion-enabled hosts." A dedicated gigabit WAN link is not something that most people have access to. And expanding the VLAN across datacenters can be pretty hard too. It will work, but it is not supported by VMware (as far as we know) and will cause a performance hit. We have not measured this yet... but we will. VM Teleportation A little rant: I have learned to stay away from the presentations of quite a few vice presidents. Some of these VPs seem to be so out of touch with technical reality that I could not help wondering if they ever set foot in the tech company they are working for. What is supposed to be a techy presentation turns into an endless repetition of "we need to adapt to the evolving needs of our customers" and "those needs are changing fast", followed by endless slides with smiling suits and skyscrapers. Chad Sakac, the EMC VMware Technology Alliance VP, restored my faith in VPs: a very enthusiastic person who obviously spends a lot - probably too much - of his time with his engineers in the EMC and VMware labs. Technical deep dives are just a natural way for him to express himself. If you don't believe me, just ask anyone attending his sessions at VMworld or EMC World, or watch this or this. Chad talked about EMC's "VM teleportation" device, the EMC VPLEX. At the physical level, the VPLEX is a dual, fully redundant Intel Nehalem (2.4 GHz) based server with 4 redundant I/O modules.

Each "Director" or Nehalem server has 16-64 GB of RAM. Four GB is used for the software of the VPLEX engine; the rest is used for caching. The VPLEX boxes are expensive ($77,000) but the VPLEX engine does bring the "ideal" hybrid cloud closer. Place one VPLEX in each datacenter on top of your SANs (which can be EMC or other). The cool thing about this VPLEX setup is that it is able to move a large VM (and even several large ones), even if that VM belongs to a heavy OLTP database server, over to a remote datacenter very quickly and with acceptable performance impact. Of course EMC does not fully disclose how they have managed to make this work. What follows is a summary of what I managed to jot down. Both datacenters have part of the actual storage volumes and are linked to each other with at least 2 fast FC network links. The underlying SANs probably have some form of network RAID similar to the HP LeftHand devices. The virtual machine and data that is being moved uses "normal" vMotion (no Storage vMotion) towards the other datacenter. The VM can thus start immediately after the end of the vMotion, after copying the right pages of the original host's memory. That takes only a minute or less, and meanwhile the VM keeps responding. The OLTP application on top is not disrupted in any way, just slowed down a bit. You can see below what happened when 5 OLTP databases were moved. A Swingbench benchmark on top of these VMs measured the response time before, during and after the movement of the 5 VMs.
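A quick sanity check on that "a minute or less" claim: vMotion mostly has to copy the VM's RAM (plus pages re-dirtied during the copy) across the inter-site links. The VM sizes, link speed and dirty-page fraction below are illustrative assumptions, not figures from EMC's demo:

```python
def vmotion_copy_seconds(ram_gb: float, link_gbps: float,
                         dirty_fraction: float = 0.2) -> float:
    """Rough time to pre-copy a VM's RAM plus one pass of re-dirtied pages.

    Ignores protocol overhead and further convergence passes, so this is
    a lower bound rather than a prediction.
    """
    gigabytes = ram_gb * (1 + dirty_fraction)
    return gigabytes * 8 / link_gbps  # GB -> Gb, then divide by Gb/s

# One 8 GB OLTP VM over a single 8 Gbps FC link:
print(f"{vmotion_copy_seconds(8, 8):.1f} s")        # 9.6 s
# Five such VMs sharing two 8 Gbps links:
print(f"{vmotion_copy_seconds(5 * 8, 16):.1f} s")   # 24.0 s
```

Under these assumptions the copy indeed finishes well under a minute; the real limiter is the storage side, which is where VPLEX comes in.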

As the cache of the remote VPLEX device is "cold", it needs to get a lot of data from the other side, so the remote VPLEX sees a lot of cache misses at first. If you run a transactional load in the VM, you will notice higher latency after the vMotion: about 2.5 times higher (7.5 instead of 3 ms). After some time (40 minutes in this case), the second cache is also filled with the most-requested blocks. A directory-based distributed cache coherence mechanism makes sure the VPLEX node can answer the I/O requests, whether they are reads or writes. This is very similar to directory-based CPU caches (here: the VPLEX cache) and how they interact with the RAM (here: the distributed "network RAID" virtual storage volume). The underlying layer must take care of the rest: writes should not happen in both datacenters on the same block. So in the case of a vMotion, the writes are done by the VM that is active: as long as the vMotion is not over, writing happens at the original location, and once the VM has been moved, the changes are written at the new location. EMC calls the VPLEX a geo-disperse, cache-coherent storage federation device. Right now the VPLEX Metro allows synchronous syncing over a distance of 100 km. The requirements are pretty staggering: an IP network with a minimum bandwidth of 622 Mbps; a maximum latency between the two VMware vSphere servers of 5 milliseconds (ms), about 100 km with a fiber network; and source and destination ESX servers on a private network in the same IP subnet and broadcast domain. So this is definitely very cool technology, but only for companies with deep pockets. EMC does not want to stop there. A few months ago EMC announced the synchronous VPLEX Metro (5 ms, or about 100 km). In 2011, the VPLEX family should be able to bridge 1000 km using asynchronous syncing. Later, even larger distances will be bridged. That would lead to some massive migrations, as you can see here.
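The cold-cache effect is easy to reproduce in a toy model: serve reads from a local cache, pay a remote penalty on a miss, and watch the average latency fall as the hot blocks migrate over. The 3 ms hit / 7.5 ms miss latencies mirror the figures above; the working-set size and uniform access pattern are made up for the sketch:

```python
import random

HIT_MS, MISS_MS = 3.0, 7.5   # local cache hit vs. remote fetch (article's figures)
HOT_BLOCKS = 1000            # assumed hot working set of the OLTP load

def avg_latency(requests: int, cache: set) -> float:
    """Average read latency over `requests` accesses to the hot set."""
    total = 0.0
    for _ in range(requests):
        block = random.randrange(HOT_BLOCKS)
        if block in cache:
            total += HIT_MS
        else:
            total += MISS_MS
            cache.add(block)  # after a miss, the block is cached locally
    return total / requests

random.seed(42)
cache: set = set()           # the remote VPLEX starts cold after the vMotion
print(f"first reads: {avg_latency(1000, cache):.1f} ms avg")   # well above 3 ms
print(f"warmed up:   {avg_latency(10000, cache):.1f} ms avg")  # approaches 3 ms
```

The directory layer in the real device adds coherence traffic on top of this, but the warm-up curve, high latency right after the move, converging back to the local hit latency, is the same shape as the Swingbench results.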
More info can also be found on Chad's personal blog. Page 6 Telecom finally enters the 21st century Telecom devices used to belong to a dark world where the vendors still rule and customers have to cope with their whims. Although modern telecom software interfaces with mailboxes, VoIP and web conferencing, most telecom vendors force proprietary, completely closed boxes upon the sysadmins. The vendors felt they could ignore the evolution towards modern, flexible, virtualized clusters. The motto was "we can only support you if you use our software on our hardware with specialized firmware". The "it will not work otherwise" smokescreen needed to hide the fact that vendors made customers pay premiums for outdated hardware. Mitel put the cat among the pigeons by offering their software as a virtual appliance, i.e. an OVF image. The Virtual Mitel Communications Director (VMCD) does not demand its own server like the rest of the haughty telco software, but humbly installs itself on the virtual layer of your datacenter.

It only requires that you reserve 2 to 4 cores and 2 to 4 GB of RAM so it can do its work for up to 1000 active users. Those cores must be 2 GHz EPT (hardware MMU) enabled Nehalems or better. The only supported hypervisor so far is VMware's vSphere 4.0 update 2. Summary It is pretty clear: advanced virtualization technology brings the advanced capabilities of the "public clouds" inside the datacenters of many enterprises. The virtual, intelligent cluster is not going away, especially now that even the most "stubborn" applications such as OLTP databases and telco software are being virtualized. New "hybrid cloud" management software (vCloud Director, OpenQRM) will allow the sysadmins to offer users an easy self-service portal. At the same time the sysadmins get a single pane of glass to control both the public and the private cloud resources. We are not there yet. More advanced "cloud" networking software and storage migration tools will have to make it a lot simpler to seamlessly move virtual machines from your own premises to the large datacenters and back. But we are getting close. At the high end, EMC's VPLEX technology shows that this will become a reality even for massive migrations that involve moving hundreds of VMs over great distances. And if you only want to move a few VMs from time to time, that is already possible with some careful tweaking, although it is not fully supported. Just take a look at the benchmark below, done by the EMC lab. VMware vSphere's built-in Storage vMotion does the job of moving a complete VM + datastore to another datacenter a lot more slowly than the VPLEX setup (option 2), but it works without disruption. It won't take long before the hybrid cloud arrives in small and medium IT businesses too.