Microsoft SQL Server 2012 for Private Cloud (Part 1) Darmadi Komo - Senior Technical Product Manager DARMADI KOMO: Hello, everyone. This is Darmadi Komo, senior technical product manager from SQL Server marketing. Today, I'm very excited to talk to you about SQL Server 2012 for private cloud. This talk has two parts. Today, we're going to cover part one, so let me go through the agenda. First, we're going to cover why private cloud is important and why you should consider looking at it. We're also going to go through a solution walkthrough of SQL Server 2012 for private cloud.
We're also going to look at case studies, where I'll share some customers that have already started implementing this, including Microsoft. There will be a part two of this talk that I'll be covering in the next session, right? So, let's start with why private cloud. Cloud computing is a paradigm shift happening in our marketplace today. Different companies look at private cloud differently, and there are many definitions out there that can be confusing. So, I'm taking this from Gartner, a reputable source, dated March 2011: Gartner published this virtualization-to-cloud-computing roadmap, which details the different stages of computing environments, starting from server virtualization and ending at public cloud.
Now, when we talk about private cloud, it is stage number three. Usually companies will start with stage one, server virtualization, which allows them to consolidate existing resources, optimize hardware usage, and save capital expenses to get the most ROI out of their server hardware. Then they'll move to stage two, distributed virtualization, where multiple hosts do the virtualization and support one another in terms of downtime, speed, flexibility, and automation. Once they have stages one and two in place, customers move to what Gartner calls private cloud, which is what we're talking about here. Private cloud adds things like self-service agility, standardization, IT as a business, and usage metering. So, not only are the resources virtualized, flexible, and protected against downtime; private cloud adds a self-service capability where users can deploy computing resources themselves, and IT can monitor what is being deployed, meter the usage, and perhaps provide chargeback to the users. So, this is the private cloud we're talking about. And private cloud is happening. These are just some of the results from the public links shown at the bottom: more than 50 percent of customers are planning or doing private cloud today, federal agencies are doing it as well as commercial companies, and the adoption of virtualization and cloud computing will increase dramatically in the next 12 months. And once they implement private cloud, the number one thing customers look for is self-service. So, it is happening, and you are encouraged to look at this solution.
The next portion of this presentation talks in detail about SQL Server for private cloud. It is a solution based on Microsoft Hyper-V, Windows Server, and the System Center products. All of these products are already in the marketplace, and you can purchase them and implement the solution yourself; we also have reference architectures, and we have appliances that go with this. We'll talk more about those in session number two. But first, let's go through the solution, and we'll cover this slide at some length. All right, what we have here is the SQL Server for private cloud solution, which we break into four different buckets, or four pillars if you will. The first one is resource pooling. The second one is elasticity. The third one is self-service. The last one is control and customize. If you look at the subtitles, they map directly onto Gartner's definition of private cloud.
The very first pillar, resource pooling, is all about consolidating existing databases, whether they're running SQL Server or other databases. The benefits of resource pooling are clear. It obviously reduces capital expenses, because you can squeeze more computing power from your existing hardware. It also reduces operating expenses by letting you manage less hardware. And it is a good way to realize green IT, reducing the space and power needed to keep all this hardware running, right? So, that's resource pooling: consolidating databases. The second pillar, elasticity, allows you to scale these resources more efficiently by having multiple pieces of hardware working together, creating more agility as well as a more dynamic infrastructure where you can scale up or down according to users' needs. The third pillar, self-service, allows faster time to market, where end users can deploy computing resources as they need them, with little or no IT intervention. It reduces administrative overhead, because business units can now request computing resources directly from IT; once they are preapproved, they can go ahead and deploy those resources themselves. The last pillar, control and customize, allows the IT department to standardize the deployment of all these computing resources and set standard policies for them -- for example, that all computing resources are deployed from standard templates, things like that. Not only that, the IT department can also monitor the usage of these resources over time and see where they want to beef up the infrastructure here or reduce it over there. And finally, the IT department can provide chargeback based on the usage information. I have a link here on the right side, under the call to action, to a Microsoft website where you can learn more about the SQL Server for private cloud solution. So, now let's go through each of these pillars in detail.
The first pillar, resource pooling, has a number of steps that we think are essential. The first one is discovery. Many companies and organizations out there are running SQL Server or other databases, creating this thing called database sprawl. So, the first step is to identify those, and the tool to do that is the Microsoft Assessment and Planning Toolkit, or MAP toolkit, which scans your existing network infrastructure and finds those databases. You'd be surprised how many databases are running that you don't even know about. Once you identify them, the MAP toolkit produces very professional-looking Excel and Word documents that tell you how many computing resources are being used, which hardware they are running on, and what the hardware utilization is. Once you have that information, it is quite easy to group it into different buckets, which I'll come back to later in the case studies, and consolidate them, right? We have many tools to do that. You can upgrade to the latest SQL Server using Upgrade Advisor, or you can leave databases as-is running SQL Server 2005 or 2008; or, if you have other databases such as Sybase, MySQL, or Oracle, you can use SQL Server Migration Assistant, a free tool from Microsoft that migrates those databases to SQL Server. It migrates schema, data, and objects automatically. Once you identify those SQL Server boxes or other database boxes, System Center has a tool called Virtual Machine Manager that allows you to convert a physical database server into a virtual one. We call that P2V, or physical-to-virtual, migration. It is an automated tool: System Center Virtual Machine Manager connects to your existing hardware that's running SQL Server, for instance, gathers all the information, copies all the data, and then creates a VM, or virtual machine, out of it, without making changes to your existing physical SQL Servers. So, your existing SQL Server will still be running; what this tool does is create a copy of that infrastructure in virtual machine format. Once you have it in virtual machine format, you can import it into System Center Virtual Machine Manager running on Hyper-V and manage all those virtual machines together. So, to recap, this first pillar, resource pooling, contains a few steps: discover and consolidate existing databases, migrate databases from other providers such as Sybase, MySQL, or Oracle to SQL Server, move them from physical to virtual, and manage all of them through a virtualization environment under Hyper-V. All right, let's move on to the second pillar, which is elasticity.
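To make that P2V step concrete, here is a minimal sketch of how the conversion can be scripted from the Virtual Machine Manager command shell. It assumes the VMM 2012 cmdlet set, and the server names (SQLBOX01 as the physical source, HVHOST01 as the Hyper-V target) are hypothetical:

    # Credentials with admin rights on the physical source machine
    $cred = Get-Credential

    # Pick the Hyper-V host that will receive the converted VM
    $vmHost = Get-SCVMHost -ComputerName "HVHOST01"

    # Convert the physical SQL Server box into a VM; the source keeps running
    New-SCP2V -SourceComputerName "SQLBOX01" -Credential $cred `
        -VMHost $vmHost -Path "D:\VMs" -Name "SQLBOX01-VM" `
        -MemoryMB 4096 -VolumeDeviceID "C" -RunAsynchronously

This is a sketch of the flow, not a production runbook; check the cmdlet reference for your VMM version before relying on it.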
Once you have the databases consolidated and running in a virtual environment, you start to wonder whether these resources will be highly available. This is the point where you can set up clustering, right, either local or remote, using the failover clustering support in Windows Server and SQL Server 2012, which increases the uptime of this cluster of VMs against downtime, whether it recurs on a planned or an unplanned basis. So, failover clustering helps protect your infrastructure in case of unplanned downtime. There's also another technology, Live Migration in Hyper-V, that allows you to move VMs from one host to another during planned downtime, such as patching or upgrading the OS, and so on and so forth. With Hyper-V you can also increase virtual machine density in terms of CPU and memory, with features such as Dynamic Memory, which lets you specify the startup memory and the maximum memory for a VM, and the VM will use that memory according to its needs. This also allows you to install all the virtual machines using Windows Server Core. Windows Server Core is another mode of the Windows Server installation where only a very basic configuration of Windows Server is installed: no UI, only the command line, so you interact with everything using PowerShell. According to our study, Server Core lets IT administrators bypass over 50 percent of the patches, because there is very little surface to worry about; very few services are installed, so it is very, very secure. Finally, you have a way to load balance virtual machines, where you can have multiple VMs running on different hosts and move them automatically according to the load on a particular host, using System Center Virtual Machine Manager in combination with System Center Operations Manager.
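To illustrate the Dynamic Memory and Live Migration features just described, here is a minimal sketch using the Hyper-V PowerShell module that ships with Windows Server 2012. The VM and host names (SQLVM01, HVHOST02) are hypothetical, and it assumes live migration has already been enabled on both hosts:

    # Dynamic Memory: the VM starts with 2 GB and can grow to 8 GB on demand
    # (run this while the VM is stopped)
    Set-VMMemory -VMName "SQLVM01" -DynamicMemoryEnabled $true `
        -StartupBytes 2GB -MinimumBytes 1GB -MaximumBytes 8GB

    # Live Migration: move the running VM to another host before patching this one
    Move-VM -Name "SQLVM01" -DestinationHost "HVHOST02"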
So, pillar number two is basically about setting up high availability and disaster recovery, allowing you to scale your virtual machines' memory and CPU, and load balancing them. The first two pillars, resource pooling and elasticity, are all about setting up the infrastructure: migrating your existing physical infrastructure into a virtual machine infrastructure and running it well. Now that you have the infrastructure set up, it is a good time to talk about self-service, the third pillar. It used to be in some organizations -- maybe some are still doing the same thing -- that a business unit would need to make a request to the IT department whenever it wanted new computing resources, and that could take days, weeks, or even months depending on the organization. So, what we're going to do here is use the self-service capability in System Center Virtual Machine Manager, meaning that once it's preapproved, a business unit can deploy these computing resources automatically, based on its own demand, with little or no IT intervention. To do that, System Center Virtual Machine Manager provides a mechanism called templating. Templating is a way to create base virtual machines that include the OS, SQL Server, maybe some anti-virus, and all kinds of other preinstalled settings, saved as a template. Once it's in template format, business users can deploy new virtual machines from that template over and over again, right? To support this, the System Center Virtual Machine Manager self-service portal, part of Virtual Machine Manager, provides a preapproved workflow between business units and IT, letting IT create a sandboxed environment for these business users, so that IT can support multiple business units with different sandboxes.
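As a sketch of what a template-based deployment looks like under the covers, here is the VMM 2012 PowerShell pattern as I understand it; the names used (SQLServerBase, FinanceDB01, HVHOST01) are hypothetical, and a self-service user would trigger the same flow from the portal rather than from a script:

    # Find the IT-approved base template (OS, SQL Server, anti-virus preinstalled)
    $template = Get-SCVMTemplate | Where-Object { $_.Name -eq "SQLServerBase" }

    # Build a VM configuration from the template and place it on a host
    $vmConfig = New-SCVMConfiguration -VMTemplate $template -Name "FinanceDB01"
    $vmHost = Get-SCVMHost -ComputerName "HVHOST01"
    Set-SCVMConfiguration -VMConfiguration $vmConfig -VMHost $vmHost

    # Create the new virtual machine from that configuration
    New-SCVirtualMachine -Name "FinanceDB01" -VMConfiguration $vmConfig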
Once IT links that sandbox with the right templates, business users can deploy virtual machines from those templates anytime they want to, and then remove those virtual machines anytime they want to. It is true self-service, letting business users act on business need without IT intervention. Now that customers have self-service capabilities, IT needs to make sure it can monitor these activities. That's why we talk about the last pillar, control and customize. Control and customize basically allows the IT department to set standard policies for the templates, and not only that, it allows the IT department to assign a cost to each virtual machine template. Once a template is deployed, Virtual Machine Manager records what was deployed, so the IT department can go into the reporting and see the usage over time for each business unit. And with a cost assigned to each template plus the usage data, the IT department can also provide chargeback to each business unit through the reporting mechanism. The business units can in turn pay the IT department internally, realizing IT as a service within the company. Obviously, to manage all of this, the IT department will make use of the Virtual Machine Manager and Operations Manager management packs. So, all these things work together very well, from resource pooling for consolidation, to elasticity for scaling resources efficiently, to self-service for deploying resources on demand, and finally to control and customize, for the IT department to drive standardization and compliance. This is the entire SQL Server for private cloud solution.
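To show the chargeback arithmetic, here is a small, purely hypothetical PowerShell sketch. It assumes IT has exported per-business-unit VM-hours from the usage reports into a CSV and has assigned an hourly rate to each template; the file, columns, and rates are illustrative, not product features:

    # Hypothetical hourly rates that IT assigned to each template
    $rates = @{ "SQLServerBase" = 0.50; "SQLServerLarge" = 1.25 }

    # usage.csv columns: BusinessUnit, Template, VMHours (exported from reporting)
    Import-Csv "usage.csv" | Group-Object BusinessUnit | ForEach-Object {
        $sum = 0
        foreach ($row in $_.Group) {
            $sum += [double]$row.VMHours * $rates[$row.Template]
        }
        "{0}: {1:C2} chargeback this period" -f $_.Name, $sum
    }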
In the next section I'm going to talk about a couple of case studies. The first one is Target Corporation. Those of you who live in the U.S. will probably recognize Target as a major retail brand, similar to Wal-Mart. They have a lot of stores, over 1,700. However, the unique thing about Target is that there is not a single IT person in any store. Everything is managed centrally, even though there are some servers in each store running mission-critical applications. So, Target decided to use Hyper-V and SQL Server for private cloud and deploy this solution to all 1,700 stores. They were able to reduce the physical servers from seven to two per store. They are going to deploy about 3,600 hosts and 15,000 virtual machines by the end of 2012, all of them running mission-critical SQL Server applications such as checkout and inventory in each of the stores. This is really SQL Server for private cloud at scale, right, running Hyper-V and System Center in Target stores.
The next case study I want to talk to you about is Microsoft IT. As a company, Microsoft also runs a lot of applications, in this case over 2,700 applications, with about 3,000 SQL Server instances and over 50,000 databases. However, the notable thing is that a lot of the hosts run at very low CPU utilization, and the reason is simple. When most organizations do capacity planning, they size for peak usage. But peak usage is not constant across the entire 24 hours of a day; the peak might be one or two hours out of the whole day, and so on, and the rest of the time those servers are not fully utilized. It's the same at Microsoft. In fact, once the Microsoft IT folks scanned their infrastructure, they found many servers running at very low capacity, while some were running at very high capacity. So, they grouped them, right, based on CPU utilization: very low, which means 1 to 5 percent; low, 5 to 20 percent; moderate, which is 20 to 50 percent; and high, 50 percent or greater. Once they grouped the servers into these four categories, they decided not to virtualize the computing resources in the high category, meaning 50 percent or greater. They're going to virtualize only those in the first three categories, and that's a lot of them. So, here's a busy chart on the savings; let me explain what it means. The red bars represent the actual physical hosts, the blue bars represent the virtual machine hosts, and the green bars represent the virtual machines themselves. Since we started this project back in 2007, we have seen a dramatic reduction in physical hosts over time. Today, in 2012, we have seen almost a 50 percent reduction in physical hosts, while the virtual machine hosts have increased by only a small percentage in comparison. So, we're saving a lot on hardware. At the same time, we see tremendous growth in virtual machines; you see that in the green bars. And the savings are just amazing. The projection through 2017 is that we will have a very small number of physical hosts, a medium-sized set of virtual machine hosts, and a lot of virtual machines running on those hosts. So, you can see the savings are tremendous just from a hardware perspective, right? And Microsoft IT, like other organizations, also has a budget and, for accounting purposes, pays for its usage of Microsoft software. So, from both a software and a hardware perspective, customers can save tremendously over this period of time by doing virtualization, as well as private cloud.
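To make the grouping step concrete, here is a small hypothetical sketch that buckets servers the same way Microsoft IT did, assuming average CPU utilization has been exported (for example, from a MAP toolkit assessment) into a CSV with ServerName and AvgCpuPercent columns; the file and column names are illustrative:

    # Classify each server into the four utilization buckets
    Import-Csv "cpu_utilization.csv" | ForEach-Object {
        $cpu = [double]$_.AvgCpuPercent
        $bucket = if ($cpu -lt 5)      { "very low (1-5%): virtualize" }
                  elseif ($cpu -lt 20) { "low (5-20%): virtualize" }
                  elseif ($cpu -lt 50) { "moderate (20-50%): virtualize" }
                  else                 { "high (50%+): keep physical" }
        "{0}: {1}% average CPU -> {2}" -f $_.ServerName, $cpu, $bucket
    }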
Another chart I want to show you has the numbers Microsoft IT calculated for both operating and capital expenses. The benefit is clear: operating and capital expenses are reduced dramatically; more services are available thanks to the scalability and the clustering capabilities we talked about earlier; and you improve environmental sustainability, or green IT, because a smaller physical footprint -- fewer physical servers -- is required to run all of this. I also have a slide here on the architecture. If you look at the left side, there are legacy servers built with a manual build process. Once we migrate them to System Center Virtual Machine Manager using templates, we group those servers into clustered or non-clustered SQL Servers. And with the availability technologies in SQL Server, we can make sure all these different servers are highly available, with downtime protection, both planned and unplanned, in case of disaster.
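For the clustering piece mentioned here, a minimal sketch of standing up the underlying Windows failover cluster with the FailoverClusters PowerShell module might look like the following; the node names and IP address are hypothetical, and installing SQL Server as a clustered instance on top of it is a separate setup step:

    Import-Module FailoverClusters

    # Validate the prospective nodes before creating the cluster
    Test-Cluster -Node "SQLNODE1","SQLNODE2"

    # Create a two-node failover cluster with a static cluster IP address
    New-Cluster -Name "SQLCLUSTER01" -Node "SQLNODE1","SQLNODE2" -StaticAddress "10.0.0.50"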
Obviously, a lot of these applications run the latest SQL Server, but some are still running SQL Server 2005 or SQL Server 2008 because of ISV application requirements. That brings us to the conclusion of the presentation; I hope it has been useful. For more information on this particular solution, SQL Server for private cloud, we put together a website, which I have listed here, where you can find out more. In the next session I'll be talking about part two and actually demoing this solution to you. Goodbye and thank you for watching. END