IT Handbook: Private Cloud

The cloud is a complicated place, but it can also be a treasure trove of flexibility and cost savings. Build your company's cloud from start to finish by going beyond simple technology changes.

By Bob Plankers
Building a private cloud isn't a quick process. It starts with understanding expectations and defining the cloud in your environment, then building on the model you've created. Be sure to include the whole organization, its processes and its technologies when you construct the cloud. Here, you'll find 10 steps to follow when conceiving, building and maintaining your private cloud.

Journeying to the cloud is a huge trend in IT. The problem is that the term "cloud computing" means something different to everyone. To start your journey, your organization needs to be realistic about its cloud computing goals.

Many organizations find themselves looking toward private clouds only after they've realized the promises of virtualization, like data center consolidation, power savings and cost savings over physical hardware. Others want to take their virtualization to the next level, with standardization and automation as part of their IT processes. But fewer organizations are ready to work on organizational changes, tackling the harder people problems like silos, duplication of services, security and management of services. These problems are usually not technical in nature but run roughshod over organizational boundaries and long-standing political domains.

There are also many misconceptions about the term "cloud," usually because there are so many definitions of it. One common misconception is that private clouds are based entirely on virtualization. Though virtualization usually plays a major role in a private cloud deployment, a private cloud can also simply mean a shared infrastructure. Take, for example, Google's Gmail or Microsoft's SkyDrive. Both are public cloud services that don't rely much on virtualization.
Instead, massive amounts of physical hardware are in use behind the scenes. The same is true of a private cloud for your organization, where a shared service is created to replace many duplicate services, and the use of virtualization is evaluated only as part of that service's implementation. For example, a shared file server service might replace dozens of departmental file servers, and it might be implemented on physical hardware because of the incompatibility between VMware vMotion and the Microsoft Cluster Service.

2. Set Expectations of the Journey and the Cloud

You should expect that there cannot be true self-service IT within your organization. IT departments have spent years wrapping process and procedure around the act of creating and managing servers, usually with good reason. Often these processes are responsible for monitoring systems, determining sizing and dependencies, documenting system designs and responsibilities, handling licensing, and more. Allowing anyone to provision a server or service without approval mechanisms in place might be appropriate for certain lab or development environments, but in a production IT environment it is a quick path to chaos, sprawl and outages. However, it is reasonable to expect that much of the provisioning process can be automated and standardized through workflow tools and approval mechanisms, like those found in Embotics V-Commander or the enStratus Networks offerings.

Expect the journey to the cloud to be less about technological challenges and more about people challenges, as processes are torn down and re-created, routine tasks automated, and standardization championed. An IT department that is heavy-handed and unresponsive to users' needs may not be in the right place to start rethinking itself and its work. Similarly, an IT department that is overworked may not have enough free time to pursue cloud solutions, despite the time savings the cloud would provide.
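Those approval mechanisms can be modeled very simply. The Python sketch below is a hypothetical illustration, not tied to any particular product such as V-Commander or enStratus: a provisioning request must pass through an explicit approval gate before any automated build runs.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class State(Enum):
    REQUESTED = auto()
    APPROVED = auto()
    REJECTED = auto()
    PROVISIONED = auto()


@dataclass
class ServerRequest:
    requester: str
    template: str  # a standardized build, e.g. "web-small"
    state: State = State.REQUESTED
    history: list = field(default_factory=list)

    def approve(self, approver: str) -> None:
        # Only pending requests can be approved.
        if self.state is not State.REQUESTED:
            raise ValueError(f"cannot approve from {self.state}")
        self.state = State.APPROVED
        self.history.append(f"approved by {approver}")

    def reject(self, approver: str, reason: str) -> None:
        if self.state is not State.REQUESTED:
            raise ValueError(f"cannot reject from {self.state}")
        self.state = State.REJECTED
        self.history.append(f"rejected by {approver}: {reason}")

    def provision(self) -> str:
        # The automated build kicks off only after approval.
        if self.state is not State.APPROVED:
            raise ValueError("provisioning requires an approved request")
        self.state = State.PROVISIONED
        return f"built {self.template} for {self.requester}"
```

In a real environment the `provision` step would call your build automation; the point of the sketch is that self-service and approval gates are not mutually exclusive.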
It is very important that management prioritizes IT work appropriately and
that it backs up the IT department in the face of complaints about delays in other work due to the focus on cloud computing. The adage "it takes money to make money" can be adapted to IT staff time: it takes an upfront time investment to save lots of time later.

Finally, expect that all levels of management, including human resources, will need to support a transition to the cloud. Not only will all facets of the organization see delays as IT works to improve itself, but IT workers whose primary jobs consist of the tasks being automated might also consider themselves targets for layoffs. They may actively undermine the process. Plan for personnel issues and, from the beginning, communicate to staff that they are valuable and that these efforts are intended to free them to do more interesting, more productive work for the organization.

Working toward a private cloud model is difficult when you don't understand the services your organization relies on. Documentation is key; without it, the relationships between systems are hard to decipher, service-level agreements are unknown and it's easy to make false assumptions. The needs of the people using these services should also be documented so that new cloud services can be built to meet them. This is especially true when centralizing duplicate services within an organization. There was a reason a department built its own infrastructure instead of using shared services; find out that reason to get the department's buy-in and avoid conflicts. Documentation also lends itself to standardization, since a standard that does not account for all needs and system design requirements will quickly accumulate exceptions.

Performance information is also crucial to moving toward shared infrastructure and cloud-based solutions. A year or more of historical performance data, at as high a resolution as is practical, can be very helpful for determining capacity needs and system sizing.
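As a concrete illustration of putting that historical data to work, here is a minimal Python sketch. The sample numbers, percentile choice and 25% headroom factor are invented for illustration, not recommendations; the idea is to size to a high percentile of observed utilization rather than the absolute peak.

```python
import math


def size_from_history(samples, percentile=95, headroom=1.25):
    """Suggest a capacity target from historical utilization samples.

    Sizing to a high percentile instead of the absolute peak avoids
    buying for one-off spikes; headroom covers growth between reviews.
    """
    if not samples:
        raise ValueError("need at least one sample")
    ordered = sorted(samples)
    # Nearest-rank percentile: the smallest value such that at least
    # percentile% of the samples are less than or equal to it.
    rank = max(1, math.ceil(len(ordered) * percentile / 100))
    return ordered[rank - 1] * headroom


# Hypothetical hourly CPU-utilization samples (%) for one workload.
cpu = [12, 15, 14, 30, 22, 95, 18, 25, 17, 20]
target = size_from_history(cpu, percentile=90)
# target == 37.5: the 90th-percentile value (30) plus 25% headroom,
# ignoring the one-off spike at 95
```

Real capacity planning would weigh trends, seasonality and business growth as well, but even a percentile-plus-headroom rule beats sizing every shared service to the sum of its peaks.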
While it isn't required that a private cloud be based on virtualization, it is the common model. Virtualization usually drives certain knowledge and behaviors within organizations. For example, most virtualization software requires centralized storage. That same centralized storage will be a building block for a private cloud, so the knowledge gained in implementing virtualization is very beneficial to private clouds. Likewise, virtualization is usually quite disruptive to data center networks. At the very least, it can turn static traffic patterns into dynamic ones. The move toward shared computing models and cloud-based computing continues that trend and increases the reliance on networks, which usually drives up bandwidth needs. The dialogue started among your virtualization administrators, storage administrators and network administrators as a result of planning for virtualization will become crucial as you advance into the cloud, especially when planning to serve remote offices and mobile users.

5. Automation and Standardization Go Hand in Hand

Automation is one of the key goals organizations have when moving to a private cloud. However, automation is incredibly difficult without standardization. For example, with standards for operating systems and server builds you can make assumptions about the locations of files, the sizes of file systems and the authentication mechanisms in use. Based on those assumptions, you can script the installation of application software and middleware such as Web servers, application servers and firewall rules. This makes an installation easily repeatable, which anyone involved in rapid deployment or disaster recovery will appreciate.

Standardization can be difficult for an organization that has not practiced it. But once you take on standardization, the time savings can be enormous. Consider an organization that has had no standards for operating systems, operating system versions or build processes. Every server is different and
every operation needs special attention. Procedures for patching or installing software differ each time, and success rates waver because of the variations in each host. This usually has two consequences: an incredible amount of staff time is spent performing routine tasks on these servers, and many routine tasks, like patching security vulnerabilities, are skipped because they are too difficult and unpredictable. Standardizing on one or two operating systems and automating build and application deployment processes yields massive IT productivity gains.

Once you've automated much of your environment, you can deliver self-service portals and service catalogs. Though it is unlikely that your organization will ever be 100% self-service-driven, many processes can be automated with workflows; the only human interactions, then, are approvals. The IT department can focus on more important issues, such as how to best support and monitor an application or service. Automation also improves the lives of application administrators and developers by giving them a consistent and repeatable platform to build on. And it means that IT operations staff can build useful, repeatable procedures for handling incidents and monitoring system alarms, instead of treating each server as a one-off exception. It may even open the door to automated responses to alarms.

6. Take a Look at Chargeback

As clouds form and workloads centralize, it is important for organizations to keep track of resource usage and verify that resources are consumed fairly and organizational priorities are accounted for. A chargeback model is one of the most powerful yet most resisted forms of resource accounting. It can be difficult to implement chargeback in an organization with no history of accounting for resource consumption, because it requires inventorying and justifying every server and application as it moves to the cloud.
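Mechanically, chargeback is simple arithmetic over that inventory; the inventorying itself is the hard part. A minimal Python sketch follows, in which the unit rates, department names and VM sizes are all invented for illustration:

```python
# Hypothetical monthly unit rates; real rates come from your cost model.
RATES = {"vcpu": 15.0, "ram_gb": 4.0, "disk_gb": 0.10}


def monthly_charge(vm):
    """Price one VM from its allocated resources."""
    return (vm["vcpu"] * RATES["vcpu"]
            + vm["ram_gb"] * RATES["ram_gb"]
            + vm["disk_gb"] * RATES["disk_gb"])


def charges_by_dept(vms):
    """Roll charges up by department; shared as a report, this is
    showback, and sent as a bill, it is chargeback."""
    totals = {}
    for vm in vms:
        totals[vm["dept"]] = totals.get(vm["dept"], 0.0) + monthly_charge(vm)
    return totals


inventory = [
    {"dept": "finance", "vcpu": 4, "ram_gb": 16, "disk_gb": 200},
    {"dept": "finance", "vcpu": 2, "ram_gb": 8, "disk_gb": 100},
    {"dept": "hr", "vcpu": 2, "ram_gb": 4, "disk_gb": 50},
]
report = charges_by_dept(inventory)
```

The rate math is trivial; the organizational work of agreeing on rates and keeping the inventory honest is where the effort goes.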
The process itself is good for an organization; it reduces waste, seriously curtails sprawl, and puts pressure on application and system administrators to right-size virtual machines. Moving forward carefully and working creatively with management and the CFO can yield good solutions to budgetary issues, and care should be taken to make the chargeback process as unobtrusive and low-overhead as possible.

Organizations that cannot do chargeback right away can usually do showback, where reports are generated for management showing where resources in the cloud are being used. Showback is an excellent first step toward a real chargeback model, and it is useful in the initial stages of a private cloud for setting budgets and expectations. Many organizations that employ showback techniques treat the model almost like chargeback: specific projects and departments are assigned a dollar amount, except the bill is never sent to the customer. It is a powerful way to track and conserve resources, but the method can be completely foreign to developers, application administrators and other staff members who have never needed to justify or account for their resource use before. Care should be taken to ease staff into these new procedures.

Security is always a big part of IT, and a move toward the cloud is a good time to reconsider your approaches to security. It's also a good time to consider new technologies. While cloud computing doesn't necessarily require virtualization, the use of virtualization opens the door to features like inter-virtual machine (VM) firewalling and intrusion detection, agent-free antivirus scanning, and other capabilities delivered via APIs like VMware's VMsafe. While many clouds are built using traditional approaches to security, being open to new approaches can save time and money while adding flexibility. For example, inter-VM firewalling and intrusion detection may replace complex private VLAN setups, saving time and reducing complexity.
Another type of security measure is disaster recovery (DR), with its many products and options dedicated to maintaining off-site copies of virtual machines. Replication of storage at the virtual machine level frees storage administrators from having to acquire and maintain costly array-based replication licenses, WAN accelerators and Fibre-Channel-to-IP converters. Replication can also be done to disparate arrays, which usually isn't possible with array-based options. You can easily manage recovery point objectives (RPOs) and recovery time objectives (RTOs) at the VM level with newer cloud-oriented options. Some products also manage failover and failback, and they can significantly reduce the effort needed to maintain your organization's disaster recovery playbook by automatically applying DR rules to new VMs. Too often, new servers are added to DR plans only after implementation, leaving those servers unprotected in the interim.

Centralization of services into a private cloud has many benefits, but it doesn't make performance monitoring any easier. Relocating services often means more dependence on network performance, which in turn calls for extensive monitoring, plus the tools that perform that task. An increasing number of performance monitoring tools provide a single monitoring interface that is very useful to the system, storage and network administrators who troubleshoot problems. The information gleaned from application monitoring reports often describes symptoms of a problem, not root causes. But it saves enormous amounts of time to be able to rapidly tell that what looks like a network problem is actually a storage issue. Some performance monitoring tools also offer features that aid help desk and support efforts, letting end users, developers and admins trigger high-resolution recordings of network, storage and VM performance data while a problem is occurring.
This is especially useful for intermittent problems and situations that do not trigger other performance alarms. In addition, the data can rapidly
pinpoint the root cause of a problem. Application monitoring is often greatly improved in a private cloud environment, mostly because of better documentation of requirements and the inventory process that organizations use to prepare for consolidation. Virtualization also provides high-availability and fault-tolerance options at the virtual machine level, as well as high availability through the application within a VM.

Private clouds and virtualization technology decouple organizations from many problems that IT groups have been trying to solve for years. Centralizing, standardizing and automating workloads and workload management tasks frees time for other things, such as keeping an eye on new technologies. That, in turn, reduces reliance on external consultants and builds knowledge and expertise in-house. Computer scientist Alan Kay was on to something when he said that the best way to predict the future is to invent it. That is absolutely true within organizations, too. The right team, with an open mind about how organizational goals can be achieved, can reshape IT, making it more predictable and easier to support. Ultimately, instead of just trying to keep up, the staff will have more time to do things that move the organization forward.

10. Remember, We're All in This Together

One of the biggest changes an organization makes on the path to the cloud is internal cooperation. Political and operational walls built over years between parts of your organization serve only as barriers to a cloud project. Private clouds can be quite expensive, and you will not realize any cost- and time-saving benefits when individual departments or divisions implement the technology on their own. Retaining flexibility and meeting the needs of all aspects of your organization are crucial as you centralize into a private cloud. To
do this, though, all parties must be open and honest about their needs, have useful documentation, and work in an iterative fashion. Be sure to make room in a cloud plan for adjustment and change as everyone learns how to work in the new environment.

Silos within IT need to disappear. Very often an organization's network, storage and system administrators work separately and become territorial about their work. The most effective implementations of virtualization and private clouds are supported by teams with members from each of these areas, working together for the benefit of the organization. Applications in the cloud often depend on networking, especially when applications are centralized in data centers that are not local to the users. Storage is crucial to virtualization, and decisions made by storage administrators have long-lasting effects on service delivery, service-level agreements, costs and time. New technologies allow great efficiencies to be gained if IT staff members remember that it isn't their storage or their network or their systems: the cloud and its infrastructure belong to the organization. Systems can be tuned to reduce load on networks and storage. Cloud environments have also begun to replicate in software what storage and network admins have long known as hardware features, such as firewalls and storage replication.

The move to the cloud brings automation and standardization, which may cause hard feelings for staff members who are responsible for the way things are or whose jobs can be automated. Create good avenues of communication, assign no blame, and be sure the IT staff understands that the changes will give them more important and more interesting work to do in the cloud. The IT landscape has changed, your organization is changing with it, and experience with cloud computing is a marketable skill. Changing attitudes, more than changing technologies, will go a long way toward a successful private cloud implementation.
About the Author

Bob Plankers is a virtualization and cloud architect at a major Midwestern university. He also contributes to SearchServerVirtualization.com and SearchDataCenter.com, and is the author of The Lone Sysadmin blog.

10 Steps to the Private Cloud is a SearchCloudComputing.com e-publication.

Margie Semilof, Editorial Director; Lauren Horwitz, Executive Editor; Christine Cignoli, Senior Features Editor; Phil Sweeney, Managing Editor; Eugene Demaitre and Martha Moore, Associate Managing Editors; Linda Koury, Director of Online Design; Rebecca Kitchens, Publisher (rkitchens@techtarget.com)

TechTarget, 275 Grove Street, Newton, MA 02466, www.techtarget.com

© 2012 TechTarget Inc. No part of this publication may be transmitted or reproduced in any form or by any means without written permission from the publisher.