Best Practices for Windows Server Consolidation: A Comprehensive Guide
Introduction

Server consolidation promises great benefits, but it can entail serious pitfalls as well. About 60% of IT shops are now consolidating, and nearly half of those projects will fail to meet their full expectations, according to Gartner. But with good planning, you can deliver exactly the results you expect, and more. This article gives you a framework to structure your project, and some tips to help you plan, execute, and succeed, on time and on budget.

[Figure: Before consolidation, each department (Marketing, Engineering, Operations, Finance) runs its own IT: de-centralised management, significant workload, inconsistent practices. After consolidation, a single IT function provides centralised administration, streamlined workflow, and best-practices management.]

Consolidation from A to Z

Because consolidation impacts your organisation on all levels, it's critical to understand, plan, and communicate the overall flow as effectively as possible. So let's begin with a high-level view of where we're going with this process. The basic steps are:

1. Identify your goals
Formulate the goals of your project. Consolidation has benefits far beyond cost; identify them and you'll be in a better position to justify your project.

2. Characterise the user requirements
User satisfaction is critical to the success of your project. Understand their unique needs to ensure they're met in the consolidated environment.

3. Characterise the servers
Inventory your servers to assess which of the current configurations and attributes must be preserved in the consolidated environment.

4. Plan the consolidated environment
By definition, consolidation means going from many devices to fewer. Virtualisation vastly simplifies this transition. If you attempt to blend all users and
applications into a one-size-fits-all storage pool, the result may be unmet expectations. A virtualisation layer provides the necessary flexibility to ensure that divergent user and server requirements can be individually accommodated.

5. Select the proper migration tool
Every consolidation project has unique requirements, and each of the data migration tools has its own strengths. Picking the right one will simplify your job.

6. Implementation
This is the easiest part when all of your background work is done. You've already overcome the roadblocks and mitigated the technical risks. Now a careful, staged process will make the implementation flow.

[Figure: Virtualisation for Flexible Management. Virtualisation lets you maintain user autonomy, security, and performance for each group (Marketing, Engineering, Operations, Finance) within a consolidated hardware infrastructure of virtual servers.]

1. Identify Your Goals

A consolidation project may not be difficult to justify, especially when you consider all of the side benefits. Server management is an issue; that's a widely known fact. Most organisations have seen the server count grow faster than the IT staff. Seven million new Windows servers were brought online during 2005, a year when IT budgets increased by only a few percent. As server proliferation has galloped along, management workload has often overwhelmed the IT staff's bandwidth to effectively manage all those devices. Consequently, consolidation is driven by a clear and compelling set of motives:

- Too Many Servers, Too Few People: An unmanageable number of servers does more harm than simply driving up management burden. It drives up risk as well. Downtime, reduced productivity, and lost data can all result from simple errors made during routine tasks. Whether applying the latest Microsoft service packs or running backups, the tasks that administrators face on a daily basis can get so repetitive that people can easily succumb to simple yet catastrophic mistakes.
- Cost of Power, Cooling, and Space: Servers located in data centres consume precious space. At $250 to $1,000 per square foot, typical costs for data centre construction mean space is at a premium. In addition, the servers require network ports, Fibre Channel ports and power ports, not to mention the electricity to run all those devices. A recent study completed by Google concluded that power will soon cost more than the servers themselves.

- Takes Too Long to Deploy Servers: Putting a new server online often requires coordination among multiple groups, coordination that can extend the time required to as much as three months. In today's do-it-now environment, that's often unacceptable.

[Sidebar: Consolidation Process. 1. Identify your goals. 2. Characterise the user requirements. 3. Characterise the servers. 4. Plan the consolidated environment. 5. Select the proper migration tool. 6. Implement. A constant view of your goals and your user requirements will keep the project on track.]

- Need for Shared Data Access: Configuring file storage to be accessible by users across multiple platforms (Windows, Linux, UNIX, Mac) requires special attention and planning. When this flexibility is needed, complexity grows.

- Security Exposure: Different data types require different security levels. Ensuring that each data type is afforded the correct security can be a challenge. Mistakes can cause important information to be compromised.

The Solution

Consolidation addresses all these issues because it not only reduces the number of physical boxes, it also simplifies management. This enhances your ability to apply best practices across all user groups and data types. Disaster recovery planning, for example, can be difficult to accomplish across a population of distributed devices, but becomes manageable when the objective is to replicate a consolidated storage pool. Backup, a major problem in distributed settings, is simpler in a consolidated setting.
The same is true for anti-virus implementation, patch management, and security administration.

Planning for Consolidation

So how do you begin? We've found that it's critical to begin with a detailed assessment of your current environment, including the devices themselves and the organisational dynamics that impact each device. We'll discuss the organisational part first.
2. Characterise User Attributes

Organisational dynamics matter because distributed environments often spawn a variety of management and usage models. If the planning for the consolidated environment fails to take them into account, there may be deficiencies that lead to unhappy users, schedule slips, and budget overruns. By understanding the dynamics upfront, you'll be well equipped to deal with them in your plan. The objective is to map the user groups and their key requirements. Here are some elements to bear in mind.

- Divergent Requirements: System administrators sometimes have very good reasons to keep file servers separate. It may be that users have conflicting requirements, such as backup windows that occur at completely different times, or IP addresses that are in two different subnets and cannot be brought into a single subnet and routed.

- Individual Control: User groups may have a strong preference for individual control. Engineering teams, for example, may wish to ensure service levels for their users, and to provide cost controls. Security may play a role as well. Groups that maintain confidential information, such as HR, may resist sharing a server with another group due to perceived security risks.

- Service Level Expectations: Different groups are likely to have different service level expectations. The CFO and his team, for example, may value stability. For them, a consistent, reliable system is often the top priority. The engineering test teams, on the other hand, may value performance; being able to count on peak throughput during test cycles may be essential to ensure their work gets done on time, but they may be perfectly content to accommodate scheduled downtime during other hours.

- Resistance to Change: End users may not want to re-map their access to servers, especially if their interaction with the system is necessary but not perceived as high-value.
Similarly, webmasters or content managers will resist a requirement to scan their HTML pages and change the hard-coded absolute paths. By understanding each group's expectations, you can ensure they will each be met in the consolidated system. Planning is the key element; anticipating requirements and including them in your plan will ensure a smooth rollout.
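One lightweight way to capture this mapping is a small requirements matrix that your planning scripts can query when deciding which groups can safely share a consolidated server. The sketch below is illustrative only; the group names, field names, and compatibility rule are hypothetical examples, not a prescribed schema.

```python
# Sketch of a per-group requirements matrix for consolidation planning.
# All group names, fields, and values are hypothetical examples.

USER_GROUPS = {
    "Finance":     {"backup_window": "22:00-02:00", "subnet": "10.1.0.0/16",
                    "priority": "stability", "shared_server_ok": False},
    "Engineering": {"backup_window": "02:00-06:00", "subnet": "10.2.0.0/16",
                    "priority": "performance", "shared_server_ok": True},
    "Marketing":   {"backup_window": "02:00-06:00", "subnet": "10.2.0.0/16",
                    "priority": "stability", "shared_server_ok": True},
}

def can_share(group_a, group_b):
    """Two groups can share a consolidated virtual server only if both
    permit sharing and their subnets match (no re-routing needed)."""
    a, b = USER_GROUPS[group_a], USER_GROUPS[group_b]
    return (a["shared_server_ok"] and b["shared_server_ok"]
            and a["subnet"] == b["subnet"])
```

Even a simple rule like this surfaces the conflicts early, while they are still planning items rather than rollout surprises.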
3. Characterise Server Attributes

The second step is to characterise the servers to be migrated. Again, the objective is to map the key attributes. Here are some generic categories that may be applicable for your organisation:

- Performance characteristics: Create your own specific definition here. It could be CIFS- or NFS-specific read/write throughput, or very generic IOPS numbers. In any case, create three tiers of performance that describe your file servers to be consolidated.

- Administrative domains: A large number of organisations have adopted a hub-and-spoke model for IT teams, where sub-organisational teams have their own IT team to perform day-to-day administration tasks. It's helpful to characterise these teams in one of three categories:
  o Skill-based: Teams who just want to perform operations at a certain skill level, such as managing Windows users or managing NIS.
  o Departmental: Teams who can do everything pertaining to their own server.
  o Corporate: Teams who have no interest in doing any administrative tasks and would gladly let the corporate IT teams manage the systems for them.
Map all your file servers into one of these three groups based on administrative domains.

- Protocol access patterns: This item will differentiate file servers based on their need to serve only UNIX files, only Windows files, or mixed-mode files.

- Quality of storage: With SATA and SAS drives gaining popularity, some departments want to save cost by utilising them for non-critical operations. Create three tiers of storage out of your planned storage infrastructure. This could be based on RAID level, disk type, disk speed, vendor, make or model.

- Availability constraints: Different departments will have different availability requirements. This item will categorise your file servers based on Service Level Agreements (SLAs).

- Security levels: Lastly, classify your existing file servers based on the level of security to be implemented on them.
The degree of security may differ based on your company's business model and access patterns. Feel free to create your own three tiers.

Sample pre-consolidation checklists

Two sample pre-consolidation checklists are shown below. You will want to create a checklist for every server that is selected for consolidation. In this example, Poseidon is a mid-level file server and Triton is a low-end server.
Server name : Poseidon          Server name : Triton
IP address  :                   IP address  :
Location    : LB1-10            Location    : LB1-35
Team        : Eng-Dev01         Team        : Eng-QA01
VLAN tag    : 1032              VLAN tag    : 989
FC          : Brocade           FC          : Brocade
Network     : Cisco             Network     : Cisco
Contact     : Ginny             Contact     : XXXXXXX
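Checklists like these are easier to validate and query if they are kept as structured records rather than free text. The sketch below mirrors the sample fields above; the record type and the validation helper are hypothetical, and the Poseidon entry reproduces the sample's blank IP address so the helper can flag it.

```python
from dataclasses import dataclass, fields

@dataclass
class ServerChecklist:
    """Pre-consolidation checklist record; fields mirror the samples above."""
    name: str
    ip_address: str
    location: str
    team: str
    vlan_tag: int
    fc: str        # Fibre Channel switch vendor
    network: str   # network switch vendor
    contact: str

def missing_fields(checklist: ServerChecklist) -> list:
    """Return the names of any string fields still left blank."""
    return [f.name for f in fields(checklist)
            if isinstance(getattr(checklist, f.name), str)
            and not getattr(checklist, f.name).strip()]

# The sample Poseidon entry, with its IP address not yet filled in.
poseidon = ServerChecklist("Poseidon", "", "LB1-10", "Eng-Dev01",
                           1032, "Brocade", "Cisco", "Ginny")
```

Running the validator over every record before planning begins gives you a quick completeness report for the whole inventory.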
4. Define Your Consolidation Strategy

When your checklists are complete, you can use them to define your new server environment. The idea is to create virtual resources whose characteristics map to the requirements of each physical resource. You are likely to have multiple servers with similar requirements; these can all be mapped into a smaller number of virtual platforms. It's also possible that you have some servers that must remain segregated, due to security or performance concerns; such devices can be mapped into their own virtual platforms. Here are some mapping approaches to consider:

- 1) System availability focused: Create pools of virtual platforms to meet various requirements for availability.
  o Five-nines availability pool: Four-node cluster protected by snapshots, tape and near-line backup, and a comprehensive disaster recovery protection plan.
  o Four-nines availability pool: Three-node cluster protected by snapshots, tape and near-line backup, but may not have any geographically separate disaster recovery plan.
  o Allocation: Use the checklists to assess and distribute your physical servers to any one of these pools.

- 2) User group/VLAN/subnet focused: Create pools based on physical and data-link layer attributes.
  o Assume that the availability concerns for all the servers in this model are the same.
  o Offers better control for the network administrators to maintain IP ACL separation if required by the end users.
  o A simpler approach to move servers from a physical to a virtual infrastructure. However, this method doesn't offer the flexibility required by most of today's sophisticated data centre needs.

- 3) Performance requirements: Create pools based on performance capabilities. In this case, performance will be dependent on characteristics of the attached disk arrays (for example, high performance will be available on virtual servers mapped to high-end disk, low performance on virtual servers mapped to SATA disk).
  o Groups may be classified based on spindle speed, disk type or RAID level.
  o A block-virtualisation or file-virtualisation appliance can be used to create storage pools to make the management of these storage resources less complex.
  o Physical servers can now be placed within the virtual infrastructure that best fits their disk-quality needs.
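As an illustration, this performance-focused mapping amounts to grouping the characterised servers by their tier attribute. The sketch below is a minimal example of that allocation step; the server names, tier assignments, and pool labels are hypothetical.

```python
from collections import defaultdict

# Hypothetical per-server performance tiers from the characterisation step
# (tier 1 = high-end disk, tier 2 = midrange, tier 3 = SATA).
SERVER_TIERS = {"Poseidon": 2, "Triton": 3, "Atlas": 1, "Hermes": 3}

POOLS = {1: "high-performance pool", 2: "midrange pool", 3: "SATA pool"}

def allocate_pools(server_tiers):
    """Group physical servers into virtual pools by performance tier."""
    pools = defaultdict(list)
    for server, tier in sorted(server_tiers.items()):
        pools[POOLS[tier]].append(server)
    return dict(pools)
```

The same grouping routine works unchanged for the availability-focused or management-focused approaches; only the attribute you group by differs.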
- 4) Management effort: Create pools based on data protection and other data management requirements.
  o The pools are classified broadly on data backup needs.
  o A mission-critical pool will have data backed up to near-line storage before being moved on to a tape backup.
  o A business-critical pool's data will be backed up only to tape (offline) media.
  o A business-important pool will only have snapshot protection.
  o An archival data pool will have a strict expiration policy associated with it.

Now that you have classified your existing file servers into multiple groups based on various individual characteristics, it is time to decide on the technology to be used. There are multiple options to choose from, some with attendant pros and cons. Choosing the appropriate virtualisation technology will render the process less daunting.

Virtualisation technologies

Platform virtualisation: Platform virtualisation solutions create a hardware abstraction layer on top of your choice of initial hardware. Solutions like Intel Virtualisation Technology, VMware VirtualCenter, and Xen 3.0 allow administrators to run multiple operating systems concurrently and serve files from them.

Pros: Primarily meant for application server consolidation, this technology is best suited for customers who are looking for small-scale consolidation of file servers. Ease of use and very simple management interfaces decrease the workload on the administrators and allow installations with limited administration resources to implement the consolidation process.

Cons: Capacity and performance scalability may be limited by the base operating system and its associated hardware. This solution can also suffer from inherent operating system (OS) security vulnerabilities and so may be open to virus attacks.
Name-space virtualisation: Name-space server virtualisation allows system administrators to run multiple different servers on the back-end, with a hardware or software solution on the front-end allowing the servers to provide a unified file-access space. Solutions from Acopia, NuView, NeoPath, etc., are examples of hardware solutions. Microsoft's DFS is a software-only alternative.
Pros: Software solutions are very cost-effective. The simplified management allows administrators to consolidate a number of servers without adding to the administrative overhead. A software solution can also emulate a distributed file system to overcome capacity scalability limitations.

Cons: The underlying servers in this solution continue to suffer from all the normal capacity and performance limitations, security vulnerabilities and hardware stability issues. The solution can also appear to have a number of different failure points, and because multiple vendors are involved, support issues can become difficult to resolve as different vendors shift blame to each other.

Server virtualisation: Server virtualisation technology essentially emulates the whole server, including the hardware, OS and file system. Microsoft Virtual Server and ONStor virtual server technologies are two examples of this option. The latter in particular employs an appliance model which creates a highly scalable and customisable environment. This solution allows multiple flexible consolidation options by providing virtual IP addresses, virtual DNS and NetBIOS names. It also offers a simple yet effective file system consolidation option through some adroit use of directory quota implementation.

A valuable benefit that solutions of this type offer is a flexible privilege delegation model. This can be of two types:

Server-based delegation: Departmental users that want to control their own destinies can utilise this feature, wherein virtual server management can be completely sliced off and delegated to departmental system administrators. This allows them to create file systems, NFS and CIFS shares, and control user access, all within the confines of their own virtual server. This approach permits no view of or access to any other virtual server belonging to another department.
Role-based delegation: In this mode, administration of a virtual server, or groups of virtual servers, can be delegated to a group of users based on each group's role or roles in the organisation. For example, a backup administrator can perform backup and restore operations within one virtual server, or multiple servers, but nothing else. Similarly, a storage administrator can perform storage allocation and management duties, but does not have the option to deal with the Windows AD configuration. In this model, user accounts will be maintained within a UNIX or Windows AD domain structure, not within
the device itself. Privilege mapping can be dynamically changed by simply modifying the AD roles.

Finally, there are some miscellaneous concerns that one should keep in mind while designing a file server consolidation platform.

Billing chargeback: More and more organisations are cutting back on overhead and want to have tighter control over costs and over how and where resources are deployed. This model will be similar to a Storage Service Provider, but focused on internal customers, not external ones. It is important to inquire about the vendor's capabilities to offer high-level SLA and Quality of Service modelling. Many vendors provide pluggable and highly tunable billing and chargeback modules that can be used to calculate the exact dollar amount being spent not only on hardware but also on management and administration overhead.

Dynamic load balancing: File server usage patterns and the load on the system are often controlled by external variables. Today's virtualisation solutions offer various methods to dynamically distribute CPU, memory and I/O bandwidth to the file servers that need those resources, in an efficient, performance-optimising manner.

Trending and profiling: Consolidation eliminates proliferation, but that also means that your individual usage patterns may become difficult to predict. A good trending and profiling solution bundled with the virtualisation system will support the dynamic provisioning of storage and other resources using an auto-grow or thin-provisioning model. Keeping utilisation in the high 70% range normally proves to offer the best return on capital investment, as long as this doesn't result in additional management cost.

Segmented monitoring and reporting: As hardware devices collapse into powerful virtualisation platforms, it becomes ever more important to collect data from them and extract valuable information which can be sorted into customisable reports.
These reports can be plugged into enterprise management software or used for other purposes. Most vendors offer extensive SNMP (Simple Network Management Protocol) MIBs (Management Information Bases) to monitor the system with a sufficient level of detail and granularity.
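The auto-grow rule behind the high-70% utilisation target above can be sketched as a simple sizing check. The function below is a hypothetical illustration, not any vendor's algorithm: the 78% target and 25% growth step are example thresholds you would tune for your own environment.

```python
def autogrow_recommendation(used_gb, provisioned_gb,
                            target=0.78, grow_step=0.25):
    """Recommend a provisioned size that keeps utilisation near the
    high-70% target discussed above. Thresholds are illustrative.

    Returns the new provisioned size in GB; unchanged if utilisation
    is already at or below the target.
    """
    if used_gb / provisioned_gb <= target:
        return provisioned_gb
    # Grow in fixed percentage steps until back under the target.
    new_size = provisioned_gb
    while used_gb / new_size > target:
        new_size *= (1 + grow_step)
    return round(new_size)
```

A trending system would feed this kind of check with measured usage, so pools grow ahead of demand instead of administrators reacting to full-volume alerts.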
Licensing costs: Some NAS vendors charge separately for CIFS and NFS protocols. In these deployments, you can save money by placing multi-protocol requirements only where they're absolutely necessary.

Final tips for defining your consolidated environment:

- Use the checklists to help distribute load equally across the post-consolidation virtualisation platforms.
- Keep the multi-protocol file servers in a single group, separate from single-protocol file servers. You can then design more secure ACLs on the listening interfaces of the server and on upstream routers/switches. In some highly secure networks, it is common for network administrators to disable all the NFS-related ports on the switches and routers on the path to a Windows/CIFS file server, and vice versa.

5. Select a Data Migration Tool

Migrating data to your new, post-consolidation environment can be complex, but picking the right migration tool goes a long way toward ensuring a clean process. Here are some elements to consider in selecting a tool.

- Will the migration be offline or online? The answer to this question usually will come from your user community. If your user community cannot take any downtime at all, then you will need some device/system to provide a transparent migration staging cache. Products like the EMC Rainfinity series provide this solution. This approach is very efficient; however, some companies find that its high cost is not completely justified.

- Can you employ virtualised migration devices? Some appliances, like the IBM SAN Volume Controller, allow a LUN to be imported in image mode and then moved onto a virtualised platform. The target could be a LUN or a managed VLUN. This can be an easy and fast method; however, the high associated cost and the vendor lock-in can be a problem.

- Migration through restore: You can create the new environment by executing a data restore from your backup copy.
If the backup is up-to-date, and you can afford the associated downtime, this method could be the simplest one to implement. It has its limits, though. Restore from tape can be slow, cumbersome, and unreliable.

- Use disk as a staging device: Sometimes the easiest thing to do is to use native OS utilities such as xcopy, dd or ufsdump to move data from your source to your target system. This is not going to work well if you have a multi-protocol file server.

- Using software tools: This is a very common method where administrators use command-line tools such as rsync, remote copy or robocopy, or UI-driven tools such as secure copy or CIFS consolidator, for moving data between source and target servers. This is a fast and
reliable method, and it also allows a re-sync option. The re-sync option is very powerful, because it allows you to pre-stage most of the copy and minimise the effective downtime during the actual migration process.

- Multi-protocol migration: Special attention should be paid to data sets with mixed-mode permissions (UNIX/Windows) or special settings such as file system user, group or directory quotas. Make sure that the tools you use understand them and will not lose them during the migration.

6. Implementation

You have done all the hard work and planned everything to the last fine detail; now it is time to implement the solution. Here are some of the areas you should focus on for D-Day.

- Notification window: Create a list of departments and applications that are going to be affected during this migration process and the associated downtime. Give ample notification time for unexpected things. If you have followed the steps in this document, there will be very few things that can go wrong, but it is always better to be safe. Don't forget to send multiple reminders, so that people can close their applications and back up anything they want to back up locally.

- Notification list: Designate a person for each department, if you belong to a large organisation, and make that person responsible for communicating all messages from the corporate IT team downwards. It is important that you include this person in all communications regarding any special instructions you may have for them. For example, if the team has a local database that is going to be affected, it may require special instructions to close it gracefully.

- Test your offline backup: If you have a backup on offline media, do a random restore a week or two before the migration. This will give you an idea of where you stand. If something goes awry during the migration you'll want to know that you have options.
- Test your temporary spools: If your migration involves corporate IT resources like mail or DNS servers, you may need to set up some temporary machines to act as the resource, collecting mail and DNS updates and holding them until the new servers come online. Test these machines on a separate network before you actually move them online. Make sure you test from the intranet as well as from the Internet.

- Activity logs: Enable the activity-logging capability of your SSH or telnet program during the actual migration. This will allow you to go back and check things if you want to during the post-mortem phase. Most applications offer this facility, including PuTTY, SecureTTY and HyperTerminal.
- HTML links: Pay attention to HTML links on your web pages. Use a software program, if available, to crawl through your pages to find any broken links that may be pointing to old storage server resources.

- Router/firewall access controls: If your migration caused server names or IP addresses to change, you will also have to modify your router access control lists and any other firewall entries you may have. Remember to create a sample set before the migration and test it. Apply it soon after your migration.

- Post-migration helpdesk: Create a temporary helpdesk for answering post-migration questions your end users may have. Create single-page cheat sheets for common tasks such as changing DNS or mail server client configurations.

- Document everything: There is no substitute for good documentation. Document every change that happened to your data centre. Don't leave out server names, rack space, locations, patch-panel port numbers, or even the colour of the unit if possible. Also keep a soft copy of the activity log you captured during the actual migration somewhere public for your team to evaluate later.

- Post-mortem: Conduct a post-mortem study a week after the migration to look back and assess what went right and what didn't. Document it; you will be using it in your next project, and it will prove to be an invaluable tool!

Conclusion

File server consolidation through virtualisation is fast becoming a critical initiative in large enterprise organisations around the world. With consolidation, you can improve resource utilisation, simplify infrastructure management and reduce capital and operating costs, all while increasing your organisation's ROI. Virtualisation technologies can help by allowing you to consolidate Windows and UNIX servers while preserving your end users' access patterns and accessibility expectations. With a little careful planning, you'll be ready for a smooth and uneventful migration from the real to the virtual world.