Cloning Complex Linux Servers




Cloning A Linux Machine That Has A Complex Storage Setup

Where I work we have CentOS servers whose drives are in various software RAID and LVM2 configurations. I was recently asked to prepare to back up/migrate some of these servers using "ghost-like" technologies. As I began working with various tools (Clonezilla, g4l, etc.) I quickly learned that all of our storage flexibility came with a trade-off of great complexity. None of the available tools was flexible enough to do the job the way I wanted (back up used space only and work properly with mdadm), so I decided to do things by hand.

The server we will be cloning in this article has 3 SCSI drives. Root (/), /boot and swap live in RAID 1 arrays, and everything else across all the drives is in a RAID 5 array: md0 is /boot, md1 is /, md2 is the RAID 5 array, and md3 is the RAID 1 swap array. On top of the RAID 5 array, md2, are LVM volumes.

I wrote this article in hopes that it will help others (and myself, after I forget) who might have to do a similar task in the future. These steps worked for me, but they may not work for you. In other words, this text comes with no express or implied warranty. Back up your data and be careful, and if something should go terribly wrong, don't come crying to me--you have been warned.

What you will need:

Two servers with similar hardware. The hard drives in the replacement/destination server must be the same size or larger than the drives in the source server. Also, if the hardware is significantly different, the destination server may not boot when you are through with the procedure even though the data was copied properly. This is usually due to kernel or initrd module/built-in differences, or to boot loaders needing to be updated to reflect the new configuration.
Physical access to the old machine and its replacement
The ability to take both machines offline for a period of time
A copy of the System Rescue CD and an optical drive in each server
Access to an NFS share, external hard drive, or some other type of temporary storage
Lots of free time

Let's get started.

1) Go to the old server and print out some of its current vital storage information. You may need this when you boot the System Rescue CD on either of the machines. It is probably a good idea to get:

cat /proc/partitions
cat /proc/mdstat
cat /etc/mdadm.conf

fdisk -l
cat /etc/fstab
lvdisplay
vgdisplay
pvdisplay
lvscan
vgscan
pvscan
mdadm --detail /dev/md0
mdadm --detail /dev/md1
mdadm --detail /dev/md2, etc.

2) Boot the System Rescue CD in the old server.

3) Get the system on the network.

dhclient eth0

Or you can use the ifconfig commands if you prefer.

4) Mount the space you will be using to temporarily store the images of the server you are cloning. Since we will be using partimage, each image should take up no more than the used space on the original partition you are backing up. I.e.: if you have a 50 GB partition and df shows that your users are only using 20 GB of it, the image for that partition should be less than 20 GB, because partimage only backs up space that is in use and then compresses the image itself. Once you have an external hard drive or an NFS server with the right capacity picked out, mount it to /mnt/sysimages. I will be sending my data over ssh using sshfs, so look at my steps below, modify them to fit your needs, and then issue a df to make sure it worked.

mkdir /mnt/sysimages
sshfs root@server:/md0/sysimages /mnt/sysimages

5) Copy the partition tables from all of your hard drives to the mounted directory.

sfdisk -d /dev/sda > /mnt/sysimages/sda.partitions
sfdisk -d /dev/sdb > /mnt/sysimages/sdb.partitions
sfdisk -d /dev/sdc > /mnt/sysimages/sdc.partitions
etc.

Modify the lines above to fit your needs and then cat the files to make sure they look right.

6) See if the System Rescue CD brought up your raid arrays properly.

cat /proc/mdstat

If it did not, you will have to bring them up manually. Hopefully the commands below will help you do this. Increase the number each time for a new array.

mdadm --assemble -m 0 /dev/md0
mdadm --assemble -m 1 /dev/md1
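If you have several arrays, the per-array commands can be generated rather than typed out; a minimal sketch, assuming the md0-md3 layout used in this article (adjust the range and device names for your setup):

```shell
# Generate one assemble command per array (md0-md3, assembled by
# super-minor number, matching this article's example server).
# Review the generated script, then run it on the rescue system:
#   sh /tmp/assemble.sh
PLAN=/tmp/assemble.sh
: > "$PLAN"
for n in 0 1 2 3; do
    echo "mdadm --assemble -m $n /dev/md$n" >> "$PLAN"
done
cat "$PLAN"
```

Nothing is assembled until you run the generated script yourself, so a typo costs you a review instead of an array.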

etc.

If that doesn't work, these articles may help:

http://www.linuxjournal.com/article/8874
http://andrewsblog.org/2008/11/16/ing-a-linux-software-raid-array-lvm/
http://www.howtoforge.com/recover_data_from_raid_lvm_partitions

7) Bring up LVM.

vgchange -a y
/etc/init.d/lvm stop
/etc/init.d/lvm start

Make sure you see the volumes by using lvdisplay. There should also be some "dm" devices in cat /proc/partitions if things are working correctly. If this does not go right by the numbers, you may have to manually rebuild the LVM config files using the info that LVM stores at the beginning of the disks. See the links below for more info:

http://www.howtoforge.com/recover_data_from_raid_lvm_partitions
http://www.linuxjournal.com/article/8874

8) Back up your LVM configuration.

vgcfgbackup --file /mnt/sysimages/lvmcfg.backup

cat the file and make sure it looks right.

9) Back up the MBRs of your boot drives--or of all your drives, if you are not sure which they are.

dd if=/dev/sda of=/mnt/sysimages/sda.mbr bs=512 count=1
dd if=/dev/sdb of=/mnt/sysimages/sdb.mbr bs=512 count=1
etc.

10) Make sure none of the partitions you want to back up are mounted.

11) Back up your partitions using partimage. Note that you can specify normal partitions, raid arrays, or LVM volumes using the commands below. You want to copy the partition at the highest level possible.

partimage -d -b -z1 -M save /dev/md0 /mnt/sysimages/md0.image
partimage -d -b -z1 -M save /dev/md1 /mnt/sysimages/md1.image
partimage -d -b -z1 -M save /dev/testgroup/testvolume /mnt/sysimages/testvolume.image

That last one was for an LVM volume.

-d means don't ask for a description
-b means batch; don't ask questions
-z1 means use standard (gzip) image compression
-M means don't back up the MBR. We already did that.
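The save commands above all follow one pattern, so with many arrays and LVM volumes it helps to generate them; a sketch that only builds a reviewable command list (DEST and the device names are this article's examples--edit both):

```shell
# Build a list of partimage save commands, one per device to back up.
# DEST and the device list are assumptions from this article; adjust them.
DEST=${DEST:-/mnt/sysimages}
PLAN=/tmp/partimage-save.sh
: > "$PLAN"
for dev in /dev/md0 /dev/md1 /dev/testgroup/testvolume; do
    # Name each image after the device it came from.
    img="$DEST/$(basename "$dev").image"
    echo "partimage -d -b -z1 -M save $dev $img" >> "$PLAN"
done
cat "$PLAN"    # inspect, then run with: sh /tmp/partimage-save.sh
```

Generating the list first also gives you a record of exactly what was imaged, which is handy at restore time.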

Glance at the file sizes of the data on your temporary storage device and make sure they seem reasonable.

12) Cleanly shut down the source server. Make sure all data is done writing.

sync
sync

Unmount your temporary storage space.

cd /
umount /mnt/sysimages

Make sure nothing else is mounted. Shut down LVM.

vgchange -an
/etc/init.d/lvm stop

Stop your raid arrays.

mdadm -S /dev/md0
mdadm -S /dev/md1
etc.

Shut down the machine.

shutdown -h now

13) Boot the replacement server with the System Rescue CD.

14) Get the system on the network.

dhclient eth0

Or you can use the ifconfig commands if you prefer.

15) Mount the space you are using to temporarily store the partition images to /mnt/sysimages. I will be getting my data over ssh using sshfs, so look at my steps below, modify them to fit your needs, and then issue a df to make sure it worked.

mkdir /mnt/sysimages
sshfs root@server:/md0/sysimages /mnt/sysimages
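Before touching the destination drives, it is worth confirming that every artifact from the source run actually made it to the share. A small sketch; the filename list matches this article's examples, so extend it for your setup:

```shell
# Check that the backup share holds every file the restore will need.
# The filename list is this article's example set; add your own entries.
check_backup() {
    dir=$1
    missing=0
    for f in sda.partitions sdb.partitions sda.mbr sdb.mbr \
             lvmcfg.backup md0.image md1.image; do
        # -s catches both absent and zero-length files.
        if [ ! -s "$dir/$f" ]; then
            echo "MISSING or empty: $dir/$f"
            missing=1
        fi
    done
    if [ "$missing" = 0 ]; then
        echo "all expected files present"
    fi
    return $missing
}

check_backup /mnt/sysimages || echo "fix the share before continuing"
```

A zero-length .mbr or .partitions file here would silently wreck the restore later, so the -s test matters more than it looks.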

16) Double-check that there is no data you need on the drives in the destination server. IF THERE IS, IT WILL BE LOST IN THE NEXT STEPS.

17) Make sure nothing is mounted, and stop any raid arrays that may be running.

mdadm -S /dev/md?
swapoff -a

NOTE: None of this should be necessary if you are using clean new drives.

18) Restore the MBRs to your boot drives--or to all your drives, if you are not sure which they are.

dd if=/mnt/sysimages/sda.mbr of=/dev/sda bs=512 count=1
dd if=/mnt/sysimages/sdb.mbr of=/dev/sdb bs=512 count=1
etc.

19) Restore the partition tables to your drives.

sfdisk /dev/sda < /mnt/sysimages/sda.partitions
sfdisk /dev/sdb < /mnt/sysimages/sdb.partitions
sfdisk /dev/sdc < /mnt/sysimages/sdc.partitions

NOTE: I had to use --force on some drives; I am not sure why.

Now update the partition information the kernel knows about by issuing the command partprobe. Verify that things look right with fdisk -l and cat /proc/partitions. Reboot and check again if things don't look right.

20) Get out those files you printed. It's time to manually re-create our raid arrays.

mdadm -C -R -f /dev/md0 -l1 -n2 /dev/sda1 /dev/sdb1
mdadm -C -R -f /dev/md1 -l1 -n2 /dev/sda2 /dev/sdb2
mdadm -C -R -f /dev/md2 -l5 -n3 /dev/sda5 /dev/sdb5 /dev/sdc1
mdadm -C -R -f /dev/md3 -l1 -n2 /dev/sda3 /dev/sdb3

-C is create
-R is run
-f is force
-l is the raid level (1, 5, 0, 10)
-n is the number of devices

You might have to break out the man pages if you have an advanced setup with spare drives or non-default chunk sizes. You may also have to power up the source server again and do a side-by-side comparison to figure out what to do, if your printouts and your memory aren't good enough.

21) Watch your arrays rebuild.
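The rebuild can take a while. Rather than re-running cat by hand, you can poll until the kernel reports no resync or recovery in progress; a sketch (the resync/recovery keywords are the usual md ones, but verify them against your kernel's /proc/mdstat output):

```shell
# Wait until a given mdstat file shows no resync or recovery running.
# Defaults to the real /proc/mdstat; pass another path for testing.
wait_for_rebuild() {
    stat=${1:-/proc/mdstat}
    while grep -Eq 'resync|recovery' "$stat" 2>/dev/null; do
        sleep 10
    done
    echo "no rebuild in progress"
}

wait_for_rebuild    # on the real box this blocks until the arrays are clean
```

You can safely continue with the LVM steps while the arrays resync, but waiting costs nothing and removes one variable if something later goes wrong.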

cat /proc/mdstat

22) Re-create your LVM physical volumes. Grab the physical volume UUIDs from the lvmcfg.backup file (cat /mnt/sysimages/lvmcfg.backup) and issue commands like this:

pvcreate --uuid UUID_FROM_BACKUP_FILE /dev/md2

Check this with pvdisplay.

23) Restore your LVM volume groups.

vgcfgrestore --file /mnt/sysimages/lvmcfg.backup testgroup

You have to issue a command like that for every volume group. Verify with lvdisplay.

24) Activate LVM.

vgchange -ay

25) Restore your partitions using partimage.

partimage -e -b restore /dev/md0 /mnt/sysimages/md0.image
partimage -e -b restore /dev/md1 /mnt/sysimages/md1.image
partimage -e -b restore /dev/testgroup/testvolume /mnt/sysimages/testvolume.image

-e means erase empty blocks
-b means batch; don't ask questions

26) Re-format your swap partition(s)/array(s).

mkswap /dev/md3

27) Grub is finicky with raid 1 arrays and with being copied over with dd, so reinstall it.

/sbin/grub
root (hd0,0)
setup (hd0)
root (hd0,0)
setup (hd1)
quit

NOTE: This example is for two drives that have /boot in a raid 1 array. Modify it to fit your needs.

28) Cleanly reboot the destination server with your fingers crossed. Make sure all data is done writing.

sync
sync

Unmount your temporary storage space.

cd /
umount /mnt/sysimages

Make sure nothing else is mounted. Shut down LVM.

vgchange -an
/etc/init.d/lvm stop

Stop your raid arrays.

mdadm -S /dev/md0
mdadm -S /dev/md1
etc.

Reboot the machine.

shutdown -r now

29) Did it come up properly? If not, boot the System Rescue CD and see if the data is there. If it is, the problem is likely a kernel, initrd, bootloader, or fstab issue stopping the machine from booting. If the data isn't there, re-check your steps--and better luck with the next round.

Sites used while writing this:

http://www.partimage.org/forums/viewtopic.php?t=46
http://www.sysresccd.org/sysresccd-manual-en_network
http://www.ducea.com/2006/10/09/partition-table-backup/
http://www.linuxjournal.com/article/8874
http://crazedmuleproductions.blogspot.com/2008/11/fedora-9-prep-usingpartimage-wraid.html
http://andrewsblog.org/2008/11/16/ing-a-linux-software-raid-array-lvm/
http://www.howtoforge.com/recover_data_from_raid_lvm_partitions
http://docs.hp.com/en/b2355-90692/vgcfgbackup.1m.html
http://www.cyberciti.biz/tips/linux-how-to-backup-hard-disk-partition-table-mbr.html
http://ubuntu.wordpress.com/2005/10/20/backing-up-the-mbr/
http://docs.hp.com/en/b2355-90692/vgcfgrestore.1m.html
http://linux.about.com/library/cmd/blcmdl8_vgcfgrestore.htm
http://www.noah.org/wiki/dd_-_destroyer_of_disks