Root-on-LVM-on-RAID HOWTO

Massimiliano Ferrero

This document describes a procedure to install a Linux system with software RAID and Logical Volume Manager (LVM) support and with a root file system stored in an LVM logical volume. The procedure can be used to install such a system from scratch, without the need to install a normal system first and then convert root to LVM and RAID.

Introduction

New Versions of This Document

You can always view the latest version of this document at the URL (...).

Copyright, License and Disclaimer

This document, Root-on-LVM-on-RAID HOWTO, is copyrighted (c) 2003 by Massimiliano Ferrero. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.1 or any later version published by the Free Software Foundation; with no Invariant Sections, with no Front-Cover Texts, and with no Back-Cover Texts. A copy of the license is available at (...).

Linux is a registered trademark of Linus Torvalds.

Disclaimer

This document records the methods used by the authors. All reasonable effort is expended in making sure that the description is indeed accurate. The document is distributed in good faith and with the hope that the contents may prove useful to others. There is, however, no guarantee, express or implied, that the methods and procedures described herein are fit for the stated purpose. The authors disclaim any and all liability for any and all consequences, direct or indirect, of the application of these methods. No liability for the contents of this document can be accepted. Use the concepts, examples and information at your own risk. There may be errors and inaccuracies that could be damaging to your system. Proceed with caution: although this is highly unlikely, the author(s) do not take any responsibility.

All copyrights are held by their respective owners, unless specifically noted otherwise. Use of a term in this document should not be regarded as affecting the validity of any trademark or service mark. Naming of particular products or brands should not be seen as endorsements.

Credits / Contributors

This document is of course based on the work of many people. The author wishes to thank:

- George Karaolides <george(at)karaolides[dot]com> for his "Unofficial Kernel 2.4 Root-on-RAID and Root-on-LVM-on-RAID HOWTO" (...). The procedure described here is largely based on his document.
- Eduard Bloch <blade(at)debian[dot]org> for the RAID and LVM extdisk (...) and for info on how to install in a chroot environment.
- Everybody else who has made information available in other documents or on Usenet.

Feedback

Feedback is most certainly welcome for this document. Send your additions, comments and criticisms to the following address: <[email protected]>.

What this document is about...

This document describes a procedure to install a Linux system with software RAID and Logical Volume Manager (LVM) support and with a root file system stored in an LVM logical volume. As stated in the next section, there are already other documents on this topic: this procedure differs from them in that it can be used to install such a system from scratch, without the need to install a "normal" system first and then convert root to LVM and RAID.

The procedure has been developed and tested using the Debian "Woody" 3.0 distribution and a 2.4 kernel. The install process and steps illustrated are those of the Debian setup, but this method should be valid for any distribution, provided that some user interaction is allowed during install (that would be: pause the install process, open a shell and type some magik).

This procedure describes the installation of a system with two disks, using RAID 1 arrays (mirroring). All given commands refer to two IDE hard disks on the primary IDE channel (/dev/hda, /dev/hdb): the procedure can easily be modified for different disks, more RAID arrays or different RAID levels (e.g. RAID 5).

...and what is not

Converting a root file system to LVM

The situation where a system has already been installed without LVM support or with root on a normal partition has already been discussed by other documents. Look at the LVM howto, which has a section dedicated to this task (...). Look also at the excellent "Unofficial Kernel 2.4 Root-on-RAID and Root-on-LVM-on-RAID HOWTO" (...) by George Karaolides: his howto covers root migration from a standard partition to LVM extensively and is also a good source of other information about LVM.

General RAID and LVM info

For an introduction to RAID systems look at these pages and documents:

- RAID and Data Protection Solutions for Linux, by Linas Vepstas; lots of links and info.
- The Software-RAID HOWTO, by Jakob Østergaard
- What is RAID, by Mike Neuffer

For general info about the Logical Volume Manager look at:

- The LVM home page
- LVM howto, by Sistina Software Inc.

RAID and LVM with 2.0.x and 2.2.x kernel versions

The procedure described in this howto has been tested only with kernel versions 2.4.x. For information about RAID on older kernel versions look at:

- Root RAID HOWTO cookbook, by Michael A. Robinton

Required Software

Required software is:

- Debian (...) "Woody" 3.0 install CD (CD 1) or network install CD/floppy.

- Eduard Bloch's LVM-and-RAID extension disk (...). It can also be downloaded from this mirror (...). Note: it seems the extension disk image dated 8-Dec-2002 is missing the command mkraid. Use the mirror!
- lvm10, initrd-tools, raidtools2, mdadm packages (plus any packages they depend on).

Required Hardware

- A Linux compatible system: this is at the very least a loose definition, since there have been reports of Linux being installed on cash registers and MP3 players.
- A floppy disk drive, for loading the extension disk and maybe for booting from install floppies.
- A CDROM drive or a Linux compatible network card, for booting and installing the system.
- At least two hard disks: with only one there would be little gain in installing a RAID enabled system, unless the installation of a second hard disk has already been planned.

Some Warnings

Before starting to install the system, read sections Overview, Before starting install, Installing: disks and file systems and Installing: kernel, RAM disk and packages in their entirety. At some points of the install process more than one path is possible: reading ahead makes the choices clear in advance.

A software RAID system and/or a system with root-on-lvm requires special attention: in case of a crash, recovery is more complex than usual and special tools are needed (the same used to install it). KNOW WHAT YOU ARE DOING before you put such a system up and running.

Overview

This section presents an overview of the overall install process. In a few words, the process of installing root over RAID and LVM consists of interrupting the normal install at the very beginning, loading RAID and LVM support, configuring RAID arrays and LVM volumes, installing a kernel with RAID and LVM support and then completing a normal install.

Here are the more detailed installation steps:

1. Start the Debian Woody 3.0 installation booting with the bf2.4 kernel.
2. Open a shell.
3. Load an extdisk with RAID and LVM support.
4. Partition hard disks.

5. Create RAID arrays: create one boot array and at least one additional array.
6. Start LVM.
7. Create LVM physical volumes: one for each additional RAID array (not for the boot array).
8. Create LVM volume groups.
9. Create LVM logical volumes.
10. Create and activate swap space.
11. Create file systems.
12. Create mount points and mount target file systems.
13. Return to the main install menu.
14. Install a kernel with RAID and LVM support (if using the stock bf2.4 Debian kernel; if not, skip this point).
15. Configure the network.
16. Install the base system.
17. Open a shell and chroot into the target file system.
18. Configure APT.
19. Install a kernel with RAID and LVM support (if using a custom kernel; if the stock Debian kernel has already been installed, skip this point).
20. Install RAID and LVM packages: lvm10, initrd-tools, raidtools2 and mdadm.
21. Optional: install devfsd.
22. Start LVM (reprise).
23. Install a RAM disk with LVM support.
24. Modify configuration files (/etc/raidtab, /etc/fstab, /etc/lilo.conf, /etc/modules).
25. Write the lilo configuration to disk.
26. Exit from the chrooted environment.
27. Return to the main install menu.
28. Reboot the system (reboot from hard disk, not from the install CDROM).
29. Terminate the installation.

Note: The kernel install step can be done at different moments; if the stock Debian kernel is to be used it is more convenient to do it before chrooting into the target file systems, while if a custom kernel is to be used it is suggested to do it after chrooting. Look at section RAID and LVM support loaded as a module vs statically linked for more details.

Before starting install

Before starting, acquire all software indicated in section Required Software. Below are links to a kernel with static RAID and LVM support and to RAM disk images that can be used to boot the system. Before downloading any of these read sections Install custom kernel with statically compiled RAID and LVM and Install RAM disk with LVM support.

- Kernel package with static RAID and LVM support: (...)
- RAM disk for kernel bf2.4 with modular RAID and LVM support: (...)
- RAM disk for kernel with static RAID and LVM support: (...)

Preparing RAID and LVM extension disk

Write the extension disk image to a blank formatted floppy. From a Linux system do:

    dd if=lar1440.bin of=/dev/fd0 bs=1024 conv=sync ; sync

From a Windows 9x or MS-DOS system use RaWrite3 or a similar tool (...); from a Windows NT, 2K or XP system use NTRawrite (...). Look at this section (...) of the Debian install manual for more info.

Preparing script and configuration files disk

To speed up the install process, place on another formatted floppy disk some scripts and configuration files that will be used during install:

- Install script install_lvm1 (...): this script will execute almost all commands necessary to create RAID arrays, logical volumes and file systems and mount them. The same commands can be executed one by one, but the use of this script speeds up the whole process. The suggestion is to complete at least one installation by hand, then use the script. The script must be adapted to the disk and logical volume configuration. A sketch of what such a script can look like is shown after this list.
- File /etc/fstab (...)
- File /etc/lilo.conf (...)
- File /etc/raidtab (...)
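The original install_lvm1 script is not reproduced in this copy of the document. Below is a minimal sketch of what such a script can look like, assuming the two-disk layout and the names used throughout this HOWTO (vg00, /dev/md1, /dev/md2); it is only an illustration of the commands described in the following sections, not the author's original script, and must be adapted to your own layout:

    #!/bin/sh
    # Sketch of an install helper script (hypothetical, adapt to your layout).
    # Assumes /etc/raidtab has already been copied into /etc.
    set -e

    # Create the RAID arrays described in /etc/raidtab
    mkraid /dev/md1
    mkraid /dev/md2

    # Start LVM, then create physical volume, volume group and logical volumes
    vgscan
    pvcreate /dev/md2
    vgcreate -A n vg00 /dev/md2
    for lv in "root 128" "home 128" "opt 16" "tmp 128" "usr 256" "var 128" "swap 128"; do
        set -- $lv
        lvcreate -A n -L "$2" -n "$1" vg00
    done

    # Swap space and file systems
    mkswap /dev/vg00/swap && swapon /dev/vg00/swap
    mke2fs /dev/md1
    for fs in root home opt tmp usr var; do
        mke2fs -j /dev/vg00/$fs
    done

    # Mount everything under /target
    mount /dev/vg00/root /target
    for d in boot home opt tmp usr var; do
        mkdir -p /target/$d
    done
    mount /dev/md1 /target/boot
    for fs in home opt tmp usr var; do
        mount /dev/vg00/$fs /target/$fs
    done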

Installing: disks and file systems

This section of the document goes from the beginning of the installation to mounting the file systems. The main steps consist of partitioning disks, preparing RAID and LVM, and creating file systems.

Start installation

Boot from Debian 3.0 "Woody" CD 1 and at the lilo prompt use bf24, to boot with the 2.4 kernel. Choose the install language and configure the keyboard, then open a shell (this option is near the end of the menu).

Load RAID and LVM support

Insert the "LVM-and-RAID" floppy into the floppy drive and use:

    extdisk

to load RAID and LVM support. The following (or similar) messages should be displayed; ignore any warning message regarding the kernel:

    Trying the floppy drives, please wait...
    Locating a new mount point...
    Copying resource files...
    Warning: loading /ext1/lib/modules/ bf2.4/kernel/drivers/md/md.o will taint the kernel
    Warning: loading /ext1/lib/modules/ bf2.4/kernel/drivers/md/linear.o will taint the ke
    Warning: loading /ext1/lib/modules/ bf2.4/kernel/drivers/md/xor.o will taint the kerne
    Warning: loading /ext1/lib/modules/ bf2.4/kernel/drivers/md/raid0.o will taint the ker
    Warning: loading /ext1/lib/modules/ bf2.4/kernel/drivers/md/raid1.o will taint the ker
    Warning: loading /ext1/lib/modules/ bf2.4/kernel/drivers/md/raid5.o will taint the ker
    Warning: loading /ext1/lib/modules/ bf2.4/kernel/drivers/md/lvm-mod.o will taint the k
    Done: LVM and RAID tools available now.

After the extension disk has been loaded the floppy can be removed from the drive.

Note: The Debian install process works by booting a kernel and mounting a RAM disk, so extdisk loads the RAID and LVM commands from the floppy disk into the RAM disk. Please bear in mind that any configuration file (such as /etc/raidtab) suffers the same fate, so any file copied to the /etc folder during install is lost when the machine is rebooted. This is true at least until the target file systems are created and mounted. Only at that point can configuration files be copied to the "real" /etc (e.g. /target/etc) and packages installed in the "real" file systems.

Partition hard disks

Partition the hard disks: the minimal configuration requires two RAID arrays, one for the boot file system and one for the LVM volume group.

The procedure has been written for two IDE disks on the first IDE channel (/dev/hda, /dev/hdb): using fdisk create two partitions on each disk, the first one 20 or 25 MB large, the second one using all remaining space. The first partition will be used to hold the boot volume, the second one will hold the LVM volume group. If the two disks are not equal in size, the second partition must fit on both disks, so it has to be as large as the remaining space on the smaller disk. Ex.: if the disks are ... MB and ... MB large, create 25 MB and ... MB partitions.

Warning! Partitioning hard disks can destroy all data. Check that the disks do not contain any valuable data before going on.

Use:

    fdisk /dev/hda

to partition the first hard disk: delete any existing partition, create one primary partition 25 MB large (/dev/hda1), mark it with type FD (Linux raid autodetect) and bootable, then create another primary partition with the required size and mark this too with type FD (Linux raid autodetect). Write the configuration to disk and quit from fdisk. Repeat the same step for the second disk:

    fdisk /dev/hdb

It is also possible to use cfdisk to partition the hard disks.

Create RAID arrays

Note: The steps from the current section to Create mount points and mount target file systems can be executed with a script; one can be found in section Preparing script and configuration files disk. It is suggested that the first installation is completed by hand, so that each step can be fully understood. If a script disk is to be used, mount it with:

    mount /dev/fd0 /floppy

Now it is necessary to create the RAID arrays: two arrays will be created, /dev/md1 and /dev/md2. /dev/md1 will be built from /dev/hda1 and /dev/hdb1, and will hold the boot file system. /dev/md2 will be built from /dev/hda2 and /dev/hdb2, and will become the physical volume that LVM will use.

First the configuration file /etc/raidtab has to be created: edit it using nano-tiny or any other available editor or, more conveniently, prepare the file beforehand on a floppy and then just copy it from there to /etc. To do so mount such a floppy with:

    mount /dev/fd0 /floppy

then copy the file with:

    cp /floppy/raidtab /etc/

Shown below is a valid /etc/raidtab for /dev/md1 and /dev/md2 built over /dev/hda and /dev/hdb:

    raiddev /dev/md1
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          0
        chunk-size              32
        persistent-superblock   1
        device                  /dev/hda1
        raid-disk               0
        device                  /dev/hdb1
        raid-disk               1

    raiddev /dev/md2
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          0
        chunk-size              32
        persistent-superblock   1
        device                  /dev/hda2
        raid-disk               0
        device                  /dev/hdb2
        raid-disk               1

This file can be downloaded here: (...)

Once /etc/raidtab has been created or copied, the RAID arrays must be created. This is done using the mkraid command:

    mkraid /dev/md1
    mkraid /dev/md2

The following output should result:

    # mkraid /dev/md1
    handling MD device /dev/md1
    analyzing super-block
    disk 0: /dev/hda1, 32098kB, raid superblock at 32000kB
    disk 1: /dev/hdb1, 30712kB, raid superblock at 30592kB
    #
    # mkraid /dev/md2
    handling MD device /dev/md2
    analyzing super-block
    disk 0: /dev/hda2, kB, raid superblock at kB
    disk 1: /dev/hdb2, kB, raid superblock at kB

Note: As soon as the arrays are created, a rebuild thread is started to align the disk status. This process can be lengthy and requires that all disk space is rewritten, so heavy disk activity will be generated. This is normal, so don't panic.
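For reference only: the same two arrays could also be created with mdadm instead of mkraid and /etc/raidtab. This is not part of the procedure described here (mdadm is only installed later in this HOWTO and may not be present on the extension disk), so treat it as an optional alternative sketch:

    # Hypothetical alternative: create the same arrays with mdadm,
    # only if mdadm is available at this stage.
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hda1 /dev/hdb1
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/hda2 /dev/hdb2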

Verify that the RAID arrays have been created successfully by issuing the following command:

    cat /proc/mdstat

This should print the status of all RAID arrays, similar to this:

    Personalities : [linear] [raid0] [raid1] [raid5]
    read_ahead 1024 sectors
    md2 : active raid1 hdb2[1] hda2[0]
          blocks [2/2] [UU]
          [>...] resync = 0.7% (320788/ ) finish=37.1min speed=20049k/
    md1 : active raid1 hdb1[1] hda1[0]
          blocks [2/2] [UU]
    unused devices: <none>

Each disk/partition that makes up an array should be in UP status (should have [UU] reported), and the big array will be under resync. When looking at mdstat repeatedly, the resync percentage should rise.

Configurations with more than two disks

If more than two disks are present, many other disk schemes are usable. With three disks possible configurations are:

1. Create RAID 1 arrays over two disks and use the third as hot spare.
2. Create a small RAID 1 array for boot (using two disks) and a big RAID 5 array over all three disks.

In the first case create two partitions, one small and one big, on every disk, then configure mirroring between the first two disks; the third disk will not be used until one disk suffers a crash and the hot spare is activated.

In the second case it is best to partition all disks with the same scheme, one small partition and one big one, then create a RAID 1 array using the first partition of the first two disks, then create a RAID 5 array using the second partition of each disk:

    /dev/md1 using /dev/hda1, /dev/hdb1
    /dev/md2 using /dev/hda2, /dev/hdb2 and /dev/hdc2

Note: The boot partition must be a RAID 1 array; lilo currently has no support for booting from a RAID 5 array.

With four disks possible configurations are:

3. Create RAID 1 arrays over the first couple of disks as before, then create another RAID 1 volume using the second couple (one big partition over the third and fourth disk).

4. As configuration #2, plus the fourth disk as hot spare.
5. Create a small RAID 1 array from two disks, then a RAID 5 array over all four disks.

And so on with more disks. In Appendix A: /etc/raidtab examples there are examples of the file /etc/raidtab for the first two cases (RAID 1 + hot spare, RAID 1 + RAID 5).

LVM disk organization

For detailed information on LVM consult the LVM man pages and the official LVM howto (search for links in section General RAID and LVM info).

LVM maps disk partitions or RAID arrays to physical volumes (one to one), so the first step while configuring an LVM-enabled machine is to create physical volumes. Then the disks are organized into volume groups: a volume group is a group of physical volumes and of logical volumes that are related to each other. Each volume group can hold one or more logical volumes: think of a logical volume as a "dynamic" partition (Microsoft Windows 2000 Server calls this a "Dynamic Volume"). For an example of a situation where more than one volume group is advisable look at Appendix B: more than one volume group.

Start LVM

Before creating or accessing physical volumes and volume groups the LVM subsystem must be activated: this is done by using:

    vgscan

This should produce the following output:

    vgscan -- reading all physical volumes (this may take a while...)
    vgscan -- "/etc/lvmtab" and "/etc/lvmtab.d" successfully created
    vgscan -- WARNING: This program does not do a VGDA backup of your volume group

Note: It is normal that the first time vgscan is invoked after each boot, /etc/lvmtab is created.

Create LVM physical volume(s)

Before creating volume groups, physical volumes have to be created: this is done using the pvcreate command. To create the /dev/md2 physical volume issue:

    pvcreate /dev/md2

The following output should be shown:

    pvcreate -- physical volume "/dev/md2" successfully created

Create LVM volume group(s)

Now it is time to create the required volume groups. In this case just one volume group can be created: there is only one physical volume available. Create the volume group by writing:

    vgcreate -A n vg00 /dev/md2

This creates the volume group vg00 and allocates the physical volume /dev/md2 to it. The following output should result:

    vgcreate -- INFO: using default physical extent size 4.00 MB
    vgcreate -- INFO: maximum logical volume size is Gigabyte
    vgcreate -- WARNING: you don't have an automatic backup of "vg00"
    vgcreate -- volume group "vg00" successfully created and activated

Note: The "-A n" option is used to prevent auto-backup of the volume group configuration. This would be done into /etc/lvmtab.d. Remember that the current /etc is in a RAM disk, so it would be useless to make a backup of the configuration and, much worse, after a few backups the RAM disk would be full. Should this ever happen it would be sufficient to delete all *.old files in /etc/lvmtab.d, but the "-A n" option prevents this.

Note: If more than one physical volume is to be put into the volume group, this can be done at volume group creation by putting all physical volumes on the vgcreate command line, or later by using vgextend (look at the related man pages).

Note: The name vg00 is arbitrary, but if changed (e.g. vg0, vgroot) all subsequent commands related to this volume group or its logical volumes must be changed accordingly.

Choose file systems structure

Before proceeding with logical volume creation a file systems structure has to be chosen: while it is possible to install over a single "big" root file system, this is not recommended, since this solution has several disadvantages:

- Almost everything can saturate the root file system, and so cause a system lock or crash.
- No separation between data and binaries or between different kinds of data.
- All files in the system are held by a single file system, and this makes the whole system much more vulnerable to file system corruption.

To avoid all these problems use different file systems for the main folders held in root. Below is a "decent" file systems structure with some possible values for file system sizes:

    /       (root)  128 MB
    /boot           25 MB
    /home           128 MB or more (depends on data to be held)
    /opt            16 MB or more (depends on packages installed)
    /tmp            128 MB
    /usr            256 MB or more (depends on packages installed)
    /var            128 MB or more (depends on packages installed, logs created and data to be held)
    swap            128 MB or more (depends on RAM and usage)

Of course this is just an indication and everybody is free to adopt a custom solution.

Create LVM logical volumes

Now the logical volumes have to be created. It is necessary to create one logical volume for each file system (except /boot, which will be held in /dev/md1) plus one logical volume for swap space. Logical volumes are created by using lvcreate. Referring to the file systems structure in Choose file systems structure, here are the commands for creating such logical volumes:

    lvcreate -A n -L 128 -n root vg00
    lvcreate -A n -L 128 -n home vg00
    lvcreate -A n -L 16 -n opt vg00
    lvcreate -A n -L 128 -n tmp vg00
    lvcreate -A n -L 256 -n usr vg00
    lvcreate -A n -L 128 -n var vg00
    lvcreate -A n -L 128 -n swap vg00

For each lvcreate command an output similar to the following should be shown:

    lvcreate -- WARNING: you don't have an automatic backup of "vg00"
    lvcreate -- logical volume "/dev/vg00/root" successfully created

It is possible to verify correct creation of the logical volumes by issuing:

    vgdisplay -v vg00 | more

The following output should be displayed:

    --- Volume group ---
    VG Name               vg00
    VG Access             read/write
    VG Status             available/resizable
    VG #                  0
    MAX LV                255
    Cur LV                7
    Open LV               0
    MAX LV Size           GB
    Max PV                255
    Cur PV                1

    Act PV                1
    VG Size               GB
    PE Size               4.00 MB
    Total PE
    Alloc PE / Size       228 / MB
    Free PE / Size        / GB
    VG UUID               W2R8ko-px88-WoJ1-9KV8-syTX-tBqh-My6YjD

    --- Logical volume ---
    LV Name               /dev/vg00/root
    VG Name               vg00
    LV Write Access       read/write
    LV Status             available
    LV #                  1
    # open                0
    LV Size               MB
    Current LE            32
    Allocated LE          32
    Allocation            next free
    Read ahead sectors    120
    Block device          58:0
    ...

Look also at the /dev/vg00 directory with:

    ls -l /dev/vg00

There should be one node for each logical volume, like this:

    crw-r    root disk 109, 0 Jan 1 22:42 group
    brw-rw   root root  58, 1 Jan 1 22:58 home
    brw-rw   root root  58, 2 Jan 1 22:58 opt
    brw-rw   root root  58, 0 Jan 1 22:47 root
    brw-rw   root root  58, 6 Jan 1 22:58 swap
    brw-rw   root root  58, 3 Jan 1 22:58 tmp
    brw-rw   root root  58, 4 Jan 1 22:58 usr
    brw-rw   root root  58, 5 Jan 1 22:58 var

Create and initialize swap space

Next the swap space has to be created and activated: for this step the mkswap and swapon commands are used. Create the swap space:

    mkswap /dev/vg00/swap

Output similar to this should result:

    Setting up swapspace version 1, size = bytes

Then activate the swap space:

    swapon /dev/vg00/swap

This command has no output. Finally verify the swap space status:

    cat /proc/swaps

The file should be like this one:

    Filename          Type        Size  Used  Priority
    /dev/vg00/swap    partition

Create file systems

At this point it is possible to create the file systems. File systems can be of any type supported by the kernel that will be installed; common choices are ext2, ext3 or reiserfs. ext3 and reiserfs are journaled file systems, so they are more secure than ext2; on the other hand they are slower and the journal entries take up some space. The commands shown here create ext3 file systems, except for the boot file system, which will be an ext2 file system. To create the file systems use these commands:

    mke2fs /dev/md1
    mke2fs -j /dev/vg00/root
    mke2fs -j /dev/vg00/home
    mke2fs -j /dev/vg00/opt
    mke2fs -j /dev/vg00/tmp
    mke2fs -j /dev/vg00/usr
    mke2fs -j /dev/vg00/var

For each command an output similar to the following should be displayed:

    mke2fs 1.27 (8-Mar-2002)
    Filesystem label=
    OS type: Linux
    Block size=1024 (log=0)
    Fragment size=1024 (log=0)
    inodes, blocks
    6553 blocks (5.00%) reserved for the super user
    First data block=1
    16 block groups
    8192 blocks per group, 8192 fragments per group
    2048 inodes per group
    Superblock backups stored on blocks:
            8193, 24577, 40961, 57345,

    Writing inode tables: done
    Creating journal (4096 blocks): done
    Writing superblocks and filesystem accounting information: done

    This filesystem will be automatically checked every 31 mounts or
    180 days, whichever comes first. Use tune2fs -c or -i to override.

Create mount points and mount target file systems

The final step necessary before proceeding with the "normal" installation is to create the mount points and mount the file systems under /target. First the root file system has to be mounted on /target:

    mount /dev/vg00/root /target

Then the mount points have to be created:

    mkdir /target/boot
    mkdir /target/home
    mkdir /target/opt
    mkdir /target/tmp
    mkdir /target/usr
    mkdir /target/var

Finally the remaining file systems have to be mounted on the corresponding mount points:

    mount /dev/md1 /target/boot
    mount /dev/vg00/home /target/home
    mount /dev/vg00/opt /target/opt
    mount /dev/vg00/tmp /target/tmp
    mount /dev/vg00/usr /target/usr
    mount /dev/vg00/var /target/var

Verify that all mounts are successful with:

    mount

This should be the result:

    /dev/ram0 on / type ext2 (rw)
    /proc on /proc type proc (rw)
    /dev/ram1 on /ext1 type ext2 (rw)
    /dev/vg00/root on /target type ext3 (rw)
    /dev/fd0 on /floppy type vfat (rw)
    /dev/md1 on /target/boot type ext2 (rw)
    /dev/vg00/home on /target/home type ext3 (rw)
    /dev/vg00/opt on /target/opt type ext3 (rw)
    /dev/vg00/tmp on /target/tmp type ext3 (rw)
    /dev/vg00/usr on /target/usr type ext3 (rw)
    /dev/vg00/var on /target/var type ext3 (rw)

Installing: kernel, RAM disk and packages

This section of the document illustrates the steps necessary to complete the installation: installing a suitable kernel and a RAM disk, configuring the system so that it can boot, and installing all needed packages.

Return to main install menu

If any floppy has been mounted, unmount it, then exit from the shell and return to the main menu.

Install a kernel with RAID and LVM support

This is one of the most important steps of the whole process: to be able to boot with a root-on-lvm-on-raid configuration a RAID and LVM enabled kernel is needed. Before installing the kernel there is a question that needs an answer: to module or not to module? More seriously: should RAID and LVM support be compiled statically into the kernel or loaded as external modules? Both solutions are feasible, but each one has some advantages and disadvantages that will be examined.

Boot process overview

To boot a system with root-on-lvm-on-raid (or just with root-on-lvm) a RAM disk is needed: this is necessary because the root file system resides in a volume group, so to be able to access the root file system LVM must first be started. The commands (and maybe modules) needed to accomplish this task are stored in a RAM disk, which is automatically loaded and used by the kernel. When booting a system with root-on-lvm-on-raid the following steps are performed:

- LILO (or another boot loader with RAID support) is loaded.
- The kernel image is loaded.
- If RAID support is built into the kernel, RAID arrays are auto-detected and started.
- The RAM disk image with LVM support is loaded and uncompressed.
- The RAM disk is mounted as root.
- If RAID and LVM support is not built into the kernel, these modules must be included in the RAM disk and loaded.
- If RAID support has been loaded as a module, RAID arrays must be started manually (the RAM disk must be modified).
- LVM is started. Volume groups are detected and activated.
- The "real" root file system over LVM is mounted read-only as the new root file system. The RAM disk is unmounted.
- LVM is started again (root has changed).
- The root file system is checked for errors. If no errors are found the root file system is remounted read-write.

From this point on the boot process continues as normal (modules are loaded, remaining file systems are mounted, ...).

RAID and LVM support loaded as a module vs statically linked

The first solution, modular support, has one great advantage: the stock bf2.4 kernel shipped with the Debian "Woody" 3.0 CD has RAID and LVM compiled as modules, so there is no need to compile a custom kernel. Moreover the kernel can be installed from the main menu. The main disadvantage of this solution is that with RAID support loaded as a module, RAID auto-detection does not work. To work around this problem the RAM disk must be modified to make it start the RAID arrays. The real problem is that the array configuration has to be included in the RAM disk, and thus the RAM disk must be modified for each single system that is to be installed. This is not very good if several different systems are to be installed. Even worse, in some cases if the array configuration is changed, for example an array is added, the RAM disk must be updated.

The second solution, a custom kernel with built-in RAID and LVM support, has working RAID array auto-detection and does not require manual RAM disk modifications. The drawback is that kernel compilation is required and that the kernel has to be installed manually.

Install stock bf2.4 Debian kernel with modular RAID and LVM support

To install the stock bf2.4 kernel choose "Install Kernel and Driver Modules" from the main install menu and then "Configure Device Driver Modules" to configure any required module.

Install custom kernel with statically compiled RAID and LVM

It is not possible to install a custom kernel from the install menu: it's necessary to perform this step from the command line. To transfer the kernel image to the system, network access will probably be necessary, along with some tools like ftp or wget. For all these reasons the installation of a custom kernel has to be performed later: the necessary steps are reported in section Install custom kernel with statically compiled RAID and LVM.

Configure the network

From the main install menu choose "Configure the Hostname" and type a name for the system, then choose "Configure the Network" and configure the network parameters as required (use DHCP or configure them manually).

Install base system

From the main install menu choose "Install the Base System" to install the base Debian packages. Wait until all required packages are installed. Ignore the warning "No swap partition found".

Chroot into target file system

Note: To perform the following steps another shell has to be opened. This time it is better not to open it from the main menu but to use tty2: usually pressing ALT-F2 brings up the second console. If the first console (ttyp0) is used, some commands (e.g. apt-setup) fail, stating something like "Setting locale failed" (this with Debian "Woody" 3.0). It's still possible to use the first console by issuing:

    export TERM=vt100

before the offending command.

Open a new shell (see previous note) and then chroot a shell into /target:

    chroot /target /bin/sh

Then remount proc:

    mount -t proc proc /proc

Configure APT

Some additional packages have to be installed: the best way to do this is by using apt-get. Before using this utility APT must be configured. Type:

    apt-setup

and configure the package sources (CDROM or ftp sites). Since some required packages (e.g. lvm10) are not present on the first two Debian CDROMs, it is better to configure at least one APT ftp source, to be able to install these packages through the network. This, of course, is possible only if network access is available during install. If for some unfortunate reason the system cannot be made to access the network during install (e.g. some strange network card not supported by the stock bf2.4 kernel is used, not even as a module, not even as an external custom module, not even by passing some weird startup parameter to the kernel while booting, ...) some packages (e.g. lvm10) will have to be installed manually using the dpkg command. This can be a little tricky: lvm10 depends on other packages and these must be installed in the correct order. In section Install RAID and LVM packages the exact package list and order of dependencies is given along with the manual install procedure.

Note: Usually the Debian install executes apt-setup after the first reboot. Executing it so early can lead to another problem: at this stage the cdrom is mounted over /instmnt, and the device used for the cdrom may not be /dev/cdrom but, for example, /dev/hdd. If this is the case, when specifying a cdrom as a package source /dev/cdrom will probably not work: use the actual device (it can be guessed using mount).

Install custom kernel with statically compiled RAID and LVM

Skip this step if the stock kernel bf2.4 has been previously installed into the system.

To install a custom kernel, this of course first has to be acquired or compiled. Below are links to an already compiled kernel; the steps for compilation are shown in the next section.

From this link (...) a Debian package (.deb) can be downloaded with a kernel that has RAID 1 and 5 support along with LVM support statically compiled in; it also has devfs support enabled. The kernel has been compiled for the i386 architecture. This kernel package has been compiled on a Debian system by using the make-kpkg command. This is the .config file used to compile it: (...)

Here can be downloaded a tarball containing the kernel image, system map and modules. This can be used on distributions other than Debian: (...)

Once the kernel package or binaries have been copied/compiled into the system, the kernel can be installed.

If the kernel has been acquired as a Debian package:

Install the kernel:

    dpkg -i kernel_image lvm_1.0_i386.deb

Ignore any warning about the need to reboot urgently. The install script will ask a few questions about lilo: do not have it create the symbolic link (it would create it in /), do not write the configuration to disk now, and do not wipe out the lilo configuration.

Create a symbolic link from the kernel image to /boot/vmlinuz:

    ln -s /boot/vmlinuz lvm /boot/vmlinuz

If the kernel has been acquired as a tarball:

Unpack the kernel:

    tar -zxvf kernel_image lvm_1.0_i386.tgz

Move vmlinuz lvm and config lvm into /boot:

    mv vmlinuz lvm config lvm /boot/

Move the module files into /lib/modules/:

    mv lvm /lib/modules/

Create symbolic links:

    ln -s /boot/system.map lvm /boot/system.map
    ln -s /boot/config lvm /boot/config
    ln -s /boot/vmlinuz lvm /boot/vmlinuz

Custom kernel compilation

To compile a custom kernel it is necessary to download the source, unpack it, configure it or copy a configuration file and then run the compile. While it is possible to compile the kernel "on-the-fly" during installation, this requires installing some additional packages, at least on Debian. It is better to compile the kernel on an already installed system, then pack the binaries and transfer them to the system that is being installed (a sketch of the build commands is shown below, after the option lists).

In order to have a kernel able to boot a RAID or LVM system there are a few options that must be set; these are:

- Multiple device support (CONFIG_MD=y)
- RAID device support (CONFIG_BLK_DEV_MD=y)
- RAID 0 support, if using RAID 0 arrays (CONFIG_MD_RAID0=y)
- RAID 1 support, if using RAID 1 arrays (CONFIG_MD_RAID1=y)
- RAID 5 support, if using RAID 5 arrays (CONFIG_MD_RAID5=y)
- LVM support, if using LVM (CONFIG_BLK_DEV_LVM=y)

Moreover, since a RAM disk has to be loaded, the following options must be set:

- Loopback device support (CONFIG_BLK_DEV_LOOP=y)
- RAM disk support (CONFIG_BLK_DEV_RAM=y)
- Maximum RAM disk size, default is 4096 (4 MB), do not change this (CONFIG_BLK_DEV_RAM_SIZE=4096)
- Initial RAM disk support (CONFIG_BLK_DEV_INITRD=y)

Optionally devfs support can be compiled into the kernel. devfs is, in kernel 2.4, an experimental feature: it is a virtual /dev file system (like /proc) that replaces the classical /dev file system. The main advantages of devfs are that devices are dynamically registered as needed, and this happens only for devices that are actually used by the system. For more information look at the devfs documentation in the kernel source.

- devfs support (CONFIG_DEVFS_FS=y)
- Automatically mount devfs at boot (CONFIG_DEVFS_MOUNT=y)

If devfs support is enabled, it is recommended that devfsd is installed; look at section Optional: install devfsd.
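The exact compilation commands are not reproduced in this section (they belong to Appendix D). As an illustration only, a typical sequence on a Debian build system using the kernel-package tools would look roughly like the following; the source directory, configuration file path and revision string are placeholders, not values from the original document:

    # Hypothetical example: build a kernel .deb on a separate Debian system.
    # Run as root or under fakeroot; adjust paths and versions to your setup.
    cd /usr/src/kernel-source-2.4.18
    cp /path/to/config-with-raid-and-lvm .config   # enable the CONFIG_* options listed above
    make oldconfig                                 # or: make menuconfig
    make-kpkg clean
    make-kpkg --revision=custom.1.0 kernel_image   # produces a kernel-image .deb one directory up

The resulting package can then be transferred to the system being installed and installed with dpkg -i as shown earlier.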

Kernel sources can be downloaded from The Linux Kernel Archives or installed with distribution specific means (deb packages for Debian, rpm packages for Red Hat). For more information on kernel compilation look at Appendix D: kernel compilation tips.

Install RAID and LVM packages

Some additional packages are needed to support RAID, LVM and RAM disk use. These are:

- lvm10: current stable version
- initrd-tools: current stable version
- raidtools2: current stable version
- mdadm: current stable version

Package lvm10 depends on lvm-common and file, lvm-common depends on binutils, and initrd-tools depends on cramfsprogs. The dependencies are more complex; only the packages missing after a Debian base install have been indicated here.

Note: If APT has been configured to use a cdrom as a source for packages, manually mount the cdrom over /cdrom in the chrooted environment before using apt-get, because at this stage of the install process auto-mount does not work. The device for the cdrom will probably not be /dev/cdrom, as explained in section Configure APT. Use:

    mount /dev/hdd /cdrom

to mount the cdrom (change the device if necessary).

If APT has been correctly configured and network access is available (look at section Configure APT), to install these packages use:

    apt-get install lvm10 initrd-tools raidtools2 mdadm

The package mdadm has to be configured: answer "Yes" when asked if the RAID monitor daemon has to be started, then configure an address to be notified when a disk failure event happens.

If APT cannot be used, manual package install is required. For each package file:

    dpkg -i package_filename.deb

The following packages must be installed:

- lvm10
- raidtools2
- initrd-tools
- mdadm

- ash
- binutils
- cramfsprogs
- file
- lvm-common
- zlib1g

Optional: install devfsd

If a custom kernel with devfs support enabled has been installed, it is recommended to install devfsd. devfsd is a daemon that automatically handles the registration and removal of device entries in devfs. To install devfsd use:

    apt-get install devfsd

Warning! Do not perform this step if the stock kernel bf2.4 has been installed, since it has no devfs support.

Start LVM (reprise)

Since the current shell is chrooted, the LVM subsystem has to be re-started in the new environment: vgscan has to be used again. It will state again that it's creating /etc/lvmtab.

Note: This step is necessary before the lilo configuration can be written to disk, so it must be executed before section Write lilo configuration to disk. It could not be executed earlier because, prior to installing the package lvm10, the vgscan command is not available in the chrooted environment.

Install RAM disk with LVM support

As for kernel installation, RAM disk installation is one of the most important steps in successfully installing (and rebooting!) a root-on-lvm system. In sections Boot process overview and RAID and LVM support loaded as a module vs statically linked an overview of the problems related to the RAM disk and kernel has already been presented. This section shows in more detail the steps necessary to install a suitable RAM disk, along with the steps necessary to create one.

Note: RAM disk creation can be a complicated matter (at least the author of this document hasn't found any way to make it simpler ;). If the option of downloading the RAM disk is chosen, skip those sections of the document.

Install RAM disk for kernel bf2.4

Download this RAM disk (...) or create one as explained in section RAM disk for bf2.4 kernel: requirements and creation, then copy it to /boot on the system that's being installed and create a symbolic link /boot/initrd to it:

    ln -s /boot/initrd.gz /boot/initrd

Install RAM disk for custom kernel

Download this RAM disk (...) or create one as explained in section RAM disk for custom kernel: requirements and creation, then copy it to /boot on the system that's being installed and create a symbolic link /boot/initrd to it:

    ln -s /boot/initrd.gz /boot/initrd

RAM disk for custom kernel: requirements and creation

The requirements for a RAM disk to be used with a kernel that has RAID and LVM support statically compiled will be presented first, since they are simpler than the ones for a kernel with modular support. With this kind of kernel, RAID array autodetection and autostart at boot work. Moreover the RAM disk must not contain modules for RAID and LVM support (they already are in the kernel!). The role of the RAM disk in this case is limited to starting the LVM subsystem (vgscan) and activating the volume group(s) (vgchange). The requirements are as follows:

- The RAM disk must contain the commands vgscan and vgchange.
- The RAM disk must contain the commands mount and umount.
- Any library needed by the previous commands, particularly liblvm1.0.so.1
- A linuxrc file that issues the commands necessary to activate LVM and the volume group(s).

The linuxrc file must be like the one shown below; the same file can be downloaded here (...):

    #!/bin/sh
    /bin/mount /proc
    /sbin/vgscan
    /sbin/vgchange -a y
    /bin/umount /proc

The RAM disk can be created by hand (modify an existing one or create it with mkinitrd and then modify it) or it can be created automatically using lvmcreate_initrd: this command creates a RAM disk that meets the above requirements and puts it into /boot. It is suggested that RAM disk creation is performed prior to installing the wannabe RAID system (on another, already installed system) or that an available RAM disk (...) is used.

The RAM disk can be created during the install process, but this can be a little tricky: lvmcreate_initrd tries to put into the RAM disk the modules from the directory /lib/modules/kernel_version, where kernel_version is the kernel used for boot. If the command is used during the installation, the same kernel used for boot (e.g. bf2.4) must already have been installed on the system, even if later the system will be booted with another kernel (a custom one). Ex.: the Debian install is booted using the bf2.4 kernel; if lvmcreate_initrd is called during install, depmod will state that it cannot find modules.dep. To be able to create the RAM disk with this command the kernel bf2.4 must first be installed.

To modify an existing RAM disk or to examine one:

- If the RAM disk is compressed, uncompress it with:

    gzip -cd /boot/initrd.gz > /boot/initrd_unc

  This uncompresses the RAM disk into a copy: with gzip -d the compressed file would be removed and it would be necessary to generate it again with a gzip command. If these steps are performed on an already installed system, remember to run lilo afterwards.

- Mount the RAM disk with:

    mount -o loop /boot/initrd_unc /initrd

- Examine or modify the RAM disk in /initrd.

- Unmount the RAM disk with:

    umount /initrd

- If necessary compress the RAM disk again:

    gzip /boot/initrd_unc

RAM disk for bf2.4 kernel: requirements and creation

Read the previous section for the RAM disk requirements for static RAID support: in this section only the differences from that case are illustrated. The same applies to the commands necessary for RAM disk creation. Since the kernel has no RAID and LVM support, the modules for this must be included in the RAM disk:

- md.o

- raid1.o
- lvm-mod.o

With RAID and LVM support compiled as modules, RAID autodetection will not work at boot. This requires that the RAM disk contains not only the commands for starting LVM but also those for starting the RAID arrays (mdadm or raidstart). Finally the linuxrc file must include commands to load the modules and to start the RAID arrays. Below is shown a modified script; the same file can be downloaded here (...):

    #!/bin/sh
    /sbin/modprobe md
    /sbin/modprobe raid1
    /sbin/modprobe lvm-mod
    /sbin/mdadm -A -R /dev/md1 /dev/hda1 /dev/hdb1
    /sbin/mdadm -A -R /dev/md2 /dev/hda2 /dev/hdb2
    /bin/mount /proc
    /sbin/vgscan
    /sbin/vgchange -a y
    /bin/umount /proc

As can easily be noticed, the script depends on the disk and array configuration, so it must be adapted to each system: an mdadm command must be added for each RAID array.

To modify the RAM disk (a consolidated sketch of this sequence is shown after this list):

- If the RAM disk is compressed, uncompress it.
- Mount the RAM disk over /initrd.
- Add the required modules to the RAM disk:

    cp /lib/modules/ bf2.4/kernel/drivers/md/lvm-mod.o /initrd/lib/modules/ bf2.4/
    cp /lib/modules/ bf2.4/kernel/drivers/md/md.o /initrd/lib/modules/ bf2.4/
    cp /lib/modules/ bf2.4/kernel/drivers/md/raid1.o /initrd/lib/modules/ bf2.4/

- Modify or copy linuxrc as shown above to include the calls to mdadm.
- Add mdadm to the RAM disk:

    cp /sbin/mdadm /initrd/sbin/mdadm

- Unmount the RAM disk (umount /initrd).
- If necessary compress the RAM disk again.
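For convenience, the steps above can also be run as one block. The following is only a sketch assembled from this section, assuming the system was booted with the bf2.4 install kernel (so that uname -r names the correct module directory); it is not a script from the original document:

    # Sketch only: modify the bf2.4 RAM disk in one pass.
    # Assumes the bf2.4 kernel package is installed, so its modules are
    # available under /lib/modules/$(uname -r).
    KVER="$(uname -r)"

    gzip -cd /boot/initrd.gz > /boot/initrd_unc
    mkdir -p /initrd
    mount -o loop /boot/initrd_unc /initrd

    mkdir -p "/initrd/lib/modules/$KVER"
    for m in md.o raid1.o lvm-mod.o; do
        cp "/lib/modules/$KVER/kernel/drivers/md/$m" "/initrd/lib/modules/$KVER/"
    done
    cp /sbin/mdadm /initrd/sbin/mdadm
    # edit or copy /initrd/linuxrc here, as shown above

    umount /initrd
    gzip -c /boot/initrd_unc > /boot/initrd.gz   # recompress over the original image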

Modify configuration files

Some configuration files have to be copied into /etc (the "real" /etc, not the one in the RAM disk).

/etc/raidtab

The file /etc/raidtab is needed for RAID array management; create or copy it (look at section Create RAID arrays).

Note: In the chrooted shell the RAM disk is not accessible. To copy the file raidtab from the RAM disk use a non-chrooted shell:

    cp /etc/raidtab /target/etc/

To do this it is possible to exit from the chrooted shell and then reopen it. Otherwise remount the floppy with all the config files in the chrooted shell and copy raidtab from the floppy.

/etc/fstab

File Systems Table: the Debian install does not detect file systems over LVM, so the /etc/fstab created by the install process will be empty; the file must be created by hand or copied. An example /etc/fstab file can be downloaded here (...); the same file is shown below. Modify it to reflect the current configuration: boot raid device, logical volume names or numbers.

    # /etc/fstab: static file system information.
    #
    # <file system>   <mount point>  <type>    <options>          <dump> <pass>
    /dev/vg00/root    /              ext3      errors=remount-ro  0      1
    /dev/vg00/swap    none           swap      sw                 0      0
    proc              /proc          proc      defaults           0      0
    /dev/fd0          /floppy        auto      user,noauto        0      0
    /dev/cdrom        /cdrom         iso9660   ro,user,noauto     0      0
    /dev/md1          /boot          ext2      defaults           0      2
    /dev/vg00/home    /home          ext3      defaults           0      2
    /dev/vg00/opt     /opt           ext3      defaults           0      2
    /dev/vg00/tmp     /tmp           ext3      defaults           0      2
    /dev/vg00/usr     /usr           ext3      defaults           0      2
    /dev/vg00/var     /var           ext3      defaults           0      2

/etc/lilo.conf

To make the system bootable, lilo has to be configured. Since version 22.0 lilo's RAID support has been made more powerful. Taken from the lilo changelog:

    Changes from version ... to 22.0 (29-Aug-2001) John Coffman [released 9/27]
    Boot Installer
      RAID installations now create a single map file, install the boot
      record on the RAID partition, install auxiliary boot records only
      on MBRs if needed, except BIOS device 0x80.

      Backward compatibility is possible with new config-file and command
      line options (raid-extra-boot= or -x switch). Even with stored boot
      command lines (-R, lock, fallback), RAID set coherency can be
      maintained.

To have lilo boot from a RAID system use:

    boot=/dev/md1

Change /dev/md1 if the boot device is different. To use a root-on-lvm file system include the line:

    root=/dev/vg00/root

Change /dev/vg00/root if the volume group name or logical volume name are different.

Now the disks whose MBR is to be written by lilo have to be indicated with the option raid-extra-boot; from the lilo.conf man page:

    raid-extra-boot=<option>
        This option only has meaning for RAID1 installations. The <option>
        may be specified as none, auto, mbr-only, or a comma-separated list
        of devices; e.g., "/dev/hda,/dev/hdc6". Starting with LILO version
        22.0, the boot record is normally written to the first sector of
        the RAID1 device. Use of an explicit list of devices, forces
        writing of auxiliary boot records only on those devices enumerated,
        in addition to the boot record on the RAID1 device. Since the
        version 22 RAID1 codes will never automatically write a boot record
        on the MBR of device 0x80, if such a boot record is desired, this
        is the way to have it written.

So add this line to lilo.conf:

    raid-extra-boot="/dev/hda, /dev/hdb"

Change the disks to reflect the system configuration. Have the kernel mount root read-only by adding this option:

    read-only

Finally specify an image to be loaded at boot with the related RAM disk:

    image=/boot/vmlinuz
    label=linux
    initrd=/boot/initrd

With non-standard disk setups it could be necessary to use the options disk and bios to specify the correspondence between disks and BIOS numbers. Look at the lilo.conf man page for more details. Below is shown an example /etc/lilo.conf; the same file can be downloaded here (...):

    boot=/dev/md1
    root=/dev/vg00/root
    raid-extra-boot="/dev/hda, /dev/hdb"

    read-only
    image=/boot/vmlinuz
    label=linux
    initrd=/boot/initrd

Change the devices, kernel image name and RAM disk name as required, or create symbolic links /boot/vmlinuz and /boot/initrd to the regular files.

/etc/modules

The file /etc/modules contains the modules to be loaded at startup. If the stock kernel is being used and module configuration has been performed from the main menu (look at section Install stock bf2.4 Debian kernel with modular RAID and LVM support), this file should already be ok. Just check it with:

    cat /etc/modules

If a custom kernel has been installed, manual module configuration will be necessary. If any required module or parameter is missing, add it. An example of /etc/modules is shown below:

    # /etc/modules: kernel modules to load at boot time.
    #
    # This file should contain the names of kernel modules that are
    # to be loaded at boot time, one per line. Comments begin with
    # a "#", and everything on the line after them are ignored.
    usb-uhci
    input
    usbkbd
    keybdev

This file can be downloaded here (...)

Write lilo configuration to disk

Last but not least, the lilo configuration has to be written to disk.

Warning! Always remember to do this after any change to /etc/lilo.conf, the kernel image file or the RAM disk image. It is very easy to make the system unbootable by forgetting this step.

To write the configuration to disk use:

    lilo -v

The -v option is used to have lilo give some more feedback on what it is doing. An output similar to the following should result:

    LILO version 22.2, Copyright (C) Werner Almesberger
    Development beyond version 21 Copyright (C) John Coffman
    Released 05-Feb-2002 and compiled at 20:57:26 on Apr
    MAX_IMAGES = 27
    Warning: LBA32 addressing assumed
    Warning: using BIOS device code 0x80 for RAID boot blocks
    Reading boot sector from /dev/md1
    Merging with /boot/boot.b
    Boot image: /boot/vmlinuz -> /boot/vmlinuz bf2.4
    Mapping RAM disk /boot/initrd -> /boot/initrd-lvm bf2.4.gz
    Added Linux *
    Backup copy of boot sector in /boot/boot.0901
    Writing boot sector.
    The boot record of /dev/md1 has been updated.
    Reading boot sector from /dev/hde
    Backup copy of boot sector in /boot/boot.2100
    Writing boot sector.
    The boot record of /dev/hde has been updated.
    Reading boot sector from /dev/hdg
    Backup copy of boot sector in /boot/boot.2200
    Writing boot sector.
    The boot record of /dev/hdg has been updated.

Exit from chrooted environment

If a cdrom or a floppy had been manually mounted, unmount them (verify with the mount command). Also unmount the remounted /proc:

    umount /cdrom
    umount /floppy
    umount /proc

There are no other steps that have to be performed from the chrooted environment: exit to the original shell.

Return to main install menu

Note: Before exiting and rebooting it could be worth taking a look at /proc/mdstat: if the newly created arrays are still resyncing, rebooting would make the resync process restart. While this is not a problem it could be a waste of time: look at the ETA and decide whether it's best to wait or to reboot.

Again:

    exit

to return to the main install menu.

Reboot the system

From the menu choose "Reboot the system". As soon as the machine reboots, remove the Debian CDROM and any floppy, so that the system can boot from the hard disk. Look carefully at the boot messages: if everything is ok the system will complete the boot sequence by loading the RAM disk, mounting the root file system and then all other file systems. Look at section Boot process overview for more information. Look at Appendix E: boot messages for some examples of successful boot messages.

If some nasty message is printed (something like "Kernel Panic") and the boot process stops, don't despair: if the system is unable to load the kernel or the RAM disk, or to start LVM and mount the root file system for any reason, it is always possible to boot again from the installation media and use the LVM and RAID extdisk as a rescue disk, manually activating the volume group(s) and manually mounting the file systems. If this is the case, write down any error message, then boot from the install media and try to correct the problem. Look at Appendix C: troubleshooting tips for detailed troubleshooting tips.

Terminate installation

When the system reboots, the Debian install process will complete as usual. The command:

    base-config

will be run automatically. This tool is Debian specific and will configure the time zone, password management and normal user accounts, configure APT, and run tasksel and dselect to install additional packages.

After installing

There are still a few tasks that are better performed before declaring the installation complete.

Look at boot messages

If not already done, examine the boot messages carefully, particularly these sections: RAID array detection, RAM disk loading, LVM subsystem activation, root file system mounting, module loading, other file systems mounting. If any strange error messages or warnings are displayed, investigate them. Look at Appendix E: boot messages for some examples of successful boot messages. Look at Appendix C: troubleshooting tips for troubleshooting tips.

Rescue floppies

A valid substitute for the rescue floppies can be the Debian "Woody" 3.0 cdrom plus the RAID and LVM extension disk. It would be advisable to also keep at hand the floppy with the scripts used to install the system, along with the configuration of the disks and logical volumes. Look at section Required Software for pointers to the extension disk. Look at section Appendix C: troubleshooting tips for recovery instructions in case of a crash.

Make a crash test

It is better to discover that this new super-fault-tolerant-unbreakable system cannot boot when a hard disk fails before the system goes into production (and before a real crash, too). This is also a good time for checking the rescue floppies, getting some practice with RAID disk faults and writing down a Disaster Recovery procedure.

Warning! Before going on with the following tests, beware that they include possible data loss and array reconstruction. So, if any valuable information has already been put on the system, back it up. Moreover consider that array reconstruction can be a lengthy process. Always disconnect the system from power while physically working on it!!! You have been warned! ;-)

A common problem could be that the system is unable to boot from the second hard disk, for a couple of different reasons, ranging from BIOS problems to a mis-configured lilo. A good test for checking that the system will boot with a degraded array is to power down the system, disconnect the first hard disk (both power and data cables) and then reboot the system. The RAID subsystem should detect a disk "fault", remove the faulty drives from the arrays and then boot. Moreover, if a hot spare drive had been configured (e.g. /dev/hdc), this should automatically be used to start array reconstruction. In this case, before going on with other tests, wait until the reconstruction is complete. The array status can be monitored by looking at the /proc/mdstat file.

If no hot spare was used, a second useful test can be to power down the system again, reconnect the first disk and then boot. The system should detect that the superblock on the first disk is older than that on the second one and keep the first disk from joining the arrays. To put the first disk back into the arrays use the command raidhotadd; with two mirrored hard disks /dev/hda and /dev/hdb, provided that the "faulty" drive was /dev/hda, use:

    raidhotadd /dev/md1 /dev/hda1
    raidhotadd /dev/md2 /dev/hda2

Again, the reconstruction process should start. If a hot spare disk was used there are two options:

If a hot spare disk was used there are two options:

Leave /dev/hdc in the array and use /dev/hda as hot spare. Update /etc/raidtab to match this situation (swap the roles of /dev/hda and /dev/hdc).

Return to the original situation: mark /dev/hdc as faulty with raidhotgenerateerror, then add /dev/hda to the array with raidhotadd, then wait until reconstruction is complete again.

In both cases reboot to verify correct system functionality.

Warning! Unless your system has hot swap hard disks and hot swap support, do not hot unplug any hard disk while the system is running (do not unplug the data cable nor the power cable). This would lead to a complete system lockup. While the first thought could be "What did I install a RAID system for, then?", this behavior is correct, or it may be defined as a "feature": a software RAID system is a low cost system targeted at protecting from a reasonable amount of damage or misfortune, for example hard disk damage limited to some blocks or tracks. If a hard disk suddenly stops responding to commands (as if it was unplugged) the system will lock up, and a manual shut down, disconnection of the disk and restart will be required. In any case the system will boot up again with the remaining drive(s). It must be clear that this is not a limit due to the use of software RAID, but to the hardware architecture instead. The same problem would show up with some low cost RAID SCSI cards that do not support hot swap, in case of a serious disk problem.

Appendix A: /etc/raidtab examples

File /etc/raidtab for RAID 1 + hot spare:

raiddev /dev/md1
    raid-level              1
    nr-raid-disks           2
    nr-spare-disks          1
    chunk-size              32
    persistent-superblock   1
    device                  /dev/hda1
    raid-disk               0
    device                  /dev/hdb1
    raid-disk               1
    device                  /dev/hdc1
    spare-disk              0

raiddev /dev/md2
    raid-level              1
    nr-raid-disks           2
    nr-spare-disks          1
    chunk-size              32
    persistent-superblock   1
    device                  /dev/hda2
    raid-disk               0
    device                  /dev/hdb2
    raid-disk               1
    device                  /dev/hdc2
    spare-disk              0
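For reference, a minimal sketch of how arrays described by such a raidtab are created with the raidtools used in this document. mkraid overwrites any data on the listed partitions, so it is only run when the arrays are first created (it may require --really-force on partitions that already contain data):

mkraid /dev/md1
mkraid /dev/md2
# verify that the arrays are active and (re)synchronizing
cat /proc/mdstat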

File /etc/raidtab for RAID 1 + RAID 5:

raiddev /dev/md1
    raid-level              1
    nr-raid-disks           2
    nr-spare-disks          0
    chunk-size              32
    persistent-superblock   1
    device                  /dev/hda1
    raid-disk               0
    device                  /dev/hdb1
    raid-disk               1

raiddev /dev/md2
    raid-level              5
    nr-raid-disks           3
    nr-spare-disks          0
    persistent-superblock   1
    parity-algorithm        left-symmetric
    device                  /dev/hda2
    raid-disk               0
    device                  /dev/hdb2
    raid-disk               1
    device                  /dev/hdc2
    raid-disk               2

Appendix B: more than one volume group

There could be cases in which the creation of more than one volume group is advisable: a database server (e.g. Oracle) is a common such situation. As an example, suppose you have a server with 4 disks, the first pair of disks holding two RAID 1 arrays (/dev/md1 and /dev/md2) and the second pair holding one RAID 1 array (/dev/md3). /dev/md1 will be used "as-is" for the boot file system, while /dev/md2 and /dev/md3 will each be used as a physical disk. They could be allocated to one volume group or to two different volume groups. Volume groups should aggregate disks with similar usage: the first volume group (and so the first RAID array) could hold all "standard" file systems (root, tmp, home, var, usr, opt,... ) along with the database binaries and also the database transaction logs (Redo Logs), while the second volume group (second RAID array) could hold all the database datafiles (one logical volume for each datafile if raw devices are used). This leads to a "logical" separation of data and binaries. Moreover, returning to the Oracle example, by carefully planning the placement of the database structures over the two volume groups, if one of the groups suffers any unrecoverable damage, the information in the other group is sufficient to rebuild an up-to-date database starting from a backup.
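As an illustration of the two-volume-group layout described above, a minimal sketch using the LVM tools from this document; the group name vg01 and the logical volume names and sizes are illustrative assumptions, not taken from the original setup:

pvcreate /dev/md2
pvcreate /dev/md3
# first group: standard file systems, database binaries, redo logs
vgcreate vg00 /dev/md2
# second group: database datafiles
vgcreate vg01 /dev/md3
# example logical volumes, one per group
lvcreate -L 256M -n root vg00
lvcreate -L 1024M -n data01 vg01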

On the other hand such a division can lead to problems like wasted space: probably all disks will be the same size, while it is unlikely that binaries and transaction logs are as big as the datafiles, so much space would be wasted in the first volume group. There could also be cases where a "logical" division of data and binaries is not required or advisable, so one volume group would be preferable (e.g. disk space is scarce or valuable).

Appendix C: troubleshooting tips

If a RAID and LVM system is no longer bootable, the following steps can be used to diagnose the problem:

Boot with the Debian "Woody" 3.0 cdrom with the bf24 kernel. Exit to a shell and load the extension disk:

extdisk

Manually start the RAID arrays with:

raidstart /dev/md1

Repeat this for all RAID arrays. Look for error messages during RAID array start. Look at /proc/mdstat for array status.

Start the LVM subsystem with:

vgscan

Activate the volume groups with:

vgchange -a y vg00

Repeat this for all volume groups.

Check the file systems:

fsck /dev/vg00/root

Repeat this for all logical volumes containing a file system.

Mount the file systems over /target and chroot into /target (see the sketch below).
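A minimal sketch of those last two steps, using the device and volume names from this document (create the mount point if needed, and add or remove mounts to match your own logical volumes):

mkdir -p /target
mount /dev/vg00/root /target
mount /dev/md1 /target/boot
mount /dev/vg00/usr /target/usr
mount /dev/vg00/var /target/var
chroot /target /bin/sh
# from inside the chroot, configuration files can be corrected,
# lilo re-run, the initial RAM disk rebuilt, and so on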

Appendix D: kernel compilation tips

Here is some advice on compiling a kernel on a Debian system:

Install these packages: kernel-package, bzip2, libncurses5-dev.

Uncompress the kernel under /usr/src/kernel-source-version.

If a .config file is already available, copy it into the kernel source directory.

Always run make menuconfig (or another kernel configuration command), even if the .config file has been copied, and save the configuration (even if no modifications have been made). This prevents using a config file of an older kernel version with a new one.

Run:

make-kpkg clean

to clean the kernel source directory. This has to be done every time make menuconfig is called, and before the next step.

Run:

make-kpkg --version append-to-version -lvm kernel_image

to compile the kernel and modules and create the kernel package. The kernel package is created in /usr/src.

Appendix E: boot messages

Note: The system these messages refer to has four IDE controllers and two hard disks connected to the third and fourth controllers: the resulting devices are /dev/hde and /dev/hdg.

This is an example of kernel boot messages when using the stock bf2.4 kernel:

Linux version bf2.4 (root@zombie) (gcc version (Debian prerelease)) # BIOS-provided physical RAM map: BIOS-e820: a0000 (usable) BIOS-e820: f (reserved) BIOS-e820: fff0000 (usable) BIOS-e820: fff fff3000 (ACPI NVS) BIOS-e820: fff (ACPI data) BIOS-e820: ffff (reserved) On node 0 totalpages: zone(0): 4096 pages. zone(1): pages. zone(2): 0 pages. Local APIC disabled by BIOS -- reenabling. Found and enabled local APIC! Kernel command line: auto BOOT_IMAGE=Linux ro root=3a00 Initializing CPU#0

37 Detected MHz processor. Console: colour VGA+ 80x25 Calibrating delay loop BogoMIPS Memory: k/524224k available (1783k kernel code, 12324k reserved, 549k data, 280k init, Dentry-cache hash table entries: (order: 7, bytes) Inode-cache hash table entries: (order: 6, bytes) Mount-cache hash table entries: 8192 (order: 4, bytes) Buffer-cache hash table entries: (order: 5, bytes) Page-cache hash table entries: (order: 7, bytes) CPU: Before vendor init, caps: 0183fbff c1c7fbff , vendor = 2 CPU: L1 I Cache: 64K (64 bytes/line), D cache 64K (64 bytes/line) CPU: L2 Cache: 256K (64 bytes/line) CPU: After vendor init, caps: 0183fbff c1c7fbff Intel machine check architecture supported. Intel machine check reporting enabled on CPU#0. CPU: After generic, caps: 0183fbff c1c7fbff CPU: Common caps: 0183fbff c1c7fbff CPU: AMD Athlon(tm) Processor stepping 02 Enabling fast FPU save and restore... done. Checking hlt instruction... OK. Checking for popad bug... OK. POSIX conformance testing by UNIFIX enabled ExtINT on CPU#0 ESR value before enabling vector: ESR value after enabling vector: Using local APIC timer interrupts. calibrating APIC timer CPU clock speed is MHz.... host bus clock speed is MHz. cpu: 0, clocks: , slice: CPU0<T0: ,T1: ,D:9,S: ,C: > mtrr: v1.40 ( ) Richard Gooch ([email protected]) mtrr: detected mtrr type: Intel PCI: PCI BIOS revision 2.10 entry at 0xfb430, last bus=1 PCI: Using configuration type 1 PCI: Probing PCI hardware Unknown bridge resource 0: assuming transparent PCI: Using IRQ router VIA [1106/0686] at 00:07.0 PCI: Disabling Via external APIC routing Linux NET4.0 for Linux 2.4 Based upon Swansea University Computer Society NET3.039 Initializing RT netlink socket Starting kswapd VFS: Diskquotas version dquot_6.4.0 initialized Journalled Block Device driver loaded vga16fb: initializing vga16fb: mapped to 0xc00a0000 Console: switching to colour frame buffer device 80x30 fb0: VGA16 VGA frame buffer device Detected PS/2 Mouse Port. pty: 256 Unix98 ptys configured Serial driver version 5.05c ( ) with MANY_PORTS SHARE_IRQ SERIAL_PCI enabled ttys00 at 0x03f8 (irq = 4) is a 16550A 37

38 ttys01 at 0x02f8 (irq = 3) is a 16550A Real Time Clock Driver v1.10e block: 128 slots per queue, batch=32 RAMDISK driver initialized: 16 RAM disks of 4096K size 1024 blocksize Uniform Multi-Platform E-IDE driver Revision: 6.31 ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx VP_IDE: IDE controller on PCI bus 00 dev 39 VP_IDE: chipset revision 16 VP_IDE: not 100% native mode: will probe irqs later ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx VP_IDE: VIA vt82c686a (rev 22) IDE UDMA66 controller on pci00:07.1 ide0: BM-DMA at 0xc000-0xc007, BIOS settings: hda:pio, hdb:pio ide1: BM-DMA at 0xc008-0xc00f, BIOS settings: hdc:pio, hdd:dma HPT370: IDE controller on PCI bus 00 dev 98 PCI: Found IRQ 11 for device 00:13.0 PCI: Sharing IRQ 11 with 00:09.0 HPT370: chipset revision 3 HPT370: not 100% native mode: will probe irqs later ide2: BM-DMA at 0xe000-0xe007, BIOS settings: hde:dma, hdf:pio ide3: BM-DMA at 0xe008-0xe00f, BIOS settings: hdg:dma, hdh:pio hdd: ASUS CD-S400, ATAPI CD/DVD-ROM drive hde: IBM-DTLA , ATA DISK drive hdg: IBM-DTLA , ATA DISK drive ide1 at 0x170-0x177,0x376 on irq 15 ide2 at 0xd000-0xd007,0xd402 on irq 11 ide3 at 0xd800-0xd807,0xdc02 on irq 11 hde: sectors (46116 MB) w/1916kib Cache, CHS=89355/16/63, UDMA(44) hdg: sectors (46116 MB) w/1916kib Cache, CHS=89355/16/63, UDMA(44) hdd: ATAPI 40X CD-ROM drive, 128kB Cache Uniform CD-ROM driver Revision: 3.12 ide-floppy driver 0.97.sv Partition check: hde: [PTBL] [5606/255/63] hde1 hde2 hdg: hdg1 hdg2 Floppy drive(s): fd0 is 1.44M FDC 0 is a post Loading I2O Core - (c) Copyright 1999 Red Hat Software I2O configuration manager v (C) Copyright 1999 Red Hat Software loop: loaded (max 8 devices) Compaq CISS Driver (v 2.4.5) 8139cp 10/100 PCI Ethernet driver v0.0.6 (Nov 19, 2001) 8139cp: pci dev 00:09.0 (id 10ec:8139 rev 10) is not an 8139C+ compatible chip 8139cp: Try the "8139too" driver instead. 8139too Fast Ethernet driver PCI: Found IRQ 11 for device 00:09.0 PCI: Sharing IRQ 11 with 00:13.0 eth0: RealTek RTL8139 Fast Ethernet at 0xe081a000, 00:d0:70:00:cd:e4, IRQ 11 eth0: Identified 8139 chip type RTL-8139B HDLC support module revision 1.02 for Linux 2.4 Cronyx Ltd, Synchronous PPP and CISCO HDLC (c) 1994 Linux port (c) 1998 Building Number Three Ltd & Jan "Yenya" Kasprzak. ide-floppy driver 0.97.sv 38

39 Promise Fasttrak(tm) Softwareraid driver 0.03beta: No raid array found Highpoint HPT370 Softwareraid driver for linux version 0.01 No raid array found SCSI subsystem driver Revision: 1.00 Red Hat/Adaptec aacraid driver, Apr DC390: 0 adapters found 3ware Storage Controller device driver for Linux v w-xxxx: No cards with valid units found. request_module[scsi_hostadapter]: Root fs not mounted request_module[scsi_hostadapter]: Root fs not mounted i2o_scsi.c: Version chain_pool: 0 c19f44a0 (512 byte buffers X 4 can_queue X 0 i2o controllers) NET4: Linux TCP/IP 1.0 for NET4.0 IP Protocols: ICMP, UDP, TCP, IGMP IP: routing cache hash table of 4096 buckets, 32Kbytes TCP: Hash tables configured (established bind 32768) NET4: Unix domain sockets 1.0/SMP for Linux NET4.0. RAMDISK: Compressed image found at block 0 Freeing initrd memory: 1165k freed VFS: Mounted root (ext2 filesystem). md: md driver MAX_MD_DEVS=256, MD_SB_DISKS=27 md: raid1 personality registered as nr 3 LVM version rc4(ish)(03/10/2001) module loaded [events: ] md: bind<hdg1,1> [events: ] md: bind<hde1,2> md: hde1 s event counter: md: hdg1 s event counter: md: RAID level 1 does not need chunksize! Continuing anyway. md1: max total readahead window set to 124k md1: 1 data-disks, max readahead per data-disk: 124k raid1: device hde1 operational as mirror 0 raid1: device hdg1 operational as mirror 1 raid1: raid set md1 active with 2 out of 2 mirrors md: updating md1 RAID superblock on device md: hde1 [events: ]<6>(write) hde1 s sb offset: md: hdg1 [events: ]<6>(write) hdg1 s sb offset: [events: ] md: bind<hdg2,1> [events: ] md: bind<hde2,2> md: hde2 s event counter: md: hdg2 s event counter: md: RAID level 1 does not need chunksize! Continuing anyway. md2: max total readahead window set to 124k md2: 1 data-disks, max readahead per data-disk: 124k raid1: device hde2 operational as mirror 0 raid1: device hdg2 operational as mirror 1 raid1: raid set md2 active with 2 out of 2 mirrors md: updating md2 RAID superblock on device md: hde2 [events: ]<6>(write) hde2 s sb offset:

40 md: hdg2 [events: ]<6>(write) hdg2 s sb offset: mdadm: /dev/md2 has been started with 2 drives. vgscan -- reading all physical volumes (this may take a while...) vgscan -- found inactive volume group "vg00" vgscan -- "/etc/lvmtab" and "/etc/lvmtab.d" successfully created vgscan -- WARNING: This program does not do a VGDA backup of your volume group vgchange -- volume group "vg00" successfully activated kjournald starting. Commit interval 5 seconds EXT3-fs: lvm(58,0): orphan cleanup on readonly fs EXT3-fs: lvm(58,0): 2 orphan inodes deleted EXT3-fs: recovery complete. EXT3-fs: mounted filesystem with ordered data mode. VFS: Mounted root (ext3 filesystem) readonly. change_root: old root has d_count=2 Freeing unused kernel memory: 280k freed INIT: version 2.84 booting Loading /etc/console/boottime.kmap.gz Activating swap. Adding Swap: k swap-space (priority -1) Checking root file system... fsck 1.27 (8-Mar-2002) /dev/vg00/root: clean, 6752/65536 files, 40628/ blocks EXT3 FS , 10 Jan 2002 on lvm(58,0), internal journal System time was Sat Jan 4 21:18:33 UTC Setting the System Clock using the Hardware Clock as reference... System Clock set. System local time is now Sat Jan 4 21:18:35 UTC Calculating module dependencies... done. Loading modules: usb-uhci usb.c: registered new driver usbdevfs usb.c: registered new driver hub usb-uhci.c: $Revision: $ time 10:29:43 Apr usb-uhci.c: High bandwidth mode enabled PCI: Found IRQ 12 for device 00:07.2 PCI: Sharing IRQ 12 with 00:07.3 usb-uhci.c: USB UHCI at I/O 0xc400, IRQ 12 usb-uhci.c: Detected 2 ports usb.c: new USB bus registered, assigned bus number 1 hub.c: USB hub found hub.c: 2 ports detected PCI: Found IRQ 12 for device 00:07.3 PCI: Sharing IRQ 12 with 00:07.2 usb-uhci.c: USB UHCI at I/O 0xc800, IRQ 12 usb-uhci.c: Detected 2 ports usb.c: new USB bus registered, assigned bus number 2 hub.c: USB hub found hub.c: 2 ports detected PCI: Found IRQ 12 for device 00:07.3 PCI: Sharing IRQ 12 with 00:07.2 usb-uhci.c: USB UHCI at I/O 0xc800, IRQ 12 usb-uhci.c: Detected 2 ports usb.c: new USB bus registered, assigned bus number 2 hub.c: USB hub found 40

41 hub.c: 2 ports detected usb-uhci.c: v1.275:usb Universal Host Controller Interface driver input usbkbd usb.c: registered new driver keyboard usbkbd.c: :USB HID Boot Protocol keyboard driver keybdev Setting up LVM Volume Groups... vgscan -- reading all physical volumes (this may take a while...) vgscan -- found active volume group "vg00" vgscan -- "/etc/lvmtab" and "/etc/lvmtab.d" successfully created vgscan -- WARNING: This program does not do a VGDA backup of your volume group vgchange -- volume group "vg00" already active Starting RAID devices: done. Checking all file systems... fsck 1.27 (8-Mar-2002) /dev/md1: clean, 28/5136 files, 3711/20544 blocks /dev/vg00/home: clean, 11/32768 files, 8268/ blocks /dev/vg00/opt: clean, 11/4096 files, 1564/16384 blocks /dev/vg00/tmp: clean, 11/32768 files, 8268/ blocks /dev/vg00/usr: clean, 13944/ files, / blocks /dev/vg00/var: clean, 1144/32768 files, 40846/ blocks Setting kernel variables. Loading the saved-state of the serial devices... /dev/ttys0 at 0x03f8 (irq = 4) is a 16550A /dev/ttys1 at 0x02f8 (irq = 3) is a 16550A Mounting local filesystems... /dev/md1 on /boot type ext2 (rw) kjournald starting. Commit interval 5 seconds EXT3 FS , 10 Jan 2002 on lvm(58,1), internal journal EXT3-fs: mounted filesystem with ordered data mode. /dev/vg00/home on /home type ext3 (rw) kjournald starting. Commit interval 5 seconds EXT3 FS , 10 Jan 2002 on lvm(58,2), internal journal EXT3-fs: mounted filesystem with ordered data mode. /dev/vg00/opt on /opt type ext3 (rw) kjournald starting. Commit interval 5 seconds EXT3 FS , 10 Jan 2002 on lvm(58,3), internal journal EXT3-fs: mounted filesystem with ordered data mode. /dev/vg00/tmp on /tmp type ext3 (rw) kjournald starting. Commit interval 5 seconds EXT3 FS , 10 Jan 2002 on lvm(58,4), internal journal EXT3-fs: mounted filesystem with ordered data mode. /dev/vg00/usr on /usr type ext3 (rw) kjournald starting. Commit interval 5 seconds EXT3 FS , 10 Jan 2002 on lvm(58,5), internal journal EXT3-fs: mounted filesystem with ordered data mode. /dev/vg00/var on /var type ext3 (rw) Running 0dns-down to make sure resolv.conf is ok...done. Cleaning: /etc/network/ifstate. Setting up IP spoofing protection: rp_filter. Configuring network interfaces: eth0: Setting half-duplex based on auto-negotiated partner a done. 41

42 Starting portmap daemon: portmap. Setting the System Clock using the Hardware Clock as reference... System Clock set. Local time: Sat Jan 4 22:18:42 CET 2003 Cleaning: /tmp /var/lock /var/run. Initializing random number generator... done. Recovering nvi editor sessions... done. INIT: Entering runlevel: 2 Starting system log daemon: syslogd. Starting kernel log daemon: klogd. Starting NFS common utilities: statd. Starting mouse interface server: gpm. Starting internet superserver: inetd. Starting printer spooler: lpd. Not starting NFS kernel daemon: No exports. Starting OpenBSD Secure Shell server: sshd. Starting RAID monitor daemon: mdadm -F. Starting deferred execution scheduler: atd. Starting periodic command scheduler: cron. Debian GNU/Linux 3.0 debian tty1 debian login: This is an example of kernel boot messages while using a cutom kernel with static RAID and LVM support: Linux version lvm (root@debian) (gcc version (Debian prerelease)) #1 BIOS-provided physical RAM map: BIOS-e820: a0000 (usable) BIOS-e820: f (reserved) BIOS-e820: fff0000 (usable) BIOS-e820: fff fff3000 (ACPI NVS) BIOS-e820: fff (ACPI data) BIOS-e820: ffff (reserved) On node 0 totalpages: zone(0): 4096 pages. zone(1): pages. zone(2): 0 pages. Local APIC disabled by BIOS -- reenabling. Found and enabled local APIC! Kernel command line: auto BOOT_IMAGE=Linux ro root=3a00 Initializing CPU#0 Detected MHz processor. Console: colour VGA+ 80x25 Calibrating delay loop BogoMIPS Memory: k/524224k available (2051k kernel code, 12772k reserved, 652k data, 292k init, Dentry-cache hash table entries: (order: 7, bytes) Inode-cache hash table entries: (order: 6, bytes) Mount-cache hash table entries: 8192 (order: 4, bytes) Buffer-cache hash table entries: (order: 5, bytes) Page-cache hash table entries: (order: 7, bytes) 42

43 CPU: Before vendor init, caps: 0183fbff c1c7fbff , vendor = 2 CPU: L1 I Cache: 64K (64 bytes/line), D cache 64K (64 bytes/line) CPU: L2 Cache: 256K (64 bytes/line) CPU: After vendor init, caps: 0183fbff c1c7fbff Intel machine check architecture supported. Intel machine check reporting enabled on CPU#0. CPU: After generic, caps: 0183fbff c1c7fbff CPU: Common caps: 0183fbff c1c7fbff Enabling fast FPU save and restore... done. Checking hlt instruction... OK. Checking for popad bug... OK. POSIX conformance testing by UNIFIX mtrr: v1.40 ( ) Richard Gooch ([email protected]) mtrr: detected mtrr type: Intel CPU: Before vendor init, caps: 0183fbff c1c7fbff , vendor = 2 CPU: L1 I Cache: 64K (64 bytes/line), D cache 64K (64 bytes/line) CPU: L2 Cache: 256K (64 bytes/line) CPU: After vendor init, caps: 0183fbff c1c7fbff Intel machine check reporting enabled on CPU#0. CPU: After generic, caps: 0183fbff c1c7fbff CPU: Common caps: 0183fbff c1c7fbff CPU0: AMD Athlon(tm) Processor stepping 02 per-cpu timeslice cutoff: usecs. SMP motherboard not detected. enabled ExtINT on CPU#0 ESR value before enabling vector: ESR value after enabling vector: Using local APIC timer interrupts. calibrating APIC timer CPU clock speed is MHz.... host bus clock speed is MHz. cpu: 0, clocks: , slice: CPU0<T0: ,T1: ,D:10,S: ,C: > Waiting on wait_init_idle (map = 0x0) All processors have done init_idle PCI: PCI BIOS revision 2.10 entry at 0xfb430, last bus=1 PCI: Using configuration type 1 PCI: Probing PCI hardware Unknown bridge resource 0: assuming transparent PCI: Using IRQ router VIA [1106/0686] at 00:07.0 PCI: Disabling Via external APIC routing Linux NET4.0 for Linux 2.4 Based upon Swansea University Computer Society NET3.039 Initializing RT netlink socket Starting kswapd VFS: Diskquotas version dquot_6.4.0 initialized Journalled Block Device driver loaded devfs: v1.10 ( ) Richard Gooch ([email protected]) devfs: boot_options: 0x1 vga16fb: initializing vga16fb: mapped to 0xc00a0000 Console: switching to colour frame buffer device 80x30 fb0: VGA16 VGA frame buffer device 43

44 Detected PS/2 Mouse Port. pty: 256 Unix98 ptys configured Serial driver version 5.05c ( ) with MANY_PORTS SHARE_IRQ SERIAL_PCI enabled ttys00 at 0x03f8 (irq = 4) is a 16550A ttys01 at 0x02f8 (irq = 3) is a 16550A Real Time Clock Driver v1.10e block: 128 slots per queue, batch=32 RAMDISK driver initialized: 16 RAM disks of 4096K size 1024 blocksize Uniform Multi-Platform E-IDE driver Revision: 6.31 ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx VP_IDE: IDE controller on PCI bus 00 dev 39 VP_IDE: chipset revision 16 VP_IDE: not 100% native mode: will probe irqs later ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx VP_IDE: VIA vt82c686a (rev 22) IDE UDMA66 controller on pci00:07.1 ide0: BM-DMA at 0xc000-0xc007, BIOS settings: hda:pio, hdb:pio ide1: BM-DMA at 0xc008-0xc00f, BIOS settings: hdc:pio, hdd:dma HPT370: IDE controller on PCI bus 00 dev 98 PCI: Found IRQ 11 for device 00:13.0 PCI: Sharing IRQ 11 with 00:09.0 HPT370: chipset revision 3 HPT370: not 100% native mode: will probe irqs later ide2: BM-DMA at 0xe000-0xe007, BIOS settings: hde:dma, hdf:pio ide3: BM-DMA at 0xe008-0xe00f, BIOS settings: hdg:dma, hdh:pio hdd: ASUS CD-S400, ATAPI CD/DVD-ROM drive hde: IBM-DTLA , ATA DISK drive hdg: IBM-DTLA , ATA DISK drive ide1 at 0x170-0x177,0x376 on irq 15 ide2 at 0xd000-0xd007,0xd402 on irq 11 ide3 at 0xd800-0xd807,0xdc02 on irq 11 hde: sectors (46116 MB) w/1916kib Cache, CHS=89355/16/63, UDMA(44) hdg: sectors (46116 MB) w/1916kib Cache, CHS=89355/16/63, UDMA(44) hdd: ATAPI 40X CD-ROM drive, 128kB Cache, UDMA(33) Uniform CD-ROM driver Revision: 3.12 ide-floppy driver 0.97.sv Partition check: /dev/ide/host2/bus0/target0/lun0: [PTBL] [5606/255/63] p1 p2 /dev/ide/host2/bus1/target0/lun0: p1 p2 Floppy drive(s): fd0 is 1.44M FDC 0 is a post Loading I2O Core - (c) Copyright 1999 Red Hat Software I2O configuration manager v (C) Copyright 1999 Red Hat Software loop: loaded (max 8 devices) Compaq CISS Driver (v 2.4.5) 8139cp 10/100 PCI Ethernet driver v0.0.6 (Nov 19, 2001) 8139cp: pci dev 00:09.0 (id 10ec:8139 rev 10) is not an 8139C+ compatible chip 8139cp: Try the "8139too" driver instead. 8139too Fast Ethernet driver PCI: Found IRQ 11 for device 00:09.0 PCI: Sharing IRQ 11 with 00:13.0 eth0: RealTek RTL8139 Fast Ethernet at 0xe081e000, 00:d0:70:00:cd:e4, IRQ 11 eth0: Identified 8139 chip type RTL-8139B 44

45 HDLC support module revision 1.02 for Linux 2.4 Cronyx Ltd, Synchronous PPP and CISCO HDLC (c) 1994 Linux port (c) 1998 Building Number Three Ltd & Jan "Yenya" Kasprzak. ide-floppy driver 0.97.sv Promise Fasttrak(tm) Softwareraid driver 0.03beta: No raid array found Highpoint HPT370 Softwareraid driver for linux version 0.01 No raid array found SCSI subsystem driver Revision: 1.00 Red Hat/Adaptec aacraid driver, Jan DC390: 0 adapters found 3ware Storage Controller device driver for Linux v w-xxxx: No cards with valid units found. request_module[scsi_hostadapter]: Root fs not mounted request_module[scsi_hostadapter]: Root fs not mounted Linux Kernel Card Services options: [pci] [cardbus] [pm] i2o_scsi.c: Version chain_pool: 0 c19fbc80 (512 byte buffers X 4 can_queue X 0 i2o controllers) md: raid1 personality registered as nr 3 md: raid5 personality registered as nr 4 raid5: measuring checksumming speed 8regs : MB/sec 32regs : MB/sec pii_mmx : MB/sec p5_mmx : MB/sec raid5: using function: p5_mmx ( MB/sec) md: md driver MAX_MD_DEVS=256, MD_SB_DISKS=27 md: Autodetecting RAID arrays. [events: ] [events: ] [events: ] [events: ] md: autorun... md: considering ide/host2/bus1/target0/lun0/part2... md: adding ide/host2/bus1/target0/lun0/part2... md: adding ide/host2/bus0/target0/lun0/part2... md: created md2 md: bind<ide/host2/bus0/target0/lun0/part2,1> md: bind<ide/host2/bus1/target0/lun0/part2,2> md: running: <ide/host2/bus1/target0/lun0/part2><ide/host2/bus0/target0/lun0/part2> md: ide/host2/bus1/target0/lun0/part2 s event counter: md: ide/host2/bus0/target0/lun0/part2 s event counter: md: RAID level 1 does not need chunksize! Continuing anyway. md2: max total readahead window set to 124k md2: 1 data-disks, max readahead per data-disk: 124k raid1: device ide/host2/bus1/target0/lun0/part2 operational as mirror 1 raid1: device ide/host2/bus0/target0/lun0/part2 operational as mirror 0 raid1: raid set md2 active with 2 out of 2 mirrors md: updating md2 RAID superblock on device md: ide/host2/bus1/target0/lun0/part2 [events: ]<6>(write) ide/host2/bus1/target0/lu md: ide/host2/bus0/target0/lun0/part2 [events: ]<6>(write) ide/host2/bus0/target0/lu md: considering ide/host2/bus1/target0/lun0/part

46 md: adding ide/host2/bus1/target0/lun0/part1... md: adding ide/host2/bus0/target0/lun0/part1... md: created md1 md: bind<ide/host2/bus0/target0/lun0/part1,1> md: bind<ide/host2/bus1/target0/lun0/part1,2> md: running: <ide/host2/bus1/target0/lun0/part1><ide/host2/bus0/target0/lun0/part1> md: ide/host2/bus1/target0/lun0/part1 s event counter: md: ide/host2/bus0/target0/lun0/part1 s event counter: md: RAID level 1 does not need chunksize! Continuing anyway. md1: max total readahead window set to 124k md1: 1 data-disks, max readahead per data-disk: 124k raid1: device ide/host2/bus1/target0/lun0/part1 operational as mirror 1 raid1: device ide/host2/bus0/target0/lun0/part1 operational as mirror 0 raid1: raid set md1 active with 2 out of 2 mirrors md: updating md1 RAID superblock on device md: ide/host2/bus1/target0/lun0/part1 [events: ]<6>(write) ide/host2/bus1/target0/lu md: ide/host2/bus0/target0/lun0/part1 [events: ]<6>(write) ide/host2/bus0/target0/lu md:... autorun DONE. LVM version rc4(ish)(03/10/2001) NET4: Linux TCP/IP 1.0 for NET4.0 IP Protocols: ICMP, UDP, TCP, IGMP IP: routing cache hash table of 4096 buckets, 32Kbytes TCP: Hash tables configured (established bind 32768) NET4: Unix domain sockets 1.0/SMP for Linux NET4.0. ds: no socket drivers loaded! RAMDISK: Compressed image found at block 0 Freeing initrd memory: 1031k freed VFS: Mounted root (ext2 filesystem). Mounted devfs on /dev vgscan -- reading all physical volumes (this may take a while...) vgscan -- found inactive volume group "vg00" vgscan -- "/etc/lvmtab" and "/etc/lvmtab.d" successfully created vgscan -- WARNING: This program does not do a VGDA backup of your volume group vgchange -- volume group "vg00" successfully activated kjournald starting. Commit interval 5 seconds EXT3-fs: mounted filesystem with ordered data mode. VFS: Mounted root (ext3 filesystem) readonly. change_root: old root has d_count=2 Mounted devfs on /dev Freeing unused kernel memory: 292k freed INIT: version 2.84 booting Creating extra device nodes...done. Started device management daemon v for /dev Loading /etc/console/boottime.kmap.gz Activating swap. Adding Swap: k swap-space (priority -1) Checking root file system... fsck 1.27 (8-Mar-2002) /dev/vg00/root: clean, 6766/65536 files, 47853/ blocks EXT3 FS , 10 Jan 2002 on lvm(58,0), internal journal System time was Sat Jan 4 22:26:31 UTC

47 Setting the System Clock using the Hardware Clock as reference... System Clock set. System local time is now Sat Jan 4 22:26:33 UTC Calculating module dependencies... done. Loading modules: usb-uhci usb.c: registered new driver usbdevfs usb.c: registered new driver hub usb-uhci.c: $Revision: $ time 17:12:36 Jan usb-uhci.c: High bandwidth mode enabled PCI: Found IRQ 12 for device 00:07.2 PCI: Sharing IRQ 12 with 00:07.3 usb-uhci.c: USB UHCI at I/O 0xc400, IRQ 12 usb-uhci.c: Detected 2 ports usb.c: new USB bus registered, assigned bus number 1 hub.c: USB hub found hub.c: 2 ports detected PCI: Found IRQ 12 for device 00:07.3 PCI: Sharing IRQ 12 with 00:07.2 usb-uhci.c: USB UHCI at I/O 0xc800, IRQ 12 usb-uhci.c: Detected 2 ports usb.c: new USB bus registered, assigned bus number 2 hub.c: USB hub found hub.c: 2 ports detected usb-uhci.c: v1.275:usb Universal Host Controller Interface driver input usbkbd usb.c: registered new driver keyboard usbkbd.c: :USB HID Boot Protocol keyboard driver keybdev Setting up LVM Volume Groups... vgscan -- reading all physical volumes (this may take a while...) modprobe: Can t locate module /dev/nb vgscan -- found active volume group "vg00" vgscan -- "/etc/lvmtab" and "/etc/lvmtab.d" successfully created vgscan -- WARNING: This program does not do a VGDA backup of your volume group modprobe: Can t locate module /dev/nb vgchange -- volume group "vg00" already active Starting RAID devices: done. Checking all file systems... fsck 1.27 (8-Mar-2002) /dev/md1: clean, 28/5136 files, 3799/20544 blocks /dev/vg00/home: clean, 11/32768 files, 8268/ blocks /dev/vg00/opt: clean, 11/4096 files, 1564/16384 blocks /dev/vg00/tmp: clean, 11/32768 files, 8268/ blocks /dev/vg00/usr: clean, 13967/ files, / blocks /dev/vg00/var: clean, 1153/32768 files, 26342/ blocks Setting kernel variables. Loading the saved-state of the serial devices... /dev/tts/0 at 0x03f8 (irq = 4) is a 16550A /dev/tts/1 at 0x02f8 (irq = 3) is a 16550A Mounting local filesystems... /dev/md1 on /boot type ext2 (rw) kjournald starting. Commit interval 5 seconds EXT3 FS , 10 Jan 2002 on lvm(58,1), internal journal EXT3-fs: mounted filesystem with ordered data mode. 47

48 /dev/vg00/home on /home type ext3 (rw) kjournald starting. Commit interval 5 seconds EXT3 FS , 10 Jan 2002 on lvm(58,2), internal journal EXT3-fs: mounted filesystem with ordered data mode. /dev/vg00/opt on /opt type ext3 (rw) kjournald starting. Commit interval 5 seconds EXT3 FS , 10 Jan 2002 on lvm(58,3), internal journal EXT3-fs: mounted filesystem with ordered data mode. /dev/vg00/tmp on /tmp type ext3 (rw) kjournald starting. Commit interval 5 seconds EXT3 FS , 10 Jan 2002 on lvm(58,4), internal journal EXT3-fs: mounted filesystem with ordered data mode. /dev/vg00/usr on /usr type ext3 (rw) kjournald starting. Commit interval 5 seconds EXT3 FS , 10 Jan 2002 on lvm(58,5), internal journal EXT3-fs: mounted filesystem with ordered data mode. /dev/vg00/var on /var type ext3 (rw) Running 0dns-down to make sure resolv.conf is ok...done. Cleaning: /etc/network/ifstate. Setting up IP spoofing protection: rp_filter. Configuring network interfaces: eth0: Setting half-duplex based on auto-negotiated partner a done. Starting portmap daemon: portmap. Setting the System Clock using the Hardware Clock as reference... System Clock set. Local time: Sat Jan 4 23:26:40 CET 2003 Cleaning: /tmp /var/lock /var/run. Initializing random number generator... done. Recovering nvi editor sessions... done. INIT: Entering runlevel: 2 Starting system log daemon: syslogd. Starting kernel log daemon: klogd. Starting NFS common utilities: statd. Starting mouse interface server: gpm. spurious 8259A interrupt: IRQ7. Starting internet superserver: inetd. Starting printer spooler: lpd. Not starting NFS kernel daemon: No exports. Starting OpenBSD Secure Shell server: sshd. starting RAID monitor daemon: mdadm -F. Starting deferred execution scheduler: atd. Starting periodic command scheduler: cron. Debian GNU/Linux 3.0 debian tty1 debian login: 48


More information

System administration basics

System administration basics Embedded Linux Training System administration basics Michael Opdenacker Thomas Petazzoni Free Electrons Copyright 2009, Free Electrons. Creative Commons BY SA 3.0 license Latest update: Dec 20, 2010, Document

More information

INF-110. GPFS Installation

INF-110. GPFS Installation INF-110 GPFS Installation Overview Plan the installation Before installing any software, it is important to plan the GPFS installation by choosing the hardware, deciding which kind of disk connectivity

More information

Areas Covered. Chapter 1 Features (Overview/Note) Chapter 2 How to Use WebBIOS. Chapter 3 Installing Global Array Manager (GAM)

Areas Covered. Chapter 1 Features (Overview/Note) Chapter 2 How to Use WebBIOS. Chapter 3 Installing Global Array Manager (GAM) PRIMERGY RX300 S2 Onboard SCSI RAID User s Guide Areas Covered Chapter 1 Features (Overview/Note) This chapter explains the overview of the disk array and features of the SCSI array controller. Chapter

More information

Parallels Virtuozzo Containers 4.7 for Linux Readme

Parallels Virtuozzo Containers 4.7 for Linux Readme Parallels Virtuozzo Containers 4.7 for Linux Readme This document provides the first-priority information about Parallels Virtuozzo Containers 4.7 for Linux and supplements the included documentation.

More information

English. Configuring SATA Hard Drive(s)

English. Configuring SATA Hard Drive(s) Configuring SATA Hard Drive(s) To configure SATA hard drive(s), follow the steps below: (1) Install SATA hard drive(s) in your system. (2) Configure SATA controller mode and boot sequence in BIOS Setup.

More information

Linux Disaster Recovery best practices with rear

Linux Disaster Recovery best practices with rear Relax and Recover Linux Disaster Recovery best practices with rear Gratien D'haese IT3 Consultants Who am I Independent Unix System Engineer since 1996 Unix user since 1986 Linux user since 1991 Open Source

More information

Acronis Backup & Recovery 11.5

Acronis Backup & Recovery 11.5 Acronis Backup & Recovery 11.5 Installation Guide Applies to the following editions: Advanced Server Virtual Edition Advanced Server SBS Edition Advanced Workstation Server for Linux Server for Windows

More information

Other trademarks and Registered trademarks include: LONE-TAR. AIR-BAG. RESCUE-RANGER TAPE-TELL. CRONY. BUTTSAVER. SHELL-LOCK

Other trademarks and Registered trademarks include: LONE-TAR. AIR-BAG. RESCUE-RANGER TAPE-TELL. CRONY. BUTTSAVER. SHELL-LOCK Quick Start Guide Copyright Statement Copyright Lone Star Software Corp. 1983-2013 ALL RIGHTS RESERVED. All content viewable or accessible from this guide is the sole property of Lone Star Software Corp.

More information

Extended installation documentation

Extended installation documentation Extended installation documentation Version 3.0-1 Revision 12431 Stand: March 13, 2012 Alle Rechte vorbehalten. / All rights reserved. (c) 2002 bis 2012 Univention GmbH Mary-Somerville-Straße 1 28359 Bremen

More information

Secure Perfect RAID Recovery Instructions

Secure Perfect RAID Recovery Instructions Secure Perfect RAID Recovery Instructions Contents Overview Dell PowerEdge 2500 RAID Level 1 Recovery Instructions Overview NOTE If you possess a previous version of this document, you may notice changes

More information

HP Embedded SATA RAID Controller

HP Embedded SATA RAID Controller HP Embedded SATA RAID Controller User Guide Part number: 391679-002 Second Edition: August 2005 Legal notices Copyright 2005 Hewlett-Packard Development Company, L.P. The information contained herein is

More information

Guide to SATA Hard Disks Installation and RAID Configuration

Guide to SATA Hard Disks Installation and RAID Configuration Guide to SATA Hard Disks Installation and RAID Configuration 1. Guide to SATA Hard Disks Installation... 2 1.1 Serial ATA (SATA) Hard Disks Installation... 2 2. Guide to RAID Configurations... 3 2.1 Introduction

More information

Installation GENTOO + RAID sur VMWARE

Installation GENTOO + RAID sur VMWARE Installation GENTOO + RAID sur VMWARE Chargement modules modprobe raid0 modprobe raid1 modprobe dm-mod Partitionnement /dev/sdx1 ext2 32M /dev/sdx2 swap =RAM /dev/sdx3 ext3 reste du disque Pour supprimer

More information

Sophos Anti-Virus for Linux configuration guide. Product version: 9

Sophos Anti-Virus for Linux configuration guide. Product version: 9 Sophos Anti-Virus for Linux configuration guide Product version: 9 Document date: September 2015 Contents 1 About this guide...5 2 About Sophos Anti-Virus for Linux...6 2.1 What Sophos Anti-Virus does...6

More information