Root-on-LVM-on-RAID HOWTO




Massimiliano Ferrero <m.ferrero@midhgard.it>

This document describes a procedure to install a Linux system with software RAID and Logical Volume Manager (LVM) support and with a root file system stored in an LVM logical volume. The procedure can be used to install such a system from scratch, without the need to install a normal system first and then convert root to LVM and RAID.

Introduction

New Versions of This Document

You can always view the latest version of this document at http://www.midhgard.it/docs/index_en.html

Copyright, License and Disclaimer

This document, Root-on-LVM-on-RAID HOWTO, is copyrighted (c) 2003 by Massimiliano Ferrero. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.1 or any later version published by the Free Software Foundation; with no Invariant Sections, with no Front-Cover Texts, and with no Back-Cover Texts. A copy of the license is available at http://www.gnu.org/copyleft/fdl.html

Linux is a registered trademark of Linus Torvalds.

Disclaimer

This document records the methods used by the authors. All reasonable effort has been expended to make sure that the description is indeed accurate. The document is distributed in good faith and with the hope that its contents may prove useful to others. There is, however, no guarantee, express or implied, that the methods and procedures described herein are fit for the stated purpose. The authors disclaim any and all liability for any and all consequences, direct or indirect, of the application of these methods. No liability for the contents of this document can be accepted. Use the concepts, examples and information at your own risk. There may be errors and inaccuracies that could be damaging to your system. Proceed with caution: although damage is highly unlikely, the authors do not take any responsibility.

All copyrights are held by their respective owners, unless specifically noted otherwise. Use of a term in this document should not be regarded as affecting the validity of any trademark or service mark. Naming of particular products or brands should not be seen as endorsements.

Credits / Contributors

This document is of course based on the work of many people. The author wishes to thank:

- George Karaolides <george(at)karaolides[dot]com> for his "Unofficial Kernel 2.4 Root-on-RAID and Root-on-LVM-on-RAID HOWTO" (http://karaolides.com/computing/howto/lvmraid/lvmraid.html). The procedure described here is largely based on his document.
- Eduard Bloch <blade(at)debian[dot]org> for the RAID and LVM extdisk (http://people.debian.org/~blade/install/lvm/) and for information on how to install in a chroot environment.
- Everybody else who has made information available in other documents or on Usenet.

Feedback

Feedback is most certainly welcome for this document. Send your additions, comments and criticisms to the following email address: <m.ferrero@midhgard.it>.

What this document is about...

This document describes a procedure to install a Linux system with software RAID and Logical Volume Manager (LVM) support and with a root file system stored in an LVM logical volume. As stated in the next section there are already other documents on this topic: this procedure differs from them in that it can be used to install such a system from scratch, without the need to install a "normal" system first and then convert root to LVM and RAID.

The procedure has been developed and tested using the Debian "Woody" 3.0 distribution and a 2.4 kernel. The install process and steps illustrated are those of the Debian setup, but this method should be valid for any distribution, provided that some user interaction is allowed during install (that is: pause the install process, open a shell and type some magic).

This procedure describes the installation of a system with two disks, using RAID 1 arrays (mirroring). All commands given refer to two IDE hard disks on the primary IDE channel (/dev/hda, /dev/hdb): the procedure can easily be modified for different disks, more RAID arrays or different RAID levels (e.g. RAID 5).

...and what is not

Converting a root file system to LVM

The situation where a system has already been installed without LVM support, or with root on a normal partition, has already been covered by other documents. Look at the LVM HOWTO, which has a section dedicated to this task (http://tldp.org/howto/lvm-howto/upgraderoottolvm.html). Look also at the excellent "Unofficial Kernel 2.4 Root-on-RAID and Root-on-LVM-on-RAID HOWTO" (http://karaolides.com/computing/howto/lvmraid/lvmraid.html) by George Karaolides: his HOWTO covers root migration from a standard partition to LVM extensively and is also a good source of other information about LVM.

General RAID and LVM info

For an introduction to RAID systems look at these pages and documents:

- RAID and Data Protection Solutions for Linux, by Linas Vepstas, lots of links and info: http://linas.org/linux/raid.html
- The Software-RAID HOWTO, by Jakob Østergaard: http://www.linux.org/docs/ldp/howto/software-raid-howto.html and http://www.tldp.org/howto/software-raid-howto.html
- What is RAID, by Mike Neuffer: http://www.uni-mainz.de/~neuffer/scsi/what_is_raid.html

For general info about the Logical Volume Manager look at:

- The LVM home page: http://www.sistina.com/products_lvm.htm
- LVM HOWTO, by Sistina Software Inc.: http://tldp.org/howto/lvm-howto.html

RAID and LVM with 2.0.x and 2.2.x kernel versions

The procedure described in this HOWTO has been tested only with kernel versions 2.4.x. For information about RAID on older kernel versions look at:

- Root RAID HOWTO cookbook, by Michael A. Robinton: http://www.linux.org/docs/ldp/howto/root-raid-howto.html

Required Software

Required software is:

- Debian (http://www.debian.org) "Woody" 3.0 install CD (CD 1) or network install CD/floppy.

- Eduard Bloch's LVM-and-RAID extension disk (http://people.debian.org/~blade/install/lvm/). It can also be downloaded from this mirror (http://www.midhgard.it/files/lvm/extdisk/lar1440.bin). Note: the extension disk image dated 8-Dec-2002 seems to be missing the mkraid command. Use the mirror!
- The lvm10, initrd-tools, raidtools2 and mdadm packages (plus any packages they depend on).

Required Hardware

- A Linux compatible system: this is a loose definition at best, since there have been reports of Linux being installed on cash registers and MP3 players.
- A floppy disk drive, for loading the extension disk and perhaps for booting from install floppies.
- A CDROM drive or a Linux compatible network card, for booting and installing the system.
- At least two hard disks: with only one there would be little gain in installing a RAID enabled system, unless the installation of a second hard disk is already planned.

Some Warnings

Before starting to install the system, read sections Overview, Before starting install, Installing: disks and file systems and Installing: kernel, RAM disk and packages in their entirety. At some points of the install process more than one path is possible: this way the choices will be clear beforehand.

A software RAID system and/or a system with root-on-lvm requires special attention: in case of a crash, recovery is more complex than usual and special tools are needed (the same ones used to install it). KNOW WHAT YOU ARE DOING before you put such a system up and running.

Overview

This section presents an overview of the overall install process. In a few words, installing root over RAID and LVM consists of interrupting the normal install at the very beginning, loading RAID and LVM support, configuring RAID arrays and LVM volumes, installing a kernel with RAID and LVM support and then completing a normal install. Here are the installation steps in more detail:

1. Start the Debian Woody 3.0 installation, booting with the bf2.4 kernel.
2. Open a shell.
3. Load an extdisk with RAID and LVM support.
4. Partition the hard disks.

5. Create RAID arrays: one boot array and at least one additional array.
6. Start LVM.
7. Create LVM physical volumes: one for each additional RAID array (not for the boot array).
8. Create LVM volume groups.
9. Create LVM logical volumes.
10. Create and activate swap space.
11. Create file systems.
12. Create mount points and mount the target file systems.
13. Return to the main install menu.
14. Install a kernel with RAID and LVM support (only if using the stock bf2.4 Debian kernel; otherwise skip this step).
15. Configure the network.
16. Install the base system.
17. Open a shell and chroot into the target file system.
18. Configure APT.
19. Install a kernel with RAID and LVM support (only if using a custom kernel; if the stock Debian kernel has already been installed skip this step).
20. Install the RAID and LVM packages: lvm10, initrd-tools, raidtools2 and mdadm.
21. Optional: install devfsd.
22. Start LVM (reprise).
23. Install a RAM disk with LVM support.
24. Modify configuration files (/etc/raidtab, /etc/fstab, /etc/lilo.conf, /etc/modules).
25. Write the lilo configuration to disk.
26. Exit from the chrooted environment.
27. Return to the main install menu.
28. Reboot the system (reboot from hard disk, not from the install CDROM).
29. Terminate the installation.

Note: The kernel install step can be done at different moments; if the stock Debian kernel is to be used it is more convenient to do it before chrooting into the target file systems, while if a custom kernel is to be used it is suggested to do it after chrooting. Look at section RAID and LVM support loaded as a module vs statically linked for more details.

Before starting install

Before starting, acquire all the software indicated in section Required Software. Below are links to a kernel with static RAID and LVM support and to RAM disk images that can be used to boot the system. Before downloading any of these, read sections Install custom kernel with statically compiled RAID and LVM and Install RAM disk with LVM support.

- Kernel package 2.4.18 with static RAID and LVM support: http://www.midhgard.it/files/lvm/kernel/kernel-image-2.4.18-lvm_1.0_i386.deb
- RAM disk for kernel 2.4.18-bf2.4 with modular RAID and LVM support: http://www.midhgard.it/files/lvm/initrd/bf2.4/initrd.gz
- RAM disk for a kernel with static RAID and LVM support: http://www.midhgard.it/files/lvm/initrd/custom/initrd.gz

Preparing RAID and LVM extension disk

Write the extension disk image to a blank formatted floppy. From a Linux system do:

  dd if=lar1440.bin of=/dev/fd0 bs=1024 conv=sync ; sync

From a Windows 9x or MS-DOS system use RaWrite3 or a similar tool (http://www.minix-vmd.org/pub/minix-vmd/dosutil/); from a Windows NT, 2K or XP system use NTRawrite (http://sourceforge.net/projects/ntrawrite/). Look at this section of the Debian install manual for more info: http://www.debian.org/releases/stable/i386/ch-install-methods.en.html#s-create-floppy

Preparing script and configuration files disk

To speed up the install process, place on another formatted floppy disk some scripts and configuration files that will be used during install:

- The install script install_lvm1 (http://www.midhgard.it/files/lvm/install_lvm1): this script executes almost all the commands necessary to create the RAID arrays, logical volumes and file systems and to mount them. The same commands can be executed one by one, but using the script speeds up the whole process. The suggestion is to complete at least one installation by hand, then use the script. The script must be adapted to the disk and logical volume configuration. A rough sketch of such a script is shown after this list.
- File /etc/fstab (http://www.midhgard.it/files/lvm/fstab)
- File /etc/lilo.conf (http://www.midhgard.it/files/lvm/lilo.conf)
- File /etc/raidtab (http://www.midhgard.it/files/lvm/raidtab)
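The actual install_lvm1 script is not reproduced in this document. Purely as an illustration of what it does, a script of this kind might look like the following sketch, which simply strings together the commands described in the next sections for the two-disk layout used throughout this HOWTO (adapt device names, sizes and volume names before use):

  #!/bin/sh
  # Hypothetical sketch of an install_lvm1-style helper script.
  # Assumes the disks are already partitioned and /etc/raidtab is in place.
  set -e
  mkraid /dev/md1
  mkraid /dev/md2
  vgscan
  pvcreate /dev/md2
  vgcreate -A n vg00 /dev/md2
  lvcreate -A n -L 128 -n root vg00
  lvcreate -A n -L 128 -n home vg00
  lvcreate -A n -L 16 -n opt vg00
  lvcreate -A n -L 128 -n tmp vg00
  lvcreate -A n -L 256 -n usr vg00
  lvcreate -A n -L 128 -n var vg00
  lvcreate -A n -L 128 -n swap vg00
  mkswap /dev/vg00/swap
  swapon /dev/vg00/swap
  mke2fs /dev/md1
  for lv in root home opt tmp usr var; do mke2fs -j /dev/vg00/$lv; done
  mount /dev/vg00/root /target
  mkdir /target/boot /target/home /target/opt /target/tmp /target/usr /target/var
  mount /dev/md1 /target/boot
  for lv in home opt tmp usr var; do mount /dev/vg00/$lv /target/$lv; done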

Installing: disks and file systems

This section of the document goes from the beginning of the installation to mounting the file systems. The main steps consist of partitioning the disks, preparing RAID and LVM, and creating the file systems.

Start installation

Boot from Debian 3.0 "Woody" CD 1 and at the lilo prompt type bf24, to boot with kernel 2.4.18. Choose the install language and configure the keyboard, then open a shell (this option is near the end of the menu).

Load RAID and LVM support

Insert the "LVM-and-RAID" floppy into the floppy drive and use:

  extdisk

to load RAID and LVM support. The following (or similar) messages should be displayed; ignore any warning message regarding the kernel:

  Trying the floppy drives, please wait...
  Locating a new mount point...
  Copying resource files...
  Warning: loading /ext1/lib/modules/2.4.18-bf2.4/kernel/drivers/md/md.o will taint the kernel
  Warning: loading /ext1/lib/modules/2.4.18-bf2.4/kernel/drivers/md/linear.o will taint the kernel
  Warning: loading /ext1/lib/modules/2.4.18-bf2.4/kernel/drivers/md/xor.o will taint the kernel
  Warning: loading /ext1/lib/modules/2.4.18-bf2.4/kernel/drivers/md/raid0.o will taint the kernel
  Warning: loading /ext1/lib/modules/2.4.18-bf2.4/kernel/drivers/md/raid1.o will taint the kernel
  Warning: loading /ext1/lib/modules/2.4.18-bf2.4/kernel/drivers/md/raid5.o will taint the kernel
  Warning: loading /ext1/lib/modules/2.4.18-bf2.4/kernel/drivers/md/lvm-mod.o will taint the kernel
  Done: LVM and RAID tools available now.

After the extension disk has been loaded the floppy can be removed from the drive.

Note: The Debian install process works by booting a kernel and mounting a RAM disk; extdisk therefore loads the RAID and LVM commands from the floppy disk into the RAM disk. Please bear in mind that the same fate befalls any configuration file (such as /etc/raidtab): any file copied to the /etc folder during install is lost when the machine is rebooted. This is true at least until the target file systems are created and mounted; only at that point can configuration files be copied to the "real" /etc (i.e. /target/etc) and packages installed in the "real" file systems.

Partition hard disks

Partition the hard disks: the minimal configuration requires two RAID arrays, one for the boot file system and one for the LVM volume group.

The procedure has been written for two IDE disks on the first IDE channel (/dev/hda, /dev/hdb): using fdisk create two partitions on each disk, the first one 20 or 25 MB large, the second one using all remaining space. The first partition will be used to hold the boot volume, the second one will hold the LVM volume group. If the two disks are not equal in size, the second partition must fit on both disks, so it has to be as large as the remaining space on the smaller disk. E.g. if the disks are 40000 MB and 30000 MB large, create 25 MB and 29975 MB partitions.

Warning! Partitioning hard disks can destroy all data. Check that the disks do not contain any valuable data before going on.

Use:

  fdisk /dev/hda

to partition the first hard disk: delete any existing partition, create one primary partition 25 MB large (/dev/hda1), mark it with type FD (Linux raid autodetect) and bootable, then create another primary partition with the required size and mark this too with type FD (Linux raid autodetect). Write the configuration to disk and quit from fdisk. Repeat the same step for the second disk:

  fdisk /dev/hdb

It is also possible to use cfdisk to partition the hard disks.

Create RAID arrays

Note: The steps from the current section to Create mount points and mount target file systems can be executed with a script; one can be found in section Preparing script and configuration files disk. It is suggested that the first installation is completed by hand, so that each step can be fully understood. If a script disk is to be used, mount it with:

  mount /dev/fd0 /floppy

Now it is necessary to create the RAID arrays: two arrays will be created, /dev/md1 and /dev/md2. /dev/md1 will be built from /dev/hda1 and /dev/hdb1, and will hold the boot file system. /dev/md2 will be built from /dev/hda2 and /dev/hdb2, and will become the physical volume that LVM will use. First the configuration file /etc/raidtab has to be created: edit it using nano-tiny or any other available editor or, more conveniently, prepare the file beforehand on a floppy and then just copy it from there to /etc. To do so mount such a floppy with:

  mount /dev/fd0 /floppy

then copy the file with:

  cp /floppy/raidtab /etc/

Here is a valid /etc/raidtab for /dev/md1 and /dev/md2 built over /dev/hda and /dev/hdb:

  raiddev /dev/md1
          raid-level              1
          nr-raid-disks           2
          nr-spare-disks          0
          chunk-size              32
          persistent-superblock   1
          device                  /dev/hda1
          raid-disk               0
          device                  /dev/hdb1
          raid-disk               1

  raiddev /dev/md2
          raid-level              1
          nr-raid-disks           2
          nr-spare-disks          0
          chunk-size              32
          persistent-superblock   1
          device                  /dev/hda2
          raid-disk               0
          device                  /dev/hdb2
          raid-disk               1

This file can be downloaded here: http://www.midhgard.it/files/lvm/raidtab

Once /etc/raidtab has been created or copied, the RAID arrays must be created. This is done using the mkraid command:

  mkraid /dev/md1
  mkraid /dev/md2

The following output should result:

  #mkraid /dev/md1
  handling MD device /dev/md1
  analyzing super-block
  disk 0: /dev/hda1, 32098kB, raid superblock at 32000kB
  disk 1: /dev/hdb1, 30712kB, raid superblock at 30592kB
  #
  #mkraid /dev/md2
  handling MD device /dev/md2
  analyzing super-block
  disk 0: /dev/hda2, 44998065kB, raid superblock at 44997952kB
  disk 1: /dev/hdb2, 45004176kB, raid superblock at 45004096kB

Note: As soon as the arrays are created, a rebuild thread is started to align the disks' status. This process can be lengthy and requires that all disk space is rewritten, so heavy disk activity will be generated. This is normal, so don't panic.
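As an aside, not part of the original procedure: if mkraid is missing from the extension disk but an mdadm binary is available (an assumption to verify on your media), equivalent RAID 1 arrays can be created directly with mdadm, which does not need /etc/raidtab at creation time. A sketch:

  # sketch only: -l sets the RAID level, -n the number of active devices
  mdadm --create /dev/md1 -l 1 -n 2 /dev/hda1 /dev/hdb1
  mdadm --create /dev/md2 -l 1 -n 2 /dev/hda2 /dev/hdb2

Note that /etc/raidtab is still needed later for RAID array management with raidtools2, so create it anyway as described above.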

Verify that the RAID arrays have been created successfully by issuing the following command:

  cat /proc/mdstat

This should print the status of all RAID arrays, similar to this:

  Personalities : [linear] [raid0] [raid1] [raid5]
  read_ahead 1024 sectors
  md2 : active raid1 hdb2[1] hda2[0]
        44997952 blocks [2/2] [UU]
        [>...................]  resync =  0.7% (320788/44997952) finish=37.1min speed=20049K/sec
  md1 : active raid1 hdb1[1] hda1[0]
        30592 blocks [2/2] [UU]
  unused devices: <none>

Each disk/partition that makes up an array should be up (reported as [UU]), and the big array will be resyncing. Looking at mdstat repeatedly, the resync percentage should rise.

Configurations with more than two disks

If more than two disks are present many other disk schemes are usable. With three disks, possible configurations are:

1. Create RAID 1 arrays over two disks and use the third as a hot spare.
2. Create a small RAID 1 array for boot (using two disks) and a big RAID 5 array over all three disks.

In the first case create two partitions, one small and one big, on every disk, then configure mirroring between the first two disks; the third disk will not be used until one disk suffers a crash and the hot spare is activated. In the second case it is best to partition all disks with the same scheme, one small partition and one big, then create a RAID 1 array using the first partition of the first two disks and a RAID 5 array using the second partition of each disk:

  /dev/md1 using /dev/hda1, /dev/hdb1
  /dev/md2 using /dev/hda2, /dev/hdb2 and /dev/hdc2

Note: The boot partition must be a RAID 1 array; lilo currently has no support for booting from a RAID 5 array.

With four disks, possible configurations are:

3. Create RAID 1 arrays over the first pair of disks as before, then create another RAID 1 volume using the second pair (one big partition over the third and fourth disk).

4. As configuration #2, plus the fourth disk as a hot spare.
5. Create a small RAID 1 array from two disks, then a RAID 5 array over all four disks.

And so on with more disks. In Appendix A: /etc/raidtab examples there are examples of the /etc/raidtab file for the first two cases (RAID 1 + hot spare, RAID 1 + RAID 5).

LVM disk organization

For detailed information on LVM consult the LVM man pages and the official LVM HOWTO (see the links in section General RAID and LVM info). LVM maps disk partitions or RAID arrays to physical volumes (one to one), so the first step when configuring an LVM-enabled machine is to create physical volumes. Physical volumes are then organized into volume groups: a volume group is a group of physical volumes and of logical volumes that are related to each other. Each volume group can hold one or more logical volumes: think of a logical volume as a "dynamic" partition (Microsoft Windows 2000 Server calls this a "Dynamic Volume"). For an example of a situation where more than one volume group is advisable look at Appendix B: more than one volume group.

Start LVM

Before creating or accessing physical volumes and volume groups the LVM subsystem must be activated: this is done by using:

  vgscan

This should produce the following output:

  vgscan -- reading all physical volumes (this may take a while...)
  vgscan -- "/etc/lvmtab" and "/etc/lvmtab.d" successfully created
  vgscan -- WARNING: This program does not do a VGDA backup of your volume group

Note: It is normal that /etc/lvmtab is created the first time vgscan is invoked after each boot.

Create LVM physical volume(s)

Before creating volume groups, physical volumes have to be created: this is done using the pvcreate command. To create the /dev/md2 physical volume issue:

  pvcreate /dev/md2

The following output should be shown:

  pvcreate -- physical volume "/dev/md2" successfully created

Create LVM volume group(s)

Now it is time to create the required volume groups. In this case just one volume group can be created: there is only one physical volume available. Create the volume group with:

  vgcreate -A n vg00 /dev/md2

This creates the volume group vg00 and allocates the physical volume /dev/md2 to it. The following output should result:

  vgcreate -- INFO: using default physical extent size 4.00 MB
  vgcreate -- INFO: maximum logical volume size is 255.99 Gigabyte
  vgcreate -- WARNING: you don't have an automatic backup of "vg00"
  vgcreate -- volume group "vg00" successfully created and activated

Note: The "-A n" option is used to prevent the auto-backup of the volume group configuration, which would be written to /etc/lvmtab.d. Remember that the current /etc is on a RAM disk, so backing up the configuration would be useless and, much worse, after a few backups the RAM disk would fill up. Should this ever happen it would be sufficient to delete all *.old files in /etc/lvmtab.d, but the "-A n" option prevents it.

Note: If more than one physical volume is to be put into the volume group, this can be done at volume group creation by listing all the physical volumes on the vgcreate command line, or later by using vgextend (look at the related man pages).

Note: The name vg00 is arbitrary, but if it is changed (e.g. to vg0 or vgroot) all subsequent commands related to this volume group or its logical volumes must be changed accordingly.

Choose file systems structure

Before proceeding with logical volume creation a file system structure has to be chosen: while it is possible to install over a single "big" root file system, this is not recommended, since that solution has several disadvantages:

- Almost anything can saturate the root file system, and so cause a system lock or crash.
- No separation between data and binaries or between different kinds of data.
- All files in the system are held by a single file system, which makes the whole system much more vulnerable to file system corruption.

To avoid all these problems, use different file systems for the main folders held under root. Below is a "decent" file system structure with some possible values for file system sizes:

  / (root)   128 MB
  /boot       25 MB
  /home      128 MB or more (depends on data to be held)
  /opt        16 MB or more (depends on packages installed)
  /tmp       128 MB
  /usr       256 MB or more (depends on packages installed)
  /var       128 MB or more (depends on packages installed, logs created and data to be held)
  swap       128 MB or more (depends on RAM and usage)

Of course this is just an indication and everybody is free to adopt a custom solution.

Create LVM logical volumes

Now the logical volumes have to be created. It is necessary to create one logical volume for each file system (except /boot, which will be held on /dev/md1) plus one logical volume for swap space. Logical volumes are created using lvcreate. Referring to the file system structure in Choose file systems structure, here are the commands for creating such logical volumes:

  lvcreate -A n -L 128 -n root vg00
  lvcreate -A n -L 128 -n home vg00
  lvcreate -A n -L 16 -n opt vg00
  lvcreate -A n -L 128 -n tmp vg00
  lvcreate -A n -L 256 -n usr vg00
  lvcreate -A n -L 128 -n var vg00
  lvcreate -A n -L 128 -n swap vg00

For each lvcreate command an output similar to the following should be shown:

  lvcreate -- WARNING: you don't have an automatic backup of "vg00"
  lvcreate -- logical volume "/dev/vg00/root" successfully created

It is possible to verify that the logical volumes were created correctly by issuing:

  vgdisplay -v vg00 | more

The following output should be displayed:

  --- Volume group ---
  VG Name               vg00
  VG Access             read/write
  VG Status             available/resizable
  VG #                  0
  MAX LV                255
  Cur LV                7
  Open LV               0
  MAX LV Size           255.99 GB
  Max PV                255
  Cur PV                1

  Act PV                1
  VG Size               42.91 GB
  PE Size               4.00 MB
  Total PE              10984
  Alloc PE / Size       228 / 912.00 MB
  Free  PE / Size       10756 / 42.02 GB
  VG UUID               W2R8ko-px88-WoJ1-9KV8-syTX-tBqh-My6YjD

  --- Logical volume ---
  LV Name               /dev/vg00/root
  VG Name               vg00
  LV Write Access       read/write
  LV Status             available
  LV #                  1
  # open                0
  LV Size               128.00 MB
  Current LE            32
  Allocated LE          32
  Allocation            next free
  Read ahead sectors    120
  Block device          58:0
  ...

Look also at the /dev/vg00 directory with:

  ls -l /dev/vg00

There should be one node for each logical volume, like this:

  crw-r-----  1 root  disk  109, 0 Jan  1 22:42 group
  brw-rw----  1 root  root   58, 1 Jan  1 22:58 home
  brw-rw----  1 root  root   58, 2 Jan  1 22:58 opt
  brw-rw----  1 root  root   58, 0 Jan  1 22:47 root
  brw-rw----  1 root  root   58, 6 Jan  1 22:58 swap
  brw-rw----  1 root  root   58, 3 Jan  1 22:58 tmp
  brw-rw----  1 root  root   58, 4 Jan  1 22:58 usr
  brw-rw----  1 root  root   58, 5 Jan  1 22:58 var

Create and initialize swap space

Next the swap space has to be created and activated: for this step the mkswap and swapon commands are used. Create the swap space:

  mkswap /dev/vg00/swap

Output similar to this should result:

  Setting up swapspace version 1, size = 134213632 bytes

Then activate the swap space:

  swapon /dev/vg00/swap

This command has no output. Finally verify the swap space status:

  cat /proc/swaps

The file should look like this:

  Filename         Type       Size    Used  Priority
  /dev/vg00/swap   partition  131064  0     -1

Create file systems

At this point it is possible to create the file systems. The file systems can be of any type supported by the kernel that will be installed; common choices are ext2, ext3 or reiserfs. ext3 and reiserfs are journaled file systems, so they are more robust than ext2; on the other hand they are slower and the journal takes up some space. The commands shown here create ext3 file systems, except for the boot file system, which will be ext2. To create the file systems use these commands:

  mke2fs /dev/md1
  mke2fs -j /dev/vg00/root
  mke2fs -j /dev/vg00/home
  mke2fs -j /dev/vg00/opt
  mke2fs -j /dev/vg00/tmp
  mke2fs -j /dev/vg00/usr
  mke2fs -j /dev/vg00/var

For each command an output similar to the following should be displayed:

  mke2fs 1.27 (8-Mar-2002)
  Filesystem label=
  OS type: Linux
  Block size=1024 (log=0)
  Fragment size=1024 (log=0)
  32768 inodes, 131072 blocks
  6553 blocks (5.00%) reserved for the super user
  First data block=1
  16 block groups
  8192 blocks per group, 8192 fragments per group
  2048 inodes per group
  Superblock backups stored on blocks:
          8193, 24577, 40961, 57345, 73729

  Writing inode tables: done
  Creating journal (4096 blocks): done
  Writing superblocks and filesystem accounting information: done

  This filesystem will be automatically checked every 31 mounts or
  180 days, whichever comes first.  Use tune2fs -c or -i to override.
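As a side note, not part of the original procedure: the periodic checks mentioned in the last lines of the mke2fs output can be tuned with tune2fs once the system is installed. Purely as an illustration:

  # example only: disable the mount-count and time-based forced checks on /var
  tune2fs -c 0 -i 0 /dev/vg00/var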

Create mount points and mount target file systems

The final step necessary before proceeding with the "normal" installation is to create the mount points and mount the file systems under /target. First the root file system has to be mounted on /target:

  mount /dev/vg00/root /target

Then the mount points have to be created:

  mkdir /target/boot
  mkdir /target/home
  mkdir /target/opt
  mkdir /target/tmp
  mkdir /target/usr
  mkdir /target/var

Finally the remaining file systems have to be mounted on the corresponding mount points:

  mount /dev/md1 /target/boot
  mount /dev/vg00/home /target/home
  mount /dev/vg00/opt /target/opt
  mount /dev/vg00/tmp /target/tmp
  mount /dev/vg00/usr /target/usr
  mount /dev/vg00/var /target/var

Verify that all mounts succeeded with:

  mount

This should be the result:

  /dev/ram0 on / type ext2 (rw)
  /proc on /proc type proc (rw)
  /dev/ram1 on /ext1 type ext2 (rw)
  /dev/vg00/root on /target type ext3 (rw)
  /dev/fd0 on /floppy type vfat (rw)
  /dev/md1 on /target/boot type ext2 (rw)
  /dev/vg00/home on /target/home type ext3 (rw)
  /dev/vg00/opt on /target/opt type ext3 (rw)
  /dev/vg00/tmp on /target/tmp type ext3 (rw)
  /dev/vg00/usr on /target/usr type ext3 (rw)
  /dev/vg00/var on /target/var type ext3 (rw)

Installing: kernel, RAM disk and packages

This section of the document illustrates the steps necessary to complete the installation: installing a suitable kernel and a RAM disk, configuring the system so that it can boot, and installing all needed packages.

Return to main install menu

If any floppy has been mounted, unmount it, then exit from the shell and return to the main menu.

Install a kernel with RAID and LVM support

This is one of the most important steps of the whole process: to be able to boot with a root-on-lvm-on-raid configuration a RAID and LVM enabled kernel is needed. Before installing the kernel there is a question that needs an answer: to module or not to module? More seriously: should RAID and LVM support be compiled statically into the kernel or loaded as external modules? Both solutions are feasible, but each one has advantages and disadvantages that will be examined.

Boot process overview

To boot a system with root-on-lvm-on-raid (or just with root-on-lvm) a RAM disk is needed: this is necessary because the root file system resides in a volume group, so to be able to access the root file system LVM must first be started. The commands (and possibly modules) needed to accomplish this task are stored in a RAM disk, which is automatically loaded and used by the kernel. When booting a system with root-on-lvm-on-raid the following steps are performed:

- LILO (or another boot loader with RAID support) is loaded.
- The kernel image is loaded. If RAID support is built into the kernel, the RAID arrays are auto-detected and started.
- The RAM disk image with LVM support is loaded and uncompressed. The RAM disk is mounted as root.
- If RAID and LVM support is not built into the kernel, the corresponding modules must be included in the RAM disk and loaded. If RAID support has been loaded as a module, the RAID arrays must be started manually (the RAM disk must be modified).
- LVM is started. Volume groups are detected and activated.
- The "real" root file system over LVM is mounted read-only as the new root file system. The RAM disk is unmounted.
- LVM is started again (root has changed).
- The root file system is checked for errors. If no errors are found the root file system is remounted read-write.

From this point on the boot process continues as normal (modules are loaded, the remaining file systems are mounted, ...).

RAID and LVM support loaded as a module vs statically linked

The first solution, modular support, has one great advantage: the stock bf2.4 kernel shipped with the Debian "Woody" 3.0 CD has RAID and LVM compiled as modules, so there is no need to compile a custom kernel. Moreover the kernel can be installed from the main menu. The main disadvantage of this solution is that with RAID support loaded as a module, RAID auto-detection does not work. To work around this problem the RAM disk must be modified to make it start the RAID arrays. The real problem is that the array configuration has to be included in the RAM disk, and thus the RAM disk must be modified for each single system that is to be installed. This is not very good if several different systems are to be installed. Even worse, in some cases if the array configuration is changed, for example an array is added, the RAM disk must be updated.

The second solution, a custom kernel with built-in RAID and LVM support, has working RAID array auto-detection and does not require manual RAM disk modifications. The drawback is that kernel compilation is required and that the kernel has to be installed manually.

Install stock bf2.4 Debian kernel with modular RAID and LVM support

To install the stock bf2.4 kernel choose "Install Kernel and Driver Modules" from the main install menu and then "Configure Device Driver Modules" to configure any required module.

Install custom kernel with statically compiled RAID and LVM

It is not possible to install a custom kernel from the install menu: it is necessary to perform this step from the command line. Network access, along with some tools like ftp or wget, will probably be needed to transfer the kernel image to the system. For all these reasons installation of a custom kernel has to be performed later: the necessary steps are reported in section Install custom kernel with statically compiled RAID and LVM.

Configure the network

From the main install menu choose "Configure the Hostname" and type a name for the system, then choose "Configure the Network" and configure the network parameters as required (use DHCP or configure them manually).

Install base system

From the main install menu choose "Install the Base System" to install the base Debian packages. Wait until all required packages are installed. Ignore the warning "No swap partition found".

Chroot into target file system

Note: To perform the following steps another shell has to be opened. This time it is better not to open it from the main menu but to use tty2: usually pressing ALT-F2 brings up the second console. If the first console (ttyp0) is used, some commands (e.g. apt-setup) fail, stating something like "Setting locale failed" (at least with Debian "Woody" 3.0). It is still possible to use the first console by issuing:

  export TERM=vt100

before the offending command.

Open a new shell (see the previous note) and then chroot a shell into /target:

  chroot /target /bin/sh

Then remount proc:

  mount -t proc proc /proc

Configure APT

Some additional packages have to be installed: the best way to do this is by using apt-get. Before using this utility APT must be configured. Type:

  apt-setup

and configure the package sources (CDROM or ftp sites). Since some required packages (e.g. lvm10) are not present on the first two Debian CDROMs it is better to configure at least one APT ftp source, to be able to install these packages over the network. This, of course, is possible only if network access is available during install. If for some unfortunate reason the system cannot be made to access the network during install (e.g. some strange network card not supported by the stock bf2.4 kernel, not even as a module, not even as an external custom module, not even by passing some weird startup parameter to the kernel while booting, ...), some packages (e.g. lvm10) will have to be installed manually using the dpkg command. This can be a little tricky: lvm10 depends on other packages and these must be installed in the correct order. In section Install RAID and LVM packages the exact package list and order of dependencies is given along with the manual install procedure.

Note: Usually the Debian install executes apt-setup after the first reboot. Executing it this early can lead to another problem: at this stage the CDROM is mounted on /instmnt, and the device used for the CDROM may not be /dev/cdrom but, for example, /dev/hdd. If this is the case, specifying /dev/cdrom when adding a CDROM as a package source will probably not work: use the actual device (it can be guessed using mount).
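For example (illustrative output only; the actual device varies from system to system, and grep may or may not be available in the install shell, in which case just read the full mount output):

  # sketch: find out which device the install CD is mounted from
  mount | grep instmnt
  /dev/hdd on /instmnt type iso9660 (ro)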

Install custom kernel with statically compiled RAID and LVM

Skip this step if the stock bf2.4 kernel has already been installed on the system. To install a custom kernel it first has to be acquired or compiled. Below are links to an already compiled kernel; the steps for compilation are shown in the next section. A Debian package (.deb) with a 2.4.18 kernel that has RAID 1, RAID 5 and LVM support statically compiled in, and devfs support enabled, can be downloaded from this link:

  http://www.midhgard.it/files/lvm/kernel/kernel-image-2.4.18-lvm_1.0_i386.deb

The kernel has been compiled for the i386 architecture. This kernel package has been compiled on a Debian system using the make-kpkg command. This is the .config file used to compile it:

  http://www.midhgard.it/files/lvm/kernel/config-2.4.18-lvm

A tarball containing the kernel image, system map and modules can be downloaded here; it can be used on distributions other than Debian:

  http://www.midhgard.it/files/lvm/kernel/kernel-image-2.4.18-lvm_1.0_i386.tgz

Once the kernel package or binaries have been copied/compiled onto the system the kernel can be installed.

If the kernel has been acquired as a Debian package:

Install the kernel:

  dpkg -i kernel-image-2.4.18-lvm_1.0_i386.deb

Ignore any warning about the need to reboot urgently. The install script will ask a few questions about lilo: do not have it create the symbolic link (it would create it in /), do not write the configuration to disk now, and do not wipe out the lilo configuration. Create a symbolic link from the kernel image to /boot/vmlinuz:

  ln -s /boot/vmlinuz-2.4.18-lvm /boot/vmlinuz

If the kernel has been acquired as a tarball:

Unpack the kernel:

  tar -zxvf kernel-image-2.4.18-lvm_1.0_i386.tgz

Move vmlinuz-2.4.18-lvm and config-2.4.18-lvm into /boot:

  mv vmlinuz-2.4.18-lvm config-2.4.18-lvm /boot/

Move the module files into /lib/modules/:

  mv 2.4.18-lvm /lib/modules/

Create symbolic links:

  ln -s /boot/system.map-2.4.18-lvm /boot/system.map
  ln -s /boot/config-2.4.18-lvm /boot/config
  ln -s /boot/vmlinuz-2.4.18-lvm /boot/vmlinuz

Custom kernel compilation

To compile a custom kernel it is necessary to download the source, unpack it, configure it or copy a configuration file, and then run the compile. While it is possible to compile the kernel "on the fly" during installation, this requires installing some additional packages, at least on Debian. It is better to compile the kernel on an already installed system, then pack the binaries and transfer them to the system that is being installed. In order to have a kernel able to boot a RAID or LVM system there are a few options that must be set; these are:

- Multiple device support (CONFIG_MD=y)
- RAID device support (CONFIG_BLK_DEV_MD=y)
- RAID 0 support, if using RAID 0 arrays (CONFIG_MD_RAID0=y)
- RAID 1 support, if using RAID 1 arrays (CONFIG_MD_RAID1=y)
- RAID 5 support, if using RAID 5 arrays (CONFIG_MD_RAID5=y)
- LVM support, if using LVM (CONFIG_BLK_DEV_LVM=y)

Moreover, since a RAM disk has to be loaded, the following options must be set:

- Loopback device support (CONFIG_BLK_DEV_LOOP=y)
- RAM disk support (CONFIG_BLK_DEV_RAM=y)
- Maximum RAM disk size, default is 4096 (4 MB), do not change this (CONFIG_BLK_DEV_RAM_SIZE=4096)
- Initial RAM disk support (CONFIG_BLK_DEV_INITRD=y)

Optionally devfs support can be compiled into the kernel. devfs is, in kernel 2.4.18, an experimental feature: it is a virtual /dev file system (like /proc) that replaces the classical /dev file system. The main advantages of devfs are that devices are dynamically registered as needed, and only for devices that are effectively used by the system. For more information look at the devfs documentation in the kernel source.

- devfs support (CONFIG_DEVFS_FS=y)
- Automatically mount devfs at boot (CONFIG_DEVFS_MOUNT=y)

If devfs support is enabled, it is recommended that devfsd is installed; look at section Optional: install devfsd.
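With the options above set, the kernel can be built and packaged. Purely as a sketch (not part of the original text; the source path and the -lvm suffix are illustrative, chosen to match the package name used above), on a Debian build host with kernel-package installed the build might look like this:

  cd /usr/src/linux-2.4.18
  cp /path/to/config-2.4.18-lvm .config    # the .config linked above, or your own
  make oldconfig                           # confirm the options listed above
  # set EXTRAVERSION = -lvm in the top-level Makefile so the version matches
  make-kpkg clean
  make-kpkg --revision=1.0 kernel_image    # produces kernel-image-2.4.18-lvm_1.0_i386.deb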

Kernel sources can be downloaded from The Linux Kernel Archives (http://www.kernel.org) or installed with distribution specific means (deb packages for Debian, rpm packages for Red Hat). For more information on kernel compilation look at Appendix D: kernel compilation tips.

Install RAID and LVM packages

Some additional packages are needed to support RAID, LVM and RAM disk use. These are:

- lvm10: current stable version 1.0.4-4
- initrd-tools: current stable version 0.1.24
- raidtools2: current stable version 0.90.20010914-15
- mdadm: current stable version 0.7.2-2

Package lvm10 depends on lvm-common and file, lvm-common depends on binutils, and initrd-tools depends on cramfsprogs. The full dependencies are more complex; only the packages missing after a Debian base install are indicated here.

Note: If apt has been configured to use a CDROM as a source for packages, manually mount the CDROM on /cdrom in the chrooted environment before using apt-get, because at this stage of the install process auto-mounting does not work. The device for the CDROM will probably not be /dev/cdrom, as explained in section Configure APT. Use:

  mount /dev/hdd /cdrom

to mount the CDROM (change the device if necessary).

If APT has been correctly configured and network access is available (look at section Configure APT), to install these packages use:

  apt-get install lvm10 initrd-tools raidtools2 mdadm

The mdadm package has to be configured: answer "Yes" when asked whether the RAID monitor daemon should be started, then configure an e-mail address to be notified when a disk failure event happens.

If APT cannot be used, manual package installation is required. For each package file:

  dpkg -i package_filename.deb

The following packages must be installed:

- lvm10
- raidtools2
- initrd-tools
- mdadm

- ash
- binutils
- cramfsprogs
- file
- lvm-common
- zlib1g

Optional: install devfsd

If a custom kernel with devfs support enabled has been installed, it is recommended to install devfsd. devfsd is a daemon that automatically handles the registration and removal of device entries in devfs. To install devfsd use:

  apt-get install devfsd

Warning! Do not perform this step if the stock bf2.4 kernel has been installed, since it has no devfs support.

Start LVM (reprise)

Since the current shell is chrooted, the LVM subsystem has to be re-started in the new environment: vgscan has to be used again. It will state again that it is creating /etc/lvmtab.

Note: This step is necessary before the lilo configuration can be written to disk, so it must be executed before section Write lilo configuration to disk. It could not be executed earlier because, prior to installing the lvm10 package, the vgscan command is not available in the chrooted environment.

Install RAM disk with LVM support

As with the kernel installation, the RAM disk installation is one of the most important steps in successfully installing (and rebooting!) a root-on-lvm system. In sections Boot process overview and RAID and LVM support loaded as a module vs statically linked an overview of the problems related to the RAM disk and kernel has already been presented. This section shows in more detail the steps necessary to install a suitable RAM disk; the steps necessary to create the RAM disk are shown here as well.

Note: RAM disk creation can be a complicated matter (at least the author of this document hasn't found any way to make it simpler ;). If the option of downloading a RAM disk is chosen, skip those sections of the document.

Install RAM disk for kernel bf2.4

Download this RAM disk (http://www.midhgard.it/files/lvm/initrd/bf2.4/initrd.gz) or create one as explained in section RAM disk for bf2.4 kernel: requirements and creation, then copy it to /boot on the system that is being installed and create a symbolic link /boot/initrd to it:

  ln -s /boot/initrd.gz /boot/initrd

Install RAM disk for custom kernel

Download this RAM disk (http://www.midhgard.it/files/lvm/initrd/custom/initrd.gz) or create one as explained in section RAM disk for custom kernel: requirements and creation, then copy it to /boot on the system that is being installed and create a symbolic link /boot/initrd to it:

  ln -s /boot/initrd.gz /boot/initrd

RAM disk for custom kernel: requirements and creation

The requirements for a RAM disk to be used with a kernel that has RAID and LVM support statically compiled are presented first, since they are simpler than those for a kernel with modular support. With this kind of kernel, RAID array autodetection and autostart at boot work. Moreover the RAM disk must not contain modules for RAID and LVM support (they are already in the kernel!). The role of the RAM disk in this case is limited to starting the LVM subsystem (vgscan) and activating the volume group(s) (vgchange). The requirements are as follows:

- The RAM disk must contain the commands vgscan and vgchange.
- The RAM disk must contain the commands mount and umount.
- Any library needed by the previous commands, particularly liblvm1.0.so.1.
- A linuxrc file that issues the commands necessary to activate LVM and the volume group(s).

The linuxrc file must be like the one shown below; the same file can be downloaded here (http://www.midhgard.it/files/lvm/initrd/custom/linuxrc):

  #!/bin/sh
  /bin/mount /proc
  /sbin/vgscan
  /sbin/vgchange -a y
  /bin/umount /proc

The RAM disk can be created by hand (modify an existing one, or create it with mkinitrd and then modify it) or it can be created automatically using lvmcreate_initrd: this command creates a RAM disk that meets the above requirements and puts it into /boot. It is suggested that RAM disk creation is performed before installing the would-be RAID system (on another already installed system) or that a ready-made RAM disk (http://www.midhgard.it/files/lvm/initrd/custom/initrd.gz) is used. The RAM disk can be created during the install process, but this can be a little tricky: lvmcreate_initrd tries to put into the RAM disk modules from the directory /lib/modules/kernel_version, where kernel_version is the kernel used for boot. If the command is used during the installation, the same kernel used for boot (e.g. bf2.4) must already have been installed on the system, even if later the system will be booted with another kernel (a custom one). E.g. the Debian install is booted using the 2.4.18-bf2.4 kernel; if lvmcreate_initrd is called during install, depmod will state that it cannot find modules.dep. To be able to create the RAM disk with this command the bf2.4 kernel must first be installed.

To modify an existing RAM disk or to examine one:

- If the RAM disk is compressed, uncompress it with:

    gzip -cd /boot/initrd.gz > /boot/initrd_unc

  This uncompresses the RAM disk into a copy: using gzip -d the compressed file would be removed, and it would be necessary to generate it again with a gzip command. If these steps are performed on an already installed system remember to run lilo afterwards.

- Mount the RAM disk with:

    mount -o loop /boot/initrd_unc /initrd

- Examine or modify the RAM disk in /initrd.

- Unmount the RAM disk with:

    umount /initrd

- If necessary compress the RAM disk again:

    gzip /boot/initrd_unc

RAM disk for bf2.4 kernel: requirements and creation

Read the previous section for the RAM disk requirements for static RAID support: this section illustrates only the differences from that case. The same goes for the commands necessary for RAM disk creation. Since the kernel has no built-in RAID and LVM support, the modules for it must be included in the RAM disk:

- md.o

- raid1.o
- lvm-mod.o

With RAID and LVM support compiled as modules, RAID autodetection will not work at boot. This requires that the RAM disk contains not only the commands for starting LVM but also those for starting the RAID arrays (mdadm or raidstart). Finally the linuxrc file must include commands to load the modules and to start the RAID arrays. Below is shown a modified script; the same file can be downloaded here (http://www.midhgard.it/files/lvm/initrd/bf2.4/linuxrc):

  #!/bin/sh
  /sbin/modprobe md
  /sbin/modprobe raid1
  /sbin/modprobe lvm-mod
  /sbin/mdadm -A -R /dev/md1 /dev/hda1 /dev/hdb1
  /sbin/mdadm -A -R /dev/md2 /dev/hda2 /dev/hdb2
  /bin/mount /proc
  /sbin/vgscan
  /sbin/vgchange -a y
  /bin/umount /proc

As can easily be noticed, the script depends on the disk and array configuration, so it must be adapted for each system: an mdadm command must be added for each RAID array.

To modify the RAM disk:

- If the RAM disk is compressed, uncompress it.
- Mount the RAM disk over /initrd.
- Add the required modules to the RAM disk:

    cp /lib/modules/2.4.18-bf2.4/kernel/drivers/md/lvm-mod.o /initrd/lib/modules/2.4.18-bf2.4/
    cp /lib/modules/2.4.18-bf2.4/kernel/drivers/md/md.o /initrd/lib/modules/2.4.18-bf2.4/
    cp /lib/modules/2.4.18-bf2.4/kernel/drivers/md/raid1.o /initrd/lib/modules/2.4.18-bf2.4/

- Modify or copy linuxrc as shown above to include the calls to mdadm.
- Add mdadm to the RAM disk:

    cp /sbin/mdadm /initrd/sbin/mdadm

- Unmount the RAM disk with:

    umount /initrd

- If necessary compress the RAM disk again.

Modify configuration files

Some configuration files have to be copied into /etc (the "real" /etc, not the one in the RAM disk).

/etc/raidtab

The file /etc/raidtab is needed for RAID array management; create or copy it (look at section Create RAID arrays).

Note: In the chrooted shell the RAM disk is not accessible. To copy the raidtab file from the RAM disk use a non-chrooted shell:

  cp /etc/raidtab /target/etc/

To do this it is possible to exit from the chrooted shell and then reopen it. Otherwise remount the floppy with all the config files in the chrooted shell and copy raidtab from the floppy.

/etc/fstab

File Systems Table: the Debian install does not detect file systems over LVM, so the /etc/fstab created by the install process will be empty; the file must be created by hand or copied. An example /etc/fstab file can be downloaded here (http://www.midhgard.it/files/lvm/fstab); the same file is shown below. Modify it to reflect the current configuration: boot RAID device, logical volume names or numbers.

  # /etc/fstab: static file system information.
  #
  # <file system>  <mount point>  <type>    <options>          <dump> <pass>
  /dev/vg00/root   /              ext3      errors=remount-ro  0      1
  /dev/vg00/swap   none           swap      sw                 0      0
  proc             /proc          proc      defaults           0      0
  /dev/fd0         /floppy        auto      user,noauto        0      0
  /dev/cdrom       /cdrom         iso9660   ro,user,noauto     0      0
  /dev/md1         /boot          ext2      defaults           0      2
  /dev/vg00/home   /home          ext3      defaults           0      2
  /dev/vg00/opt    /opt           ext3      defaults           0      2
  /dev/vg00/tmp    /tmp           ext3      defaults           0      2
  /dev/vg00/usr    /usr           ext3      defaults           0      2
  /dev/vg00/var    /var           ext3      defaults           0      2

/etc/lilo.conf

To make the system bootable lilo has to be configured. Since version 22.0 lilo's RAID support has been made more powerful. Taken from the lilo changelog:

  Changes from version 21.7.5 to 22.0 (29-Aug-2001) John Coffman [released 9/27]

  Boot Installer
  --------------
  - RAID installations now create a single map file, install the boot
    record on the RAID partition, install auxiliary boot records only
    on MBRs if needed, except BIOS device 0x80.

    Backward compatibility is possible with new config-file and command
    line options (raid-extra-boot= or -x switch). Even with stored boot
    command lines (-R, lock, fallback), RAID set coherency can be
    maintained.

To have lilo boot from a RAID system use:

  boot=/dev/md1

Change /dev/md1 if the boot device is different. To use a root-on-lvm file system include the line:

  root=/dev/vg00/root

Change /dev/vg00/root if the volume group name or logical volume name are different.

Now the disks whose MBR is to be written by lilo have to be indicated with the option raid-extra-boot; from the lilo.conf man page:

  raid-extra-boot=<option>
    This option only has meaning for RAID1 installations. The <option> may be
    specified as none, auto, mbr-only, or a comma-separated list of devices;
    e.g., "/dev/hda,/dev/hdc6". Starting with LILO version 22.0, the boot
    record is normally written to the first sector of the RAID1 device. Use of
    an explicit list of devices, forces writing of auxiliary boot records only
    on those devices enumerated, in addition to the boot record on the RAID1
    device. Since the version 22 RAID1 codes will never automatically write a
    boot record on the MBR of device 0x80, if such a boot record is desired,
    this is the way to have it written.

So add this line to lilo.conf:

  raid-extra-boot="/dev/hda, /dev/hdb"

Change the disks to reflect the system configuration. Have the kernel mount root read-only by adding this option:

  read-only

Finally specify an image to be loaded at boot with the related RAM disk:

  image=/boot/vmlinuz
    label=linux
    initrd=/boot/initrd

With non-standard disk setups the options disk and bios could be necessary to specify the correspondence between disks and BIOS numbers. Look at the lilo.conf man page for more details. Below is shown an example /etc/lilo.conf; the same file can be downloaded here (http://www.midhgard.it/files/lvm/lilo.conf):

  boot=/dev/md1
  root=/dev/vg00/root
  raid-extra-boot="/dev/hda, /dev/hdb"

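As an illustration of the disk and bios options mentioned above, a hypothetical lilo.conf fragment for a machine whose disks sit on the third and fourth IDE channels; the device names and BIOS codes here are assumptions, check them against the lilo.conf man page and the actual BIOS numbering:

# tell lilo which BIOS device code each disk gets at boot time
disk=/dev/hde
    bios=0x80
disk=/dev/hdg
    bios=0x81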
/etc/modules

The file /etc/modules lists the modules to be loaded at startup. If the stock kernel is being used and module configuration has been performed from the main menu (look at section Install stock bf2.4 Debian kernel with modular RAID and LVM support), this file should already be fine. Just check it with:

cat /etc/modules

If a custom kernel has been installed, manual module configuration will be necessary. If any required module or parameter is missing, add it. An example /etc/modules is shown below:

# /etc/modules: kernel modules to load at boot time.
#
# This file should contain the names of kernel modules that are
# to be loaded at boot time, one per line. Comments begin with
# a "#", and everything on the line after them are ignored.
usb-uhci
input
usbkbd
keybdev

This file can be downloaded here (http://www.midhgard.it/files/lvm/modules).

Write lilo configuration to disk

Last but not least, the lilo configuration has to be written to disk.

Warning! Always remember to do this after any change to /etc/lilo.conf, to the kernel image file or to the RAM disk image. It is very easy to make the system unbootable by forgetting this step.

To write the configuration to disk use:

lilo -v

The -v option makes lilo give some more feedback about what it is doing. An output similar to the following should result:

LILO version 22.2, Copyright (C) 1992-1998 Werner Almesberger
Development beyond version 21 Copyright (C) 1999-2001 John Coffman
Released 05-Feb-2002 and compiled at 20:57:26 on Apr 13 2002.
MAX_IMAGES = 27
Warning: LBA32 addressing assumed
Warning: using BIOS device code 0x80 for RAID boot blocks
Reading boot sector from /dev/md1
Merging with /boot/boot.b
Boot image: /boot/vmlinuz -> /boot/vmlinuz-2.4.18-bf2.4
Mapping RAM disk /boot/initrd -> /boot/initrd-lvm-2.4.18-bf2.4.gz
Added Linux *
Backup copy of boot sector in /boot/boot.0901
Writing boot sector.
The boot record of /dev/md1 has been updated.
Reading boot sector from /dev/hde
Backup copy of boot sector in /boot/boot.2100
Writing boot sector.
The boot record of /dev/hde has been updated.
Reading boot sector from /dev/hdg
Backup copy of boot sector in /boot/boot.2200
Writing boot sector.
The boot record of /dev/hdg has been updated.

Exit from chrooted environment

If a cdrom or a floppy has been mounted manually, unmount them (verify with the mount command). Also unmount the remounted /proc:

umount /cdrom
umount /floppy
umount /proc

There are no other steps to be performed from the chrooted environment: exit to the original shell.

Return to main install menu

Note: Before exiting and rebooting it may be worth taking a look at /proc/mdstat: if the newly created arrays are still resyncing, rebooting will make the resync process restart. While this is not a problem, it can be a waste of time: look at the ETA and decide whether it is best to wait or to reboot (see the sketch below for an easy way to keep an eye on it).

Again: exit to the main install menu.
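A simple way to watch the resync mentioned in the note above from the non-chrooted shell; a sketch using only the shell and cat, since the install environment may not have fancier tools:

# print the array status every 30 seconds; stop with Ctrl-C once the resync is done
while true; do cat /proc/mdstat; sleep 30; done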

Reboot the system

From the menu choose "Reboot the system". As soon as the machine reboots, remove the Debian CDROM and any floppy, so that the system boots from the hard disk. Look carefully at the boot messages: if everything is ok the system will complete the boot sequence by loading the RAM disk, mounting the root file system and then all the other file systems. Look at section Boot process overview for more information. Look at Appendix E: boot messages for some examples of successful boot messages.

If some nasty message is printed (something like "Kernel Panic") and the boot process stops, don't despair: if for any reason the system is unable to load the kernel or the RAM disk, or to start LVM and mount the root file system, it is always possible to boot again from the installation media and use the LVM and RAID extension disk as a rescue disk, activating the volume group(s) and mounting the file systems by hand. If this is the case, write down any error message, then boot from the install media and try to correct the problem. Look at Appendix C: troubleshooting tips for detailed troubleshooting tips.

Terminate installation

When the system reboots the Debian install process will complete as usual. The command:

base-config

will be run automatically. This tool is Debian specific and will configure the time zone, password management, normal user accounts and APT, and run tasksel and dselect to install additional packages.

After installing

There are still a few tasks that are better performed before declaring the installation complete.

Look at boot messages

If not already done, examine the boot messages carefully, particularly these sections: RAID array detection, RAM disk loading, LVM subsystem activation, root file system mounting, module loading, other file systems mounting (a quick way to do this from a shell is sketched below). If any strange error messages or warnings are displayed, investigate them. Look at Appendix E: boot messages for some examples of successful boot messages. Look at Appendix C: troubleshooting tips for troubleshooting tips.
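A quick way to pull the interesting lines back out of the kernel log after logging in (a sketch; on a busy system older messages may already have scrolled out of the kernel ring buffer, in which case look in the log files written by klogd/syslogd instead):

# full kernel boot output, page by page
dmesg | less
# only the RAID, LVM and file system related lines
dmesg | grep -iE 'raid|md[0-9]|lvm|ext3'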

Rescue floppies

A valid substitute for the rescue floppies is the Debian "Woody" 3.0 cdrom plus the RAID and LVM extension disk. It is also advisable to keep at hand the floppy with the scripts used to install the system, along with the disk and logical volume configuration. Look at section Required Software for pointers to the extension disk. Look at Appendix C: troubleshooting tips for recovery instructions in case of a crash.

Make a crash test

It is better to discover that this new super-fault-tolerant-unbreakable system cannot boot when a hard disk fails before the system goes into production (and before a real crash, too). This is also a good time to check the rescue floppies, practice with RAID disk faults and write down a Disaster Recovery procedure.

Warning! Before going on with the following tests, beware that they involve possible data loss and array reconstruction. So, if any valuable information has already been put on the system, back it up. Moreover, consider that array reconstruction can be a lengthy process. Always disconnect the system from power while physically working on it!!! You have been warned! ;-)

A common problem is that the system is unable to boot from the second hard disk, for a couple of different reasons ranging from BIOS problems to a mis-configured lilo. A good test to check that the system will boot with a degraded array is to power down the system, disconnect the first hard disk (both power and data cables), then reboot. The RAID subsystem should detect a disk "fault", remove the faulty drive from the arrays and then boot. Moreover, if a hot spare drive had been configured (e.g. /dev/hdc), it should automatically be used to start array reconstruction. In this case, before going on with other tests wait until reconstruction is complete. Array status can be monitored by looking at the /proc/mdstat file.

If no hot spare was used, a second useful test is to power down the system again, reconnect the first disk and then boot. The system should detect that the superblock on the first disk is older than the one on the second disk and keep the first disk from joining the arrays. To put the first disk back into the arrays use the command raidhotadd; with two mirrored hard disks /dev/hda and /dev/hdb, provided that the "faulty" drive was /dev/hda, use:

raidhotadd /dev/md1 /dev/hda1
raidhotadd /dev/md2 /dev/hda2

Again, the reconstruction process should start. If a hot spare disk was used there are two options:

- Leave /dev/hdc in the array and use /dev/hda as the hot spare. Update /etc/raidtab to match this situation (swap the roles of /dev/hda and /dev/hdc).

- Return to the original situation: mark /dev/hdc as faulty with raidhotgenerateerror, then add /dev/hda back to the array with raidhotadd, then wait until reconstruction is complete again.

In both cases reboot to verify correct system functionality.

Warning! Unless your system has hot swap hard disks and hot swap support, do not hot unplug any hard disk from the system while it is running (neither the data cable nor the power cable). This would lead to a complete system lockup. While the first thought could be "What did I install a RAID system for, then?", this behavior is correct, or it may be considered a "feature": a software RAID system is a low cost system meant to protect from a reasonable amount of damage or misfortune, for example hard disk damage limited to some blocks or tracks. If a hard disk suddenly stops responding to commands (as if it were unplugged) the system will lock up, and a manual shutdown, disconnection of the disk and restart will be required. In any case the system will boot up again with the remaining drive(s). It must be clear that this is not a limitation of software RAID, but of the hardware architecture. The same problem would show up with some low cost RAID SCSI cards that do not support hot swap, in case of a serious disk problem.

Appendix A: /etc/raidtab examples

File /etc/raidtab for RAID 1 + hot spare

raiddev /dev/md1
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          1
        chunk-size              32
        persistent-superblock   1
        device                  /dev/hda1
        raid-disk               0
        device                  /dev/hdb1
        raid-disk               1
        device                  /dev/hdc1
        spare-disk              0

raiddev /dev/md2
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          1
        chunk-size              32
        persistent-superblock   1
        device                  /dev/hda2
        raid-disk               0
        device                  /dev/hdb2
        raid-disk               1
        device                  /dev/hdc2
        spare-disk              0
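For reference, arrays described by a raidtab like the one above are created with the raidtools mkraid command, as covered in section Create RAID arrays; shown here only as a reminder, and destructive on the listed partitions:

# reads /etc/raidtab and writes the RAID superblocks (destroys existing data on the partitions)
mkraid /dev/md1
mkraid /dev/md2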

File /etc/raidtab for RAID 1 + RAID 5

raiddev /dev/md1
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          0
        chunk-size              32
        persistent-superblock   1
        device                  /dev/hda1
        raid-disk               0
        device                  /dev/hdb1
        raid-disk               1

raiddev /dev/md2
        raid-level              5
        nr-raid-disks           3
        nr-spare-disks          0
        persistent-superblock   1
        parity-algorithm        left-symmetric
        device                  /dev/hda2
        raid-disk               0
        device                  /dev/hdb2
        raid-disk               1
        device                  /dev/hdc2
        raid-disk               2

Appendix B: more than one volume group

There are cases in which creating more than one volume group is advisable: a database server (Oracle) is a common example. Suppose a server with 4 disks: the first pair of disks holds two RAID 1 arrays (/dev/md1 and /dev/md2) and the second pair holds one RAID 1 array (/dev/md3). /dev/md1 will be used "as-is" for the boot file system, while /dev/md2 and /dev/md3 will each be used as a physical disk. They could be allocated to one volume group or to two different volume groups.

Volume groups should aggregate disks with similar usage: the first volume group (and so the first of the two arrays, /dev/md2) could hold all the "standard" file systems (root, tmp, home, var, usr, opt, ...) along with the database binaries and also the database transaction logs (Redo Logs), while the second volume group (/dev/md3) could hold all the database datafiles (one logical volume per datafile if raw devices are used). This leads to a "logical" separation of data and binaries. Moreover, returning to the Oracle example, by carefully planning how the database structures are laid out over the two volume groups, if one of the groups suffers unrecoverable damage, the information on the other group is sufficient to rebuild an up-to-date database starting from a backup (the command sequence for such a layout is sketched below).
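A minimal sketch of the two-volume-group layout described above, using LVM 1 commands; vg00 matches the name used throughout this HOWTO, while vgdata is an example name invented here:

# register the two arrays as LVM physical volumes
pvcreate /dev/md2
pvcreate /dev/md3
# one volume group for system file systems, binaries and redo logs...
vgcreate vg00 /dev/md2
# ...and a separate one for the database datafiles
vgcreate vgdata /dev/md3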

On the other hand such a division can lead to problems like wasted space: probably all the disks will be the same size, while it is unlikely that binaries and transaction logs are as big as the datafiles, so much space would be wasted in the first volume group. There are also cases where a "logical" division of data and binaries is not required or advisable, so one volume group would be preferable (e.g. disk space is scarce or valuable).

Appendix C: troubleshooting tips

If a RAID and LVM system is no longer bootable, the following steps can be used to diagnose the problem (they are put together in a single sketch below):

Boot with the Debian "Woody" 3.0 cdrom with the bf24 kernel.

Exit to a shell and load the extension disk:

extdisk

Manually start the raid arrays with:

raidstart /dev/md1

Repeat this for all raid arrays. Look for error messages during array start. Look at /proc/mdstat for array status.

Start the LVM subsystem with:

vgscan

Activate the volume groups with:

vgchange -a y vg00

Repeat this for all volume groups.

Check the file systems:

fsck /dev/vg00/root

Repeat this for all logical volumes containing a file system.

Mount the file systems over /target.

chroot into /target.
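Putting the steps above together, after booting the cdrom and loading the extension disk, a sketch of a complete rescue session for the example layout used in this HOWTO (two arrays, one volume group vg00; the device names and the mount points under /target are the ones used earlier and should be adjusted to the actual configuration):

raidstart /dev/md1
raidstart /dev/md2
cat /proc/mdstat              # verify the arrays came up
vgscan                        # let LVM find the volume group(s)
vgchange -a y vg00            # activate the volume group
fsck /dev/vg00/root           # repeat for the other logical volumes
mount /dev/vg00/root /target
mount /dev/md1 /target/boot
chroot /target /bin/sh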

Appendix D: kernel compilation tips

Here is some advice on compiling a kernel on a Debian system:

Install these packages: kernel-package, bzip2, libncurses5-dev.

Uncompress the kernel under /usr/src/kernel-source-<version>.

If a .config file is already available, copy it into the kernel source directory.

Always run make menuconfig (or another kernel configuration command), even if the .config file has been copied, and save the configuration (even if no modifications have been made): this prevents using a config file of an older kernel version with a new one.

Run:

make-kpkg clean

to clean the kernel source directory. This has to be done every time make menuconfig is called, and before the next step.

Run:

make-kpkg --version 1.0 --append-to-version -lvm kernel_image

to compile the kernel and modules and create the kernel package. The kernel package is created in /usr/src.

Appendix E: boot messages

Note: The system these messages refer to has four IDE controllers and two hard disks connected to the third and fourth controller: the resulting devices are /dev/hde and /dev/hdg.

This is an example of kernel boot messages while using the stock bf2.4 kernel:

Linux version 2.4.18-bf2.4 (root@zombie) (gcc version 2.95.4 20011002 (Debian prerelease)) #
BIOS-provided physical RAM map:
BIOS-e820: 0000000000000000-00000000000a0000 (usable)
BIOS-e820: 00000000000f0000-0000000000100000 (reserved)
BIOS-e820: 0000000000100000-000000001fff0000 (usable)
BIOS-e820: 000000001fff0000-000000001fff3000 (ACPI NVS)
BIOS-e820: 000000001fff3000-0000000020000000 (ACPI data)
BIOS-e820: 00000000ffff0000-0000000100000000 (reserved)
On node 0 totalpages: 131056
zone(0): 4096 pages.
zone(1): 126960 pages.
zone(2): 0 pages.
Local APIC disabled by BIOS -- reenabling.
Found and enabled local APIC!
Kernel command line: auto BOOT_IMAGE=Linux ro root=3a00
Initializing CPU#0

Detected 900.068 MHz processor. Console: colour VGA+ 80x25 Calibrating delay loop... 1795.68 BogoMIPS Memory: 511516k/524224k available (1783k kernel code, 12324k reserved, 549k data, 280k init, Dentry-cache hash table entries: 65536 (order: 7, 524288 bytes) Inode-cache hash table entries: 32768 (order: 6, 262144 bytes) Mount-cache hash table entries: 8192 (order: 4, 65536 bytes) Buffer-cache hash table entries: 32768 (order: 5, 131072 bytes) Page-cache hash table entries: 131072 (order: 7, 524288 bytes) CPU: Before vendor init, caps: 0183fbff c1c7fbff 00000000, vendor = 2 CPU: L1 I Cache: 64K (64 bytes/line), D cache 64K (64 bytes/line) CPU: L2 Cache: 256K (64 bytes/line) CPU: After vendor init, caps: 0183fbff c1c7fbff 00000000 00000000 Intel machine check architecture supported. Intel machine check reporting enabled on CPU#0. CPU: After generic, caps: 0183fbff c1c7fbff 00000000 00000000 CPU: Common caps: 0183fbff c1c7fbff 00000000 00000000 CPU: AMD Athlon(tm) Processor stepping 02 Enabling fast FPU save and restore... done. Checking hlt instruction... OK. Checking for popad bug... OK. POSIX conformance testing by UNIFIX enabled ExtINT on CPU#0 ESR value before enabling vector: 00000000 ESR value after enabling vector: 00000000 Using local APIC timer interrupts. calibrating APIC timer...... CPU clock speed is 900.0495 MHz.... host bus clock speed is 200.0110 MHz. cpu: 0, clocks: 2000110, slice: 1000055 CPU0<T0:2000096,T1:1000032,D:9,S:1000055,C:2000110> mtrr: v1.40 (20010327) Richard Gooch (rgooch@atnf.csiro.au) mtrr: detected mtrr type: Intel PCI: PCI BIOS revision 2.10 entry at 0xfb430, last bus=1 PCI: Using configuration type 1 PCI: Probing PCI hardware Unknown bridge resource 0: assuming transparent PCI: Using IRQ router VIA [1106/0686] at 00:07.0 PCI: Disabling Via external APIC routing Linux NET4.0 for Linux 2.4 Based upon Swansea University Computer Society NET3.039 Initializing RT netlink socket Starting kswapd VFS: Diskquotas version dquot_6.4.0 initialized Journalled Block Device driver loaded vga16fb: initializing vga16fb: mapped to 0xc00a0000 Console: switching to colour frame buffer device 80x30 fb0: VGA16 VGA frame buffer device Detected PS/2 Mouse Port. pty: 256 Unix98 ptys configured Serial driver version 5.05c (2001-07-08) with MANY_PORTS SHARE_IRQ SERIAL_PCI enabled ttys00 at 0x03f8 (irq = 4) is a 16550A 37

ttys01 at 0x02f8 (irq = 3) is a 16550A Real Time Clock Driver v1.10e block: 128 slots per queue, batch=32 RAMDISK driver initialized: 16 RAM disks of 4096K size 1024 blocksize Uniform Multi-Platform E-IDE driver Revision: 6.31 ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx VP_IDE: IDE controller on PCI bus 00 dev 39 VP_IDE: chipset revision 16 VP_IDE: not 100% native mode: will probe irqs later ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx VP_IDE: VIA vt82c686a (rev 22) IDE UDMA66 controller on pci00:07.1 ide0: BM-DMA at 0xc000-0xc007, BIOS settings: hda:pio, hdb:pio ide1: BM-DMA at 0xc008-0xc00f, BIOS settings: hdc:pio, hdd:dma HPT370: IDE controller on PCI bus 00 dev 98 PCI: Found IRQ 11 for device 00:13.0 PCI: Sharing IRQ 11 with 00:09.0 HPT370: chipset revision 3 HPT370: not 100% native mode: will probe irqs later ide2: BM-DMA at 0xe000-0xe007, BIOS settings: hde:dma, hdf:pio ide3: BM-DMA at 0xe008-0xe00f, BIOS settings: hdg:dma, hdh:pio hdd: ASUS CD-S400, ATAPI CD/DVD-ROM drive hde: IBM-DTLA-307045, ATA DISK drive hdg: IBM-DTLA-307045, ATA DISK drive ide1 at 0x170-0x177,0x376 on irq 15 ide2 at 0xd000-0xd007,0xd402 on irq 11 ide3 at 0xd800-0xd807,0xdc02 on irq 11 hde: 90069840 sectors (46116 MB) w/1916kib Cache, CHS=89355/16/63, UDMA(44) hdg: 90069840 sectors (46116 MB) w/1916kib Cache, CHS=89355/16/63, UDMA(44) hdd: ATAPI 40X CD-ROM drive, 128kB Cache Uniform CD-ROM driver Revision: 3.12 ide-floppy driver 0.97.sv Partition check: hde: [PTBL] [5606/255/63] hde1 hde2 hdg: hdg1 hdg2 Floppy drive(s): fd0 is 1.44M FDC 0 is a post-1991 82077 Loading I2O Core - (c) Copyright 1999 Red Hat Software I2O configuration manager v 0.04. (C) Copyright 1999 Red Hat Software loop: loaded (max 8 devices) Compaq CISS Driver (v 2.4.5) 8139cp 10/100 PCI Ethernet driver v0.0.6 (Nov 19, 2001) 8139cp: pci dev 00:09.0 (id 10ec:8139 rev 10) is not an 8139C+ compatible chip 8139cp: Try the "8139too" driver instead. 8139too Fast Ethernet driver 0.9.24 PCI: Found IRQ 11 for device 00:09.0 PCI: Sharing IRQ 11 with 00:13.0 eth0: RealTek RTL8139 Fast Ethernet at 0xe081a000, 00:d0:70:00:cd:e4, IRQ 11 eth0: Identified 8139 chip type RTL-8139B HDLC support module revision 1.02 for Linux 2.4 Cronyx Ltd, Synchronous PPP and CISCO HDLC (c) 1994 Linux port (c) 1998 Building Number Three Ltd & Jan "Yenya" Kasprzak. ide-floppy driver 0.97.sv 38

Promise Fasttrak(tm) Softwareraid driver 0.03beta: No raid array found Highpoint HPT370 Softwareraid driver for linux version 0.01 No raid array found SCSI subsystem driver Revision: 1.00 Red Hat/Adaptec aacraid driver, Apr 14 2002 DC390: 0 adapters found 3ware Storage Controller device driver for Linux v1.02.00.016. 3w-xxxx: No cards with valid units found. request_module[scsi_hostadapter]: Root fs not mounted request_module[scsi_hostadapter]: Root fs not mounted i2o_scsi.c: Version 0.0.1 chain_pool: 0 bytes @ c19f44a0 (512 byte buffers X 4 can_queue X 0 i2o controllers) NET4: Linux TCP/IP 1.0 for NET4.0 IP Protocols: ICMP, UDP, TCP, IGMP IP: routing cache hash table of 4096 buckets, 32Kbytes TCP: Hash tables configured (established 32768 bind 32768) NET4: Unix domain sockets 1.0/SMP for Linux NET4.0. RAMDISK: Compressed image found at block 0 Freeing initrd memory: 1165k freed VFS: Mounted root (ext2 filesystem). md: md driver 0.90.0 MAX_MD_DEVS=256, MD_SB_DISKS=27 md: raid1 personality registered as nr 3 LVM version 1.0.1-rc4(ish)(03/10/2001) module loaded [events: 00000004] md: bind<hdg1,1> [events: 00000004] md: bind<hde1,2> md: hde1 s event counter: 00000004 md: hdg1 s event counter: 00000004 md: RAID level 1 does not need chunksize! Continuing anyway. md1: max total readahead window set to 124k md1: 1 data-disks, max readahead per data-disk: 124k raid1: device hde1 operational as mirror 0 raid1: device hdg1 operational as mirror 1 raid1: raid set md1 active with 2 out of 2 mirrors md: updating md1 RAID superblock on device md: hde1 [events: 00000005]<6>(write) hde1 s sb offset: 32000 md: hdg1 [events: 00000005]<6>(write) hdg1 s sb offset: 30592 [events: 00000004] md: bind<hdg2,1> [events: 00000004] md: bind<hde2,2> md: hde2 s event counter: 00000004 md: hdg2 s event counter: 00000004 md: RAID level 1 does not need chunksize! Continuing anyway. md2: max total readahead window set to 124k md2: 1 data-disks, max readahead per data-disk: 124k raid1: device hde2 operational as mirror 0 raid1: device hdg2 operational as mirror 1 raid1: raid set md2 active with 2 out of 2 mirrors md: updating md2 RAID superblock on device md: hde2 [events: 00000005]<6>(write) hde2 s sb offset: 44997952 39

md: hdg2 [events: 00000005]<6>(write) hdg2 s sb offset: 45004096 mdadm: /dev/md2 has been started with 2 drives. vgscan -- reading all physical volumes (this may take a while...) vgscan -- found inactive volume group "vg00" vgscan -- "/etc/lvmtab" and "/etc/lvmtab.d" successfully created vgscan -- WARNING: This program does not do a VGDA backup of your volume group vgchange -- volume group "vg00" successfully activated kjournald starting. Commit interval 5 seconds EXT3-fs: lvm(58,0): orphan cleanup on readonly fs EXT3-fs: lvm(58,0): 2 orphan inodes deleted EXT3-fs: recovery complete. EXT3-fs: mounted filesystem with ordered data mode. VFS: Mounted root (ext3 filesystem) readonly. change_root: old root has d_count=2 Freeing unused kernel memory: 280k freed INIT: version 2.84 booting Loading /etc/console/boottime.kmap.gz Activating swap. Adding Swap: 131064k swap-space (priority -1) Checking root file system... fsck 1.27 (8-Mar-2002) /dev/vg00/root: clean, 6752/65536 files, 40628/262144 blocks EXT3 FS 2.4-0.9.17, 10 Jan 2002 on lvm(58,0), internal journal System time was Sat Jan 4 21:18:33 UTC 2003. Setting the System Clock using the Hardware Clock as reference... System Clock set. System local time is now Sat Jan 4 21:18:35 UTC 2003. Calculating module dependencies... done. Loading modules: usb-uhci usb.c: registered new driver usbdevfs usb.c: registered new driver hub usb-uhci.c: $Revision: 1.275 $ time 10:29:43 Apr 14 2002 usb-uhci.c: High bandwidth mode enabled PCI: Found IRQ 12 for device 00:07.2 PCI: Sharing IRQ 12 with 00:07.3 usb-uhci.c: USB UHCI at I/O 0xc400, IRQ 12 usb-uhci.c: Detected 2 ports usb.c: new USB bus registered, assigned bus number 1 hub.c: USB hub found hub.c: 2 ports detected PCI: Found IRQ 12 for device 00:07.3 PCI: Sharing IRQ 12 with 00:07.2 usb-uhci.c: USB UHCI at I/O 0xc800, IRQ 12 usb-uhci.c: Detected 2 ports usb.c: new USB bus registered, assigned bus number 2 hub.c: USB hub found hub.c: 2 ports detected PCI: Found IRQ 12 for device 00:07.3 PCI: Sharing IRQ 12 with 00:07.2 usb-uhci.c: USB UHCI at I/O 0xc800, IRQ 12 usb-uhci.c: Detected 2 ports usb.c: new USB bus registered, assigned bus number 2 hub.c: USB hub found 40

hub.c: 2 ports detected usb-uhci.c: v1.275:usb Universal Host Controller Interface driver input usbkbd usb.c: registered new driver keyboard usbkbd.c: :USB HID Boot Protocol keyboard driver keybdev Setting up LVM Volume Groups... vgscan -- reading all physical volumes (this may take a while...) vgscan -- found active volume group "vg00" vgscan -- "/etc/lvmtab" and "/etc/lvmtab.d" successfully created vgscan -- WARNING: This program does not do a VGDA backup of your volume group vgchange -- volume group "vg00" already active Starting RAID devices: done. Checking all file systems... fsck 1.27 (8-Mar-2002) /dev/md1: clean, 28/5136 files, 3711/20544 blocks /dev/vg00/home: clean, 11/32768 files, 8268/131072 blocks /dev/vg00/opt: clean, 11/4096 files, 1564/16384 blocks /dev/vg00/tmp: clean, 11/32768 files, 8268/131072 blocks /dev/vg00/usr: clean, 13944/131072 files, 163854/524288 blocks /dev/vg00/var: clean, 1144/32768 files, 40846/131072 blocks Setting kernel variables. Loading the saved-state of the serial devices... /dev/ttys0 at 0x03f8 (irq = 4) is a 16550A /dev/ttys1 at 0x02f8 (irq = 3) is a 16550A Mounting local filesystems... /dev/md1 on /boot type ext2 (rw) kjournald starting. Commit interval 5 seconds EXT3 FS 2.4-0.9.17, 10 Jan 2002 on lvm(58,1), internal journal EXT3-fs: mounted filesystem with ordered data mode. /dev/vg00/home on /home type ext3 (rw) kjournald starting. Commit interval 5 seconds EXT3 FS 2.4-0.9.17, 10 Jan 2002 on lvm(58,2), internal journal EXT3-fs: mounted filesystem with ordered data mode. /dev/vg00/opt on /opt type ext3 (rw) kjournald starting. Commit interval 5 seconds EXT3 FS 2.4-0.9.17, 10 Jan 2002 on lvm(58,3), internal journal EXT3-fs: mounted filesystem with ordered data mode. /dev/vg00/tmp on /tmp type ext3 (rw) kjournald starting. Commit interval 5 seconds EXT3 FS 2.4-0.9.17, 10 Jan 2002 on lvm(58,4), internal journal EXT3-fs: mounted filesystem with ordered data mode. /dev/vg00/usr on /usr type ext3 (rw) kjournald starting. Commit interval 5 seconds EXT3 FS 2.4-0.9.17, 10 Jan 2002 on lvm(58,5), internal journal EXT3-fs: mounted filesystem with ordered data mode. /dev/vg00/var on /var type ext3 (rw) Running 0dns-down to make sure resolv.conf is ok...done. Cleaning: /etc/network/ifstate. Setting up IP spoofing protection: rp_filter. Configuring network interfaces: eth0: Setting half-duplex based on auto-negotiated partner a done. 41

Starting portmap daemon: portmap. Setting the System Clock using the Hardware Clock as reference... System Clock set. Local time: Sat Jan 4 22:18:42 CET 2003 Cleaning: /tmp /var/lock /var/run. Initializing random number generator... done. Recovering nvi editor sessions... done. INIT: Entering runlevel: 2 Starting system log daemon: syslogd. Starting kernel log daemon: klogd. Starting NFS common utilities: statd. Starting mouse interface server: gpm. Starting internet superserver: inetd. Starting printer spooler: lpd. Not starting NFS kernel daemon: No exports. Starting OpenBSD Secure Shell server: sshd. Starting RAID monitor daemon: mdadm -F. Starting deferred execution scheduler: atd. Starting periodic command scheduler: cron. Debian GNU/Linux 3.0 debian tty1 debian login: This is an example of kernel boot messages while using a cutom kernel with static RAID and LVM support: Linux version 2.4.18-lvm (root@debian) (gcc version 2.95.4 20011002 (Debian prerelease)) #1 BIOS-provided physical RAM map: BIOS-e820: 0000000000000000-00000000000a0000 (usable) BIOS-e820: 00000000000f0000-0000000000100000 (reserved) BIOS-e820: 0000000000100000-000000001fff0000 (usable) BIOS-e820: 000000001fff0000-000000001fff3000 (ACPI NVS) BIOS-e820: 000000001fff3000-0000000020000000 (ACPI data) BIOS-e820: 00000000ffff0000-0000000100000000 (reserved) On node 0 totalpages: 131056 zone(0): 4096 pages. zone(1): 126960 pages. zone(2): 0 pages. Local APIC disabled by BIOS -- reenabling. Found and enabled local APIC! Kernel command line: auto BOOT_IMAGE=Linux ro root=3a00 Initializing CPU#0 Detected 900.068 MHz processor. Console: colour VGA+ 80x25 Calibrating delay loop... 1795.68 BogoMIPS Memory: 511068k/524224k available (2051k kernel code, 12772k reserved, 652k data, 292k init, Dentry-cache hash table entries: 65536 (order: 7, 524288 bytes) Inode-cache hash table entries: 32768 (order: 6, 262144 bytes) Mount-cache hash table entries: 8192 (order: 4, 65536 bytes) Buffer-cache hash table entries: 32768 (order: 5, 131072 bytes) Page-cache hash table entries: 131072 (order: 7, 524288 bytes) 42

CPU: Before vendor init, caps: 0183fbff c1c7fbff 00000000, vendor = 2 CPU: L1 I Cache: 64K (64 bytes/line), D cache 64K (64 bytes/line) CPU: L2 Cache: 256K (64 bytes/line) CPU: After vendor init, caps: 0183fbff c1c7fbff 00000000 00000000 Intel machine check architecture supported. Intel machine check reporting enabled on CPU#0. CPU: After generic, caps: 0183fbff c1c7fbff 00000000 00000000 CPU: Common caps: 0183fbff c1c7fbff 00000000 00000000 Enabling fast FPU save and restore... done. Checking hlt instruction... OK. Checking for popad bug... OK. POSIX conformance testing by UNIFIX mtrr: v1.40 (20010327) Richard Gooch (rgooch@atnf.csiro.au) mtrr: detected mtrr type: Intel CPU: Before vendor init, caps: 0183fbff c1c7fbff 00000000, vendor = 2 CPU: L1 I Cache: 64K (64 bytes/line), D cache 64K (64 bytes/line) CPU: L2 Cache: 256K (64 bytes/line) CPU: After vendor init, caps: 0183fbff c1c7fbff 00000000 00000000 Intel machine check reporting enabled on CPU#0. CPU: After generic, caps: 0183fbff c1c7fbff 00000000 00000000 CPU: Common caps: 0183fbff c1c7fbff 00000000 00000000 CPU0: AMD Athlon(tm) Processor stepping 02 per-cpu timeslice cutoff: 730.67 usecs. SMP motherboard not detected. enabled ExtINT on CPU#0 ESR value before enabling vector: 00000000 ESR value after enabling vector: 00000000 Using local APIC timer interrupts. calibrating APIC timer...... CPU clock speed is 900.0494 MHz.... host bus clock speed is 200.0108 MHz. cpu: 0, clocks: 2000108, slice: 1000054 CPU0<T0:2000096,T1:1000032,D:10,S:1000054,C:2000108> Waiting on wait_init_idle (map = 0x0) All processors have done init_idle PCI: PCI BIOS revision 2.10 entry at 0xfb430, last bus=1 PCI: Using configuration type 1 PCI: Probing PCI hardware Unknown bridge resource 0: assuming transparent PCI: Using IRQ router VIA [1106/0686] at 00:07.0 PCI: Disabling Via external APIC routing Linux NET4.0 for Linux 2.4 Based upon Swansea University Computer Society NET3.039 Initializing RT netlink socket Starting kswapd VFS: Diskquotas version dquot_6.4.0 initialized Journalled Block Device driver loaded devfs: v1.10 (20020120) Richard Gooch (rgooch@atnf.csiro.au) devfs: boot_options: 0x1 vga16fb: initializing vga16fb: mapped to 0xc00a0000 Console: switching to colour frame buffer device 80x30 fb0: VGA16 VGA frame buffer device 43

Detected PS/2 Mouse Port. pty: 256 Unix98 ptys configured Serial driver version 5.05c (2001-07-08) with MANY_PORTS SHARE_IRQ SERIAL_PCI enabled ttys00 at 0x03f8 (irq = 4) is a 16550A ttys01 at 0x02f8 (irq = 3) is a 16550A Real Time Clock Driver v1.10e block: 128 slots per queue, batch=32 RAMDISK driver initialized: 16 RAM disks of 4096K size 1024 blocksize Uniform Multi-Platform E-IDE driver Revision: 6.31 ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx VP_IDE: IDE controller on PCI bus 00 dev 39 VP_IDE: chipset revision 16 VP_IDE: not 100% native mode: will probe irqs later ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx VP_IDE: VIA vt82c686a (rev 22) IDE UDMA66 controller on pci00:07.1 ide0: BM-DMA at 0xc000-0xc007, BIOS settings: hda:pio, hdb:pio ide1: BM-DMA at 0xc008-0xc00f, BIOS settings: hdc:pio, hdd:dma HPT370: IDE controller on PCI bus 00 dev 98 PCI: Found IRQ 11 for device 00:13.0 PCI: Sharing IRQ 11 with 00:09.0 HPT370: chipset revision 3 HPT370: not 100% native mode: will probe irqs later ide2: BM-DMA at 0xe000-0xe007, BIOS settings: hde:dma, hdf:pio ide3: BM-DMA at 0xe008-0xe00f, BIOS settings: hdg:dma, hdh:pio hdd: ASUS CD-S400, ATAPI CD/DVD-ROM drive hde: IBM-DTLA-307045, ATA DISK drive hdg: IBM-DTLA-307045, ATA DISK drive ide1 at 0x170-0x177,0x376 on irq 15 ide2 at 0xd000-0xd007,0xd402 on irq 11 ide3 at 0xd800-0xd807,0xdc02 on irq 11 hde: 90069840 sectors (46116 MB) w/1916kib Cache, CHS=89355/16/63, UDMA(44) hdg: 90069840 sectors (46116 MB) w/1916kib Cache, CHS=89355/16/63, UDMA(44) hdd: ATAPI 40X CD-ROM drive, 128kB Cache, UDMA(33) Uniform CD-ROM driver Revision: 3.12 ide-floppy driver 0.97.sv Partition check: /dev/ide/host2/bus0/target0/lun0: [PTBL] [5606/255/63] p1 p2 /dev/ide/host2/bus1/target0/lun0: p1 p2 Floppy drive(s): fd0 is 1.44M FDC 0 is a post-1991 82077 Loading I2O Core - (c) Copyright 1999 Red Hat Software I2O configuration manager v 0.04. (C) Copyright 1999 Red Hat Software loop: loaded (max 8 devices) Compaq CISS Driver (v 2.4.5) 8139cp 10/100 PCI Ethernet driver v0.0.6 (Nov 19, 2001) 8139cp: pci dev 00:09.0 (id 10ec:8139 rev 10) is not an 8139C+ compatible chip 8139cp: Try the "8139too" driver instead. 8139too Fast Ethernet driver 0.9.24 PCI: Found IRQ 11 for device 00:09.0 PCI: Sharing IRQ 11 with 00:13.0 eth0: RealTek RTL8139 Fast Ethernet at 0xe081e000, 00:d0:70:00:cd:e4, IRQ 11 eth0: Identified 8139 chip type RTL-8139B 44

HDLC support module revision 1.02 for Linux 2.4 Cronyx Ltd, Synchronous PPP and CISCO HDLC (c) 1994 Linux port (c) 1998 Building Number Three Ltd & Jan "Yenya" Kasprzak. ide-floppy driver 0.97.sv Promise Fasttrak(tm) Softwareraid driver 0.03beta: No raid array found Highpoint HPT370 Softwareraid driver for linux version 0.01 No raid array found SCSI subsystem driver Revision: 1.00 Red Hat/Adaptec aacraid driver, Jan 3 2003 DC390: 0 adapters found 3ware Storage Controller device driver for Linux v1.02.00.016. 3w-xxxx: No cards with valid units found. request_module[scsi_hostadapter]: Root fs not mounted request_module[scsi_hostadapter]: Root fs not mounted Linux Kernel Card Services 3.1.22 options: [pci] [cardbus] [pm] i2o_scsi.c: Version 0.0.1 chain_pool: 0 bytes @ c19fbc80 (512 byte buffers X 4 can_queue X 0 i2o controllers) md: raid1 personality registered as nr 3 md: raid5 personality registered as nr 4 raid5: measuring checksumming speed 8regs : 1214.400 MB/sec 32regs : 1139.200 MB/sec pii_mmx : 2105.200 MB/sec p5_mmx : 2691.600 MB/sec raid5: using function: p5_mmx (2691.600 MB/sec) md: md driver 0.90.0 MAX_MD_DEVS=256, MD_SB_DISKS=27 md: Autodetecting RAID arrays. [events: 00000004] [events: 00000004] [events: 00000004] [events: 00000004] md: autorun... md: considering ide/host2/bus1/target0/lun0/part2... md: adding ide/host2/bus1/target0/lun0/part2... md: adding ide/host2/bus0/target0/lun0/part2... md: created md2 md: bind<ide/host2/bus0/target0/lun0/part2,1> md: bind<ide/host2/bus1/target0/lun0/part2,2> md: running: <ide/host2/bus1/target0/lun0/part2><ide/host2/bus0/target0/lun0/part2> md: ide/host2/bus1/target0/lun0/part2 s event counter: 00000004 md: ide/host2/bus0/target0/lun0/part2 s event counter: 00000004 md: RAID level 1 does not need chunksize! Continuing anyway. md2: max total readahead window set to 124k md2: 1 data-disks, max readahead per data-disk: 124k raid1: device ide/host2/bus1/target0/lun0/part2 operational as mirror 1 raid1: device ide/host2/bus0/target0/lun0/part2 operational as mirror 0 raid1: raid set md2 active with 2 out of 2 mirrors md: updating md2 RAID superblock on device md: ide/host2/bus1/target0/lun0/part2 [events: 00000005]<6>(write) ide/host2/bus1/target0/lu md: ide/host2/bus0/target0/lun0/part2 [events: 00000005]<6>(write) ide/host2/bus0/target0/lu md: considering ide/host2/bus1/target0/lun0/part1... 45

md: adding ide/host2/bus1/target0/lun0/part1... md: adding ide/host2/bus0/target0/lun0/part1... md: created md1 md: bind<ide/host2/bus0/target0/lun0/part1,1> md: bind<ide/host2/bus1/target0/lun0/part1,2> md: running: <ide/host2/bus1/target0/lun0/part1><ide/host2/bus0/target0/lun0/part1> md: ide/host2/bus1/target0/lun0/part1 s event counter: 00000004 md: ide/host2/bus0/target0/lun0/part1 s event counter: 00000004 md: RAID level 1 does not need chunksize! Continuing anyway. md1: max total readahead window set to 124k md1: 1 data-disks, max readahead per data-disk: 124k raid1: device ide/host2/bus1/target0/lun0/part1 operational as mirror 1 raid1: device ide/host2/bus0/target0/lun0/part1 operational as mirror 0 raid1: raid set md1 active with 2 out of 2 mirrors md: updating md1 RAID superblock on device md: ide/host2/bus1/target0/lun0/part1 [events: 00000005]<6>(write) ide/host2/bus1/target0/lu md: ide/host2/bus0/target0/lun0/part1 [events: 00000005]<6>(write) ide/host2/bus0/target0/lu md:... autorun DONE. LVM version 1.0.1-rc4(ish)(03/10/2001) NET4: Linux TCP/IP 1.0 for NET4.0 IP Protocols: ICMP, UDP, TCP, IGMP IP: routing cache hash table of 4096 buckets, 32Kbytes TCP: Hash tables configured (established 32768 bind 32768) NET4: Unix domain sockets 1.0/SMP for Linux NET4.0. ds: no socket drivers loaded! RAMDISK: Compressed image found at block 0 Freeing initrd memory: 1031k freed VFS: Mounted root (ext2 filesystem). Mounted devfs on /dev vgscan -- reading all physical volumes (this may take a while...) vgscan -- found inactive volume group "vg00" vgscan -- "/etc/lvmtab" and "/etc/lvmtab.d" successfully created vgscan -- WARNING: This program does not do a VGDA backup of your volume group vgchange -- volume group "vg00" successfully activated kjournald starting. Commit interval 5 seconds EXT3-fs: mounted filesystem with ordered data mode. VFS: Mounted root (ext3 filesystem) readonly. change_root: old root has d_count=2 Mounted devfs on /dev Freeing unused kernel memory: 292k freed INIT: version 2.84 booting Creating extra device nodes...done. Started device management daemon v1.3.25 for /dev Loading /etc/console/boottime.kmap.gz Activating swap. Adding Swap: 131064k swap-space (priority -1) Checking root file system... fsck 1.27 (8-Mar-2002) /dev/vg00/root: clean, 6766/65536 files, 47853/262144 blocks EXT3 FS 2.4-0.9.17, 10 Jan 2002 on lvm(58,0), internal journal System time was Sat Jan 4 22:26:31 UTC 2003. 46

Setting the System Clock using the Hardware Clock as reference... System Clock set. System local time is now Sat Jan 4 22:26:33 UTC 2003. Calculating module dependencies... done. Loading modules: usb-uhci usb.c: registered new driver usbdevfs usb.c: registered new driver hub usb-uhci.c: $Revision: 1.275 $ time 17:12:36 Jan 3 2003 usb-uhci.c: High bandwidth mode enabled PCI: Found IRQ 12 for device 00:07.2 PCI: Sharing IRQ 12 with 00:07.3 usb-uhci.c: USB UHCI at I/O 0xc400, IRQ 12 usb-uhci.c: Detected 2 ports usb.c: new USB bus registered, assigned bus number 1 hub.c: USB hub found hub.c: 2 ports detected PCI: Found IRQ 12 for device 00:07.3 PCI: Sharing IRQ 12 with 00:07.2 usb-uhci.c: USB UHCI at I/O 0xc800, IRQ 12 usb-uhci.c: Detected 2 ports usb.c: new USB bus registered, assigned bus number 2 hub.c: USB hub found hub.c: 2 ports detected usb-uhci.c: v1.275:usb Universal Host Controller Interface driver input usbkbd usb.c: registered new driver keyboard usbkbd.c: :USB HID Boot Protocol keyboard driver keybdev Setting up LVM Volume Groups... vgscan -- reading all physical volumes (this may take a while...) modprobe: Can t locate module /dev/nb vgscan -- found active volume group "vg00" vgscan -- "/etc/lvmtab" and "/etc/lvmtab.d" successfully created vgscan -- WARNING: This program does not do a VGDA backup of your volume group modprobe: Can t locate module /dev/nb vgchange -- volume group "vg00" already active Starting RAID devices: done. Checking all file systems... fsck 1.27 (8-Mar-2002) /dev/md1: clean, 28/5136 files, 3799/20544 blocks /dev/vg00/home: clean, 11/32768 files, 8268/131072 blocks /dev/vg00/opt: clean, 11/4096 files, 1564/16384 blocks /dev/vg00/tmp: clean, 11/32768 files, 8268/131072 blocks /dev/vg00/usr: clean, 13967/131072 files, 163899/524288 blocks /dev/vg00/var: clean, 1153/32768 files, 26342/131072 blocks Setting kernel variables. Loading the saved-state of the serial devices... /dev/tts/0 at 0x03f8 (irq = 4) is a 16550A /dev/tts/1 at 0x02f8 (irq = 3) is a 16550A Mounting local filesystems... /dev/md1 on /boot type ext2 (rw) kjournald starting. Commit interval 5 seconds EXT3 FS 2.4-0.9.17, 10 Jan 2002 on lvm(58,1), internal journal EXT3-fs: mounted filesystem with ordered data mode. 47

/dev/vg00/home on /home type ext3 (rw) kjournald starting. Commit interval 5 seconds EXT3 FS 2.4-0.9.17, 10 Jan 2002 on lvm(58,2), internal journal EXT3-fs: mounted filesystem with ordered data mode. /dev/vg00/opt on /opt type ext3 (rw) kjournald starting. Commit interval 5 seconds EXT3 FS 2.4-0.9.17, 10 Jan 2002 on lvm(58,3), internal journal EXT3-fs: mounted filesystem with ordered data mode. /dev/vg00/tmp on /tmp type ext3 (rw) kjournald starting. Commit interval 5 seconds EXT3 FS 2.4-0.9.17, 10 Jan 2002 on lvm(58,4), internal journal EXT3-fs: mounted filesystem with ordered data mode. /dev/vg00/usr on /usr type ext3 (rw) kjournald starting. Commit interval 5 seconds EXT3 FS 2.4-0.9.17, 10 Jan 2002 on lvm(58,5), internal journal EXT3-fs: mounted filesystem with ordered data mode. /dev/vg00/var on /var type ext3 (rw) Running 0dns-down to make sure resolv.conf is ok...done. Cleaning: /etc/network/ifstate. Setting up IP spoofing protection: rp_filter. Configuring network interfaces: eth0: Setting half-duplex based on auto-negotiated partner a done. Starting portmap daemon: portmap. Setting the System Clock using the Hardware Clock as reference... System Clock set. Local time: Sat Jan 4 23:26:40 CET 2003 Cleaning: /tmp /var/lock /var/run. Initializing random number generator... done. Recovering nvi editor sessions... done. INIT: Entering runlevel: 2 Starting system log daemon: syslogd. Starting kernel log daemon: klogd. Starting NFS common utilities: statd. Starting mouse interface server: gpm. spurious 8259A interrupt: IRQ7. Starting internet superserver: inetd. Starting printer spooler: lpd. Not starting NFS kernel daemon: No exports. Starting OpenBSD Secure Shell server: sshd. starting RAID monitor daemon: mdadm -F. Starting deferred execution scheduler: atd. Starting periodic command scheduler: cron. Debian GNU/Linux 3.0 debian tty1 debian login: 48