If you want to set up SoftwareRaid on a Linux machine you should first read the Software RAID HOWTO, which is an excellent introduction to RAID in general. Or, if you want to spend a minute instead of an hour reading, read RaidNotes.
Although I set this machine up with Debian, I am fairly sure that most of the steps listed below are distribution independent. We will start from bare hardware that doesn't do anything useful and progress until we have a machine that boots into a fully working Debian install off the RAID array.
The machine that I used for this configuration has an Intel S845WD1-E motherboard with two on-board IDE channels plus an extra two channels controlled by a Promise PDC20267 RAID chipset (commonly known as a Promise FastTrak100). Unfortunately Promise has not released open source drivers for the RAID portion of this chipset, so it is only usable as a plain IDE controller under Linux. The rest of the machine is configured as follows: there is one 40GB Seagate IDE disk on each of the FastTrak's channels. We want to use these two disks to create a RAID-1 array for redundant storage.
Assuming that the physical installation has been completed correctly (a single disk on each IDE channel, 80-conductor IDE cables, etc.), the next problem we face is that the Debian installer (like most distributions' installers) cannot install directly onto a RAID array, as the standard kernels do not include software RAID support. To get around this I used the first method described in the Software RAID HOWTO: placing a third disk on the first on-board IDE channel and installing a basic Debian system onto that.
From that temporary install, build a kernel with at least the following options enabled:

** CONFIG_MD
** CONFIG_BLK_DEV_MD
** CONFIG_MD_RAID1
** CONFIG_BLK_DEV_LVM (probably not needed)
** General IDE support
** CONFIG_BLK_DEV_OFFBOARD
** CONFIG_BLK_DEV_PDC202XX_OLD
** CONFIG_PDC202XX_BURST
** CONFIG_PDC202XX_FORCE
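On a 2.4-series kernel these correspond roughly to the following .config lines (a sketch; exact option names vary between kernel versions). Build the RAID and controller support into the kernel (=y, not =m), since the root filesystem will live on the array and there is no initrd at this stage:

```
# Multi-device (software RAID) support
CONFIG_MD=y
CONFIG_BLK_DEV_MD=y
CONFIG_MD_RAID1=y
# CONFIG_BLK_DEV_LVM is not set

# IDE support, including the Promise PDC20267
CONFIG_BLK_DEV_OFFBOARD=y
CONFIG_BLK_DEV_PDC202XX_OLD=y
CONFIG_PDC202XX_BURST=y
CONFIG_PDC202XX_FORCE=y
```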
After partitioning the disks, the partition table should look like this:
mgmt:# fdisk -l /dev/hde
Disk /dev/hde: 255 heads, 63 sectors, 4865 cylinders
Units = cylinders of 16065 * 512 bytes
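The partition listing itself has been lost from this page; an illustrative layout (sizes and partition numbers are examples only, the important part is the type fd on everything except swap):

```
   Device Boot    Start       End    Blocks   Id  System
/dev/hde1   *         1        62    497983+  fd  Linux raid autodetect
/dev/hde2            63       124    498015   82  Linux swap
/dev/hde3           125      1340   9767520   fd  Linux raid autodetect
...
```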
Notice that the type of all the partitions (except the swap partition) is set to Linux raid autodetect (0xFD) - this is important.
I am creating 5 separate RAID-1 partitions for /, /usr, /var, /tmp, and /home on my machine. Each of these RAID partitions is mirrored across both disks. To create these partitions execute the following commands.
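The creation commands themselves are missing from this page; a sketch using mdadm (the original setup may have used the older raidtools instead, and the partition numbers here are illustrative — the disks are hde and hdg, one per FastTrak channel, as described above):

```shell
# Create five RAID-1 mirrors, one per mount point, from matching
# partitions on the two FastTrak disks:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hde1 /dev/hdg1   # /
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hde3 /dev/hdg3   # /usr
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/hde5 /dev/hdg5   # /var
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/hde6 /dev/hdg6   # /tmp
mdadm --create /dev/md4 --level=1 --raid-devices=2 /dev/hde7 /dev/hdg7   # /home
```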
Notice that we specify the SoftwareRaid device that the RAID partition should be located at, the RAID level for the partition, the number of disks in the array and the raw disk partitions to include in the array.
If you cat /proc/mdstat now you will see the 5 RAID partitions listed. You will also notice that they are currently being initialised: each one has a progress bar and an ETA for when construction will be finished. This process happens transparently, and you can use a RAID partition while it is being constructed.
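The output looks along these lines (illustrative, not captured from the original machine):

```
mgmt:# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hdg1[1] hde1[0]
      497920 blocks [2/2] [UU]
      [==>..................]  resync = 12.1% (60928/497920) finish=3.2min speed=2240K/sec
...
unused devices: <none>
```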
While you could start transferring data onto the new RAID partitions as soon as they are formatted, I like to reboot first to ensure that they are all detected correctly before any data goes onto them, in case I've done something wrong. If you are impatient you can skip this step and move straight on to copying the data onto the partitions.
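A sketch of formatting the arrays and copying the temporary install across, assuming ext3 and the md-to-mountpoint mapping above (filesystem choice and mount points are illustrative):

```shell
# Format each array (ext3 as an example; any filesystem will do):
for md in md0 md1 md2 md3 md4; do mkfs.ext3 /dev/$md; done

# Mount the new tree in the right shape:
mount /dev/md0 /mnt
mkdir -p /mnt/usr /mnt/var /mnt/tmp /mnt/home
mount /dev/md1 /mnt/usr
mount /dev/md2 /mnt/var
mount /dev/md3 /mnt/tmp
mount /dev/md4 /mnt/home

# Copy the running system across. The -x flag keeps cp on the source
# filesystem, so /proc and the mounted /mnt tree itself are skipped:
cp -ax / /mnt
```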
You might notice that we haven't defined the configuration of our RAID arrays in any files yet, we simply issued the 5 commands above and yet when we reboot they are magically there! Information about each RAID array is stored in the superblock of the disk which allows the kernel to automatically locate and assemble the portions of the array as it boots. This allows us to do some really cool stuff as you'll soon see.
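You can inspect that per-partition superblock directly; the kernel scans partitions of type 0xFD at boot and reassembles arrays from exactly this information:

```shell
# Print the persistent RAID superblock stored on a member partition,
# including the array UUID, RAID level, and member roles:
mdadm --examine /dev/hde1
```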
Make sure you update /etc/fstab to mount the various portions of the filesystem correctly!
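With the md-to-mountpoint mapping used above, the entries would look something like this (device names, filesystem type, and partition numbers are from this example — adjust to your layout):

```
/dev/md0   /      ext3   defaults,errors=remount-ro   0  1
/dev/md1   /usr   ext3   defaults                     0  2
/dev/md2   /var   ext3   defaults                     0  2
/dev/md3   /tmp   ext3   defaults                     0  2
/dev/md4   /home  ext3   defaults                     0  2
/dev/hde2  none   swap   sw                           0  0
/dev/hdg2  none   swap   sw                           0  0
```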
This tells lilo to install the boot record in the MBR of /dev/hde (the first disk in the RAID array) and that the root filesystem will be located on /dev/md0 (the first RAID array).
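A minimal lilo.conf expressing this might look like the following (the kernel image path and label are assumptions, not from the original setup):

```
boot=/dev/hde
root=/dev/md0
image=/vmlinuz
    label=Linux
    read-only
```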
While the machine is off for the reboot, remove the extra drive that you used for the initial install and make sure that your BIOS is correctly configured to boot off the Promise controller.
On the Intel motherboard that I was using this required setting the BIOS boot order to Hard Disk First and then setting the hard disk type to FT Something... (your motherboard is most probably different). We also had to define a single array of type Span on the first disk only.
When you get back into Linux you will note that the disk you booted off has now become hda, and your arrays are still working! This is because the array information is stored in the superblock of the physical drive, so regardless of where it sits logically in your system, Linux can find the array information and set up your arrays.
Run lilo to install the new MBR, reboot, make sure your swap partitions are enabled in their new locations, and you're away. You can now use this system as you would any other!
You can also do root-on-RAID1 using a recent stock Debian kernel without compiling your own custom kernel. To do so, use mkinitrd to create a custom initrd image containing the necessary RAID1 modules.
This step should happen during the "Make the system bootable" section above, before running "chroot /mnt lilo -v". Change the root-device line in mkinitrd's configuration to read ROOT=/dev/md0, then continue with the rest of the instructions.
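A sketch of what that looks like on a Debian system of that era, assuming the initrd-tools mkinitrd and its /etc/mkinitrd configuration directory (paths and the kernel version are assumptions; check your system):

```shell
# Make sure the initrd will load the RAID1 driver at boot:
echo raid1 >> /etc/mkinitrd/modules

# Point mkinitrd at the real root device instead of letting it probe,
# i.e. make the ROOT line in /etc/mkinitrd/mkinitrd.conf read ROOT=/dev/md0:
sed -i 's|^ROOT=.*|ROOT=/dev/md0|' /etc/mkinitrd/mkinitrd.conf

# Build the image for your kernel version (2.6.8 is just an example),
# then reference it from lilo.conf with an initrd= line and re-run lilo:
mkinitrd -o /boot/initrd.img-2.6.8 2.6.8
```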
The most recent versions (eg LinuxKernel2.6) of the Debian kernel-image packages build a new initrd image upon installation. They should automatically notice if the root device is /dev/md* and arrange for the appropriate modules to be present in the initrd image and loaded at boot. So if the software RAID array is actually your root filesystem when you install the kernel package, everything should just work.
And then restarted udev. This should apply for Debian Sarge too.