[ale] Converting to RAID5 root
Chris Woodfield
rekoil at semihuman.com
Sun Mar 5 11:17:50 EST 2006
Yeah, I realized I wrote this in a bit of haste. Let me explain the
setup more clearly:
I initially installed my linux distro on a regular ATA drive,
standard debian 3.0 defaults (root at /dev/hda1, /home on /dev/hda5).
I then added three SATA drives to the system, with the intent of
migrating the system over to the raid array. GRUB sees these as (hd2)
through (hd4). The libata driver names then /dev/sda - /dev/sdc.
I set up the following arrays on the three drives:
md0 - sda1, sdb2 - RAID 1 array to load kernel, will mount as /boot.
md1 - sda5, sdb5, sdc5 - RAID5 array to mount as /
md2 - sda6, sdb6, sdc6 - RAID5 array hosting LVM partitions
I solved the kernel panic issue already - I had incorrectly assumed
that the initrd image generated by make-kpkg would load the proper
SATA modules and autodetect my SATA drives. It didn't. So I commented
the initrd out of menu.lst, compiled libata and RAID support into the
kernel directly, and rebooted.
The system now autodetects the arrays, and mounts /dev/md1 as the
root partition, but then bombs with this error immediately after
mounting /dev/md1 :
"Warning: could not open inital console"
And I'm clueless, again.
Any ideas on that one?
-C
On Mar 5, 2006, at 9:26 AM, H. A. Story wrote:
> This defies my logic of RAIDS. Is this a software RAID? A RAID 1
> requires at least 2 drives. A RAID 5 requires at least 3 drives. You
> don't appear to have enough. You would need a total of 5 drives for
> this configuration. Unless you were doing sometime of magic with
> software RAID. And that some how doesn't sound like a good idea.
>
> However if I am wrong I was reading something the other day about
> initrd. This would have to be loaded before attempting to mount the
> root. I THINK. It needs to load the drivers for the RAID before the
> kernel can boot.
>
> Chris Woodfield wrote:
>
>> OK, here goes...
>>
>> I'm converting a standard one-drive debian sid installation to a 3-
>> drive RAID. I current have both the original drive and the three SATA
>> drives installed in the system. I've maid the raid partitions with no
>> issues. I have three RAID partitions:
>>
>> md0 boot partition, RAID 1
>> md1 root, RAID 5
>> md2 LVM volume, RAID 5
>>
>> What I'm trying to do at the moment is make sure I can boot off of
>> the RAID set with md1 as the root partition. Here's the relevant part
>> of menu.lst on the primary drive:
>>
>> title Debian GNU/Linux, kernel 2.6.15.4 RAID
>> root (hd2,0)
>> kernel (hd2,0)/vmlinuz-2.6.15.4 root=/dev/md1 ro
>> initrd (hd2,0)/initrd.img-2.6.15.4
>> savedefault
>> boot
>>
>> title Debian GNU/Linux, kernel 2.6.15.4 RAID (recovery
>> mode)
>> root (hd2,0)
>> kernel (hd2,0)/vmlinuz-2.6.15.4 root=/dev/md1 ro single
>> initrd (hd2,0)/initrd.img-2.6.15.4
>> savedefault
>> boot
>>
>> I've already copied all the filesystems over to the relevant RAID
>> partitions. RAID is compiled into the kernel.
>>
>> By all appearances, GRUB is able to boot the kernel that lives on the
>> md0 volume, but I get a kernel panic at the point where the system
>> attempts to mount /dev/md1 as the root volume. The error reads:
>>
>> VFS: Cannot open root device "md1" or unknown-block(0,0)
>> Please append a correct "root=" option
>> Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-
>> block(0,0)
>>
>> Is there something that needs to be set up prior to this mount
>> operation (a boot arg, for example) such that the kernel knows how to
>> assemble /dev/md1? Is this something that should be in the initrd
>> that make-kpkg creates? Any other ideas?
>>
>> Thanks,
>>
>> -Chris
>>
>> _______________________________________________
>> Ale mailing list
>> Ale at ale.org
>> http://www.ale.org/mailman/listinfo/ale
>>
>>
>>
>
> _______________________________________________
> Ale mailing list
> Ale at ale.org
> http://www.ale.org/mailman/listinfo/ale
>
More information about the Ale
mailing list