[ale] Converting to RAID5 root
Chris Woodfield
rekoil at semihuman.com
Sun Mar 5 18:10:18 EST 2006
LVM does a couple of things - you are correct in that it allows you
to span drives, but it also allows you to create semi-dynamic
partitions that can be resized as needed.
It's not possible to partition a RAID device - you have to partition
the component drives and build your RAID arrays from those. LVM is
the workaround, allowing you to create logical partitions on top of a
RAID.
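
For example, layering LVM on top of the RAID5 md device looks roughly
like this (a minimal sketch; the volume group name, logical volume name,
sizes, and filesystem are just placeholders):

pvcreate /dev/md2                # mark the md device as an LVM physical volume
vgcreate vg0 /dev/md2            # build a volume group on it
lvcreate -L 10G -n home vg0      # carve out a logical volume
mke2fs -j /dev/vg0/home          # put an ext3 filesystem on it
lvextend -L +5G /dev/vg0/home    # later, grow it (then resize the filesystem)

That resize step is the sort of thing plain partitions don't give you.
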
See the Software RAID howto:
http://www.tldp.org/HOWTO/Software-RAID-HOWTO-11.html
-C
On Mar 5, 2006, at 12:58 PM, Howard A Story wrote:
> Okay, that is a software RAID. I get a headache just looking at it.
> From the error that you are getting, I start to wonder if it is more a
> system-setup problem than a RAID-setup problem. If your md1 is /, what
> is md2? Make sure you don't have /tmp or /etc there, you know, a
> directory that the system needs during boot.
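>
> In other words, double-check /etc/fstab on the new root: everything
> the system needs during boot should live on / (md1) or /boot (md0),
> with only things like /home on the LVM volumes. Roughly (the filesystem
> type and LVM volume name here are just examples):
>
> /dev/md1        /      ext3  defaults,errors=remount-ro  0  1
> /dev/md0        /boot  ext3  defaults                    0  2
> /dev/vg0/home   /home  ext3  defaults                    0  2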
>
> LVM is a logical volume manager?? It allows you to span drives so they
> are seen as one, if I am not mistaken. Isn't RAID 5 about the same
> thing, but with fault tolerance? Can you really do them both on the
> same partitions?
>
> LOL I just saw a good commercial... VW Time to unpimp the ride.
>
>
> Chris Woodfield wrote:
>
>> Yeah, I realized I wrote this in a bit of haste. Let me explain the
>> setup more clearly:
>>
>> I initially installed my linux distro on a regular ATA drive,
>> standard debian 3.0 defaults (root at /dev/hda1, /home on /dev/hda5).
>>
>> I then added three SATA drives to the system, with the intent of
>> migrating the system over to the RAID array. GRUB sees these as (hd2)
>> through (hd4); the libata driver names them /dev/sda through /dev/sdc.
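>>
>> For what it's worth, the SATA entries in /boot/grub/device.map that
>> correspond to that naming would look something like this (the exact
>> numbers depend on how GRUB probed the other drives):
>>
>> (hd2)   /dev/sda
>> (hd3)   /dev/sdb
>> (hd4)   /dev/sdc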
>>
>> I set up the following arrays on the three drives:
>>
>> md0 - sda1, sdb2 - RAID 1 array to load kernel, will mount as /boot.
>> md1 - sda5, sdb5, sdc5 - RAID5 array to mount as /
>> md2 - sda6, sdb6, sdc6 - RAID5 array hosting LVM partitions
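>>
>> For anyone following along, arrays like these are created with mdadm
>> along these lines (the member partitions are copied from the layout
>> above):
>>
>> mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb2
>> mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sda5 /dev/sdb5 /dev/sdc5
>> mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/sda6 /dev/sdb6 /dev/sdc6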
>>
>> I solved the kernel panic issue already - I had incorrectly assumed
>> that the initrd image generated by make-kpkg would load the proper
>> SATA modules and autodetect my SATA drives. It didn't. So I commented
>> the initrd out of menu.lst, compiled libata and RAID support into the
>> kernel directly, and rebooted.
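>>
>> For reference, the kernel options involved (option names as of 2.6.15,
>> set to y rather than m; SATA_XXX stands for whichever driver matches
>> the controller) are roughly:
>>
>> CONFIG_SCSI_SATA=y        # libata core
>> CONFIG_SCSI_SATA_XXX=y    # controller driver
>> CONFIG_BLK_DEV_MD=y       # md core
>> CONFIG_MD_RAID1=y
>> CONFIG_MD_RAID5=y
>>
>> Built-in autodetection also expects the member partitions to be type
>> 0xfd (Linux raid autodetect).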
>>
>> The system now autodetects the arrays and mounts /dev/md1 as the
>> root partition, but then bombs with this error immediately after
>> mounting /dev/md1:
>>
>> "Warning: could not open initial console"
>>
>> And I'm clueless, again.
>>
>> Any ideas on that one?
>>
>> -C
>>
>> On Mar 5, 2006, at 9:26 AM, H. A. Story wrote:
>>
>>
>>
>>> This defies my logic of RAIDs. Is this a software RAID? A RAID 1
>>> requires at least 2 drives. A RAID 5 requires at least 3 drives. You
>>> don't appear to have enough; you would need a total of 5 drives for
>>> this configuration, unless you were doing some kind of magic with
>>> software RAID. And that somehow doesn't sound like a good idea.
>>>
>>> However, if I am wrong: I was reading something the other day about
>>> initrd. It would have to be loaded before the kernel attempts to
>>> mount the root, I think. It needs to load the drivers for the RAID
>>> before the root filesystem can be mounted.
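>>>
>>> As a rough sketch, whatever assembles the root array has to be in
>>> place before the root mount. With an initrd that means loading
>>> modules in roughly this order (sata_sil is just an example controller
>>> driver); compiling the same drivers straight into the kernel works
>>> too:
>>>
>>> modprobe libata      # SATA core
>>> modprobe sata_sil    # controller driver
>>> modprobe md_mod      # md core
>>> modprobe raid1       # personality for the RAID 1 array
>>> modprobe raid5       # personality for the RAID 5 arrays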
>>>
>>> Chris Woodfield wrote:
>>>
>>>
>>>
>>>> OK, here goes...
>>>>
>>>> I'm converting a standard one-drive Debian sid installation to a
>>>> 3-drive RAID. I currently have both the original drive and the three
>>>> SATA drives installed in the system. I've made the RAID partitions
>>>> with no issues. I have three RAID partitions:
>>>>
>>>> md0 boot partition, RAID 1
>>>> md1 root, RAID 5
>>>> md2 LVM volume, RAID 5
>>>>
>>>> What I'm trying to do at the moment is make sure I can boot off of
>>>> the RAID set with md1 as the root partition. Here's the relevant part
>>>> of menu.lst on the primary drive:
>>>>
>>>> title Debian GNU/Linux, kernel 2.6.15.4 RAID
>>>> root (hd2,0)
>>>> kernel (hd2,0)/vmlinuz-2.6.15.4 root=/dev/md1 ro
>>>> initrd (hd2,0)/initrd.img-2.6.15.4
>>>> savedefault
>>>> boot
>>>>
>>>> title Debian GNU/Linux, kernel 2.6.15.4 RAID (recovery mode)
>>>> root (hd2,0)
>>>> kernel (hd2,0)/vmlinuz-2.6.15.4 root=/dev/md1 ro single
>>>> initrd (hd2,0)/initrd.img-2.6.15.4
>>>> savedefault
>>>> boot
>>>>
>>>> I've already copied all the filesystems over to the relevant RAID
>>>> partitions. RAID is compiled into the kernel.
>>>>
>>>> By all appearances, GRUB is able to boot the kernel that lives on
>>>> the md0 volume, but I get a kernel panic at the point where the
>>>> system attempts to mount /dev/md1 as the root volume. The error
>>>> reads:
>>>>
>>>> VFS: Cannot open root device "md1" or unknown-block(0,0)
>>>> Please append a correct "root=" option
>>>> Kernel panic - not syncing: VFS: Unable to mount root fs on
>>>> unknown-block(0,0)
>>>>
>>>> Is there something that needs to be set up prior to this mount
>>>> operation (a boot arg, for example) such that the kernel knows how
>>>> to assemble /dev/md1? Is this something that should be in the initrd
>>>> that make-kpkg creates? Any other ideas?
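>>>>
>>>> For example, would something like the kernel's md= parameter, listing
>>>> whichever partitions actually make up md1, be the way to do it?
>>>>
>>>> kernel (hd2,0)/vmlinuz-2.6.15.4 root=/dev/md1 md=1,/dev/sda5,/dev/sdb5,/dev/sdc5 ro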
>>>>
>>>> Thanks,
>>>>
>>>> -Chris
>>>>
>>
>