[ale] Suse 9.3 and fiber storage

Andrew Wade andrewiwade at gmail.com
Sat Apr 16 11:27:53 EDT 2011


So in your migration, I'd zone the existing disks to your RHEL 5/6 server,
then use the rescan-scsi-bus.sh script to discover the disks through the HBA.
From there you can duplicate your setup and mount the disks as
filesystems.

But beware: these are probably not on a clustered filesystem (GFS, etc.), so
unmount each one from the SUSE side before mounting it on the RHEL side.  You
don't want both systems touching data at the same time on an ext3 or ext4
filesystem.
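
As a guard, something like this sketch (the device path is just an example)
can confirm a device isn't still listed as mounted before you touch it -- run
it on the SUSE box after unmounting there, and on the RHEL box before
mounting:

```shell
# Sketch, not authoritative: check whether a device appears in a mounts
# table (defaults to /proc/mounts; pass a saved copy for testing).
# Returns 0 if the device is mounted.
is_mounted() {
    dev="$1"
    mounts="${2:-/proc/mounts}"
    grep -q "^$dev " "$mounts"
}

# Example guard before mounting on the RHEL side (device path hypothetical):
# is_mounted /dev/mapper/mpath0 && echo "still mounted -- unmount first"
```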

On Sat, Apr 16, 2011 at 11:23 AM, Andrew Wade <andrewiwade at gmail.com> wrote:

> I work with Fibre Channel-connected SAN disks on RHEL 4 and 5.
>
> First, you need to enable multipathing:
>
> service multipathd start
> chkconfig multipathd on
>
> Next, you need to install sg3_utils (for RHEL 5).  Make sure you're on the
> right channel in Red Hat Satellite to get those packages.
>
> yum install sg3*
>
> Next, zone the LUNs to the WWNs of your HBAs on your desired servers.
>
> Then, to discover the LUNs that were zoned to your server, run
> /usr/bin/rescan-scsi-bus.sh
>
>
>
> If you have RHEL 4, I'd install the QLogic SANsurfer utilities and use their
> script to scan in the LUNs.   On RHEL 5, the native sg3_utils work great and
> you don't have to worry about patching your initrd, etc.
>
>
> Andrew Wade
> RHCE
> Run with no options, rescan-scsi-bus.sh scans the HBAs for new LUNs and maps
> them in.
>
> Before running the script, you should make a backup copy of
> /var/lib/multipath/bindings
>
> Then run the rescan-scsi-bus.sh script.
>
> Next, diff /var/lib/multipath/bindings against your backup copy,
> /var/lib/multipath/bindings.bk
>
> You'll see the new LUN that got added.
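>
> To illustrate the diff step, here's a self-contained sketch (the WWIDs are
> made-up placeholders, not real Hitachi IDs):

```shell
# Sketch of the bindings diff; the WWIDs below are made-up placeholders.
# On the real server you'd do:
#   cp /var/lib/multipath/bindings /var/lib/multipath/bindings.bk
#   rescan-scsi-bus.sh
cat > /tmp/bindings.bk <<'EOF'
mpath0 360060e8005000000000000000000001
EOF
cat > /tmp/bindings <<'EOF'
mpath0 360060e8005000000000000000000001
mpath1 360060e8005000000000000000000002
EOF
# diff exits non-zero when the files differ, so capture its output:
newluns=$(diff /tmp/bindings.bk /tmp/bindings) || true
echo "$newluns"
```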
>
> Running multipath -ll will confirm all the LUNs you've added.
>
> Now, for custom aliases, go back to your bindings file and delete the
> entry it made (mpath0) along with the WWID of the disk.
>
> Then edit your multipath.conf to use the WWID of the disk (which you see in
> multipath -ll ) and give it the alias you want, e.g. oracle_asm01.
>
> Then run multipath -v3 to reread your multipath.conf and rebuild the
> multipath configuration with your new disk alias (instead of the generic
> mpath0, mpath1, etc.)
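>
> For reference, the alias stanza in multipath.conf looks roughly like this
> (the WWID is a placeholder -- use the one from multipath -ll):

```
multipaths {
        multipath {
                wwid  360060e8005000000000000000000001
                alias oracle_asm01
        }
}
```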
>
> Next, look at /etc/multipath.conf (or /etc/multipath/multipath.conf -- I'd
> have to check).  That's where you define your aliases, if you have any.
>
>
> On Fri, Apr 15, 2011 at 7:50 PM, Greg Freemyer <greg.freemyer at gmail.com>wrote:
>
>> For figuring out what you have:
>>
>> You're getting too complex for what seems a simple job.
>>
>> FC volumes normally show up as SCSI devices, so /dev/sdb, etc. are likely
>> the drives.
>>
>> Just like a physical drive, a FC volume can be used in whole or
>> partitioned.
>>
>> To get the full unpartitioned volume size, look in /sys/block/sdb/...
>>  (df will also show the sizes of any mounted filesystems.)
>>
>> You should be able to get partition info from /proc/partitions
>>
>> You should see all of your mount points the traditional way, i.e. look
>> in /etc/fstab and/or run mount.
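>>
>> For example, a small sketch like this pulls the device, mount point and
>> filesystem type out of an fstab-style file (pass a saved copy to check it
>> off-box):

```shell
# Sketch: list device, mount point and fstype from an fstab-style file.
# Reads /etc/fstab by default; pass a saved copy as $1 to test elsewhere.
fstab_map() {
    awk '!/^#/ && NF >= 3 { printf "%s -> %s (%s)\n", $1, $2, $3 }' \
        "${1:-/etc/fstab}"
}
```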
>>
>> The key thing is FC drives fit into the normal scheme at the level you
>> are talking about.
>>
>> You will have a little more fun setting up the new environment and
>> mounting the volumes.
>>
>> Also, you can't tell how the RAID setup is done from the basic Linux
>> side.  (There may be management software that tells you, but that will
>> be a separate thing.  Likely the storage guys have that info and you
>> don't.  Trouble is, they need to know more detail than just /dev/sdb
>> nomenclature to know which volume you are talking about on their end.)
>>
>> Greg
>>
>>
>>
>>
>> On Fri, Apr 15, 2011 at 4:51 PM, Damon L. Chesser <damon at damtek.com>
>> wrote:
>> > I have a bunch of Suse 9.3 servers with various apps that need to be
>> > migrated to RHEL 5 or 6.
>> >
>> > I have back end SANs attached via QLogic hbas.
>> >
>> > How do I verify how the attached storage is mounted (i.e., that a given
>> > mount is remote via the HBA)?
>> >
>> > There is no /etc/multipath.conf
>> >
>> > What I am looking for is a way to get info and make a "map" that I
>> > can duplicate on the new OS.  The new storage will be entirely new
>> > partitions on completely different LUNs, but the "structure" might
>> > need to be the same, i.e. /somemount is 17G, /somemount2 is 15G, etc.
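>> >
>> > One way to build that map is to save "df -P" output on the old server
>> > and turn it into mountpoint/size pairs -- a sketch (the default path is
>> > just an example):

```shell
# Sketch: turn saved `df -P` output into a mountpoint -> size map you can
# re-create on the new OS.  Pass the saved file as $1 (default path is a
# hypothetical example).
size_map() {
    # Skip the header row; print the mount point and total size (1K blocks).
    awk 'NR > 1 { printf "%s %sK\n", $6, $2 }' "${1:-/tmp/df.out}"
}
# Typical use on the old server:
#   df -P > /tmp/df.out
#   size_map /tmp/df.out
```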
>> >
>> > /dev/disk/by-* has by-id and by-uuid and by-path.
>> >
>> > I know this is both rather simple and broad, but I have zero Fibre
>> > Channel/HBA experience, and it would appear I don't know the proper
>> > search terms to google.
>> >
>> > If it matters, the (old) back end is a Hitachi and I don't know the
>> > front end.  I will not be tasked with slicing up the LUNs, but with
>> > reporting what sizes I need them to be, then mounting the partitions
>> > with the proper mount points on the new OS.
>> > --
>> > Damon
>> > damon at damtek.com
>> >
>> > _______________________________________________
>> > Ale mailing list
>> > Ale at ale.org
>> > http://mail.ale.org/mailman/listinfo/ale
>> > See JOBS, ANNOUNCE and SCHOOLS lists at
>> > http://mail.ale.org/mailman/listinfo
>> >
>>
>>
>>
>> --
>> Greg Freemyer
>> Head of EDD Tape Extraction and Processing team
>> Litigation Triage Solutions Specialist
>> http://www.linkedin.com/in/gregfreemyer
>> CNN/TruTV Aired Forensic Imaging Demo -
>>
>> http://insession.blogs.cnn.com/2010/03/23/how-computer-evidence-gets-retrieved/
>>
>> The Norcross Group
>> The Intersection of Evidence & Technology
>> http://www.norcrossgroup.com
>>
>>
>
>