[ale] Disk IO Question
scott
scott at sboss.net
Thu Oct 29 18:22:02 EDT 2009
This is a quick and dirty /bin/sh hacked-up script.
#!/bin/sh
#
# you must hack this up to work for you.
#
# data drive = /data (source of the backups)
# backup drive = /backups (where the backup copy goes)
#
# there are only five folders in /data and no files directly in that folder.
# the folder names are "data", "apps", "contacts", "images", and "twitter_backups".
#
# make sure we have destination folders....
#
if [ ! -d /backups/data ]; then
    mkdir -p /backups/data
fi
if [ ! -d /backups/apps ]; then
    mkdir -p /backups/apps
fi
if [ ! -d /backups/contacts ]; then
    mkdir -p /backups/contacts
fi
if [ ! -d /backups/images ]; then
    mkdir -p /backups/images
fi
if [ ! -d /backups/twitter_backups ]; then
    mkdir -p /backups/twitter_backups
fi
#
# now to do the copies. I am using cp in the example; you can use
# cp, tar, cpio, rsync, etc.
# screen -d -m makes a temporary "daemon" out of the process. Once
# the process finishes, the screen session exits for that particular copy.
#
screen -d -m cp -r /data/data/* /backups/data/
screen -d -m cp -r /data/apps/* /backups/apps/
screen -d -m cp -r /data/contacts/* /backups/contacts/
screen -d -m cp -r /data/images/* /backups/images/
screen -d -m cp -r /data/twitter_backups/* /backups/twitter_backups/
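
If you want it a bit more compact, the whole body of the script can be a
single loop over the folder names (an untested sketch of the same approach):

for d in data apps contacts images twitter_backups; do
    # create the destination folder if it is missing, then launch a
    # detached copy for that folder, exactly like the explicit lines above
    mkdir -p "/backups/$d"
    screen -d -m cp -r "/data/$d/"* "/backups/$d/"
done
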
Personally I would write it in Perl (I love Perl) and put in a lot more
error checking and setup checking (like making sure the source and
destination drives are mounted, etc.), but this gives you an idea.
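For example, a minimal mount check in plain sh might look like this (just a
sketch; it greps /proc/mounts for the two mount points, adjust the paths to
your setup):

for mnt in /data /backups; do
    # abort early if either drive is not actually mounted
    if ! grep -qs " $mnt " /proc/mounts; then
        echo "$mnt is not mounted, aborting" >&2
        exit 1
    fi
done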
On Oct 29, 2009, at 6:01 PM, mmillard1 at comcast.net wrote:
> How do I copy the files multithreaded? I've honestly never tried it.
>
> ----- Original Message -----
> From: "scott" <scott at sboss.net>
> To: "Atlanta Linux Enthusiasts - Yes! We run Linux!" <ale at ale.org>
> Sent: Thursday, October 29, 2009 5:47:47 PM GMT -05:00 US/Canada
> Eastern
> Subject: Re: [ale] Disk IO Question
>
>
> There are a few bottlenecks (or potential ones):
> * PCI bus speed (the card AND the mobo bus)
> * rotational speeds/transfer rates/seek times on the drives
> * the enclosures might have a max throughput that is a bottleneck.
> * cpu usage (getting CPU bound is easy to do).
> * memory usage (running out of physical RAM and swapping is a killer,
> in a bad way, in this situation).
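>
> A quick way to see which of these is actually the limit is to watch the
> system while a copy is running. A rough sketch (iostat comes from the
> sysstat package):
>
> vmstat 5       # watch si/so (swapping) and the id/wa CPU columns
> iostat -x 5    # per-disk utilization: watch %util and await
> free -m        # physical RAM vs. swap actually in use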
>
> Also, are you copying one (or very few) large files, or lots of
> medium-sized ones? If you are copying lots of files, you can run
> the copy process multithreaded to use more of the system's bandwidth.
>
> Hard to give you the smoking gun.
>
> Sorry
>
> On Oct 29, 2009, at 5:40 PM, Greg Clifton wrote:
>
> What about bus contention? I'm not familiar with the IBM model you
> mentioned, but if you are running plain PCI cards (32-bit, 33 MHz, so
> roughly 133 MB/s theoretical for the whole bus), then you do have a bit
> of a bottleneck there; plus, if there are other expansion cards in the
> system, they can slow things down.
> GC
>
>
> On Thu, Oct 29, 2009 at 5:10 PM, James Taylor <James.Taylor at eastcobbgroup.com> wrote:
> Make that 15k RPM *SAS* drives...
>
> >>> "James Taylor" <James.Taylor at eastcobbgroup.com> 10/29/2009
> 05:04 PM >>>
> I would suggest that your datapaths are not going to be the
> bottleneck with SATA drives.
> I had a client running SATA for several years on an iSCSI
> appliance, and one day we hit a brick wall with the mail
> performance. We spent weeks trying to tune the I/O paths to deal
> with it, and when we looked at the actual SATA drive transfer
> capacity, we realized the drives were the problem.
> We replaced the drives with 15k RPM SATA drives, and life has been
> good ever since.
> Check your drive specs and see if that's where the limitation really
> is.
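>
> One quick sanity check on raw drive throughput (a rough sketch; hdparm
> needs root, and the device name and test file path are just examples):
>
> hdparm -t /dev/sda     # buffered sequential read speed of one drive
> dd if=/dev/zero of=/data/ddtest bs=1M count=1024 oflag=direct   # crude write test
> rm /data/ddtest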
> -jt
>
>
> James Taylor
> The East Cobb Group, Inc.
> 678-697-9420
> james.taylor at eastcobbgroup.com
> http://www.eastcobbgroup.com
>
>
>
>
> >>> <mmillard1 at comcast.net> 10/29/2009 04:49 PM >>>
>
>
> I have a SUSE OES server running on an IBM 346 with 2 GB of RAM.
> I've installed two Addonics Multilane SATA cards connected with
> Multilane cables to external SATA enclosures housing 4 WD Caviar
> Green 1.5 TB drives.
>
>
>
> I have about 1.8 TB of data on one enclosure, which changes daily.
> My plan was to copy the 1.8 TB to the second enclosure daily and
> send the drives in that unit off site.
>
>
>
> My expectation was that having 4 independent SATA paths per unit
> would give me substantial performance when moving data between these
> units.
>
>
>
> I'm seeing a constant speed of about 60 GB per hour (roughly 17 MB/s).
> This is much slower than I expected. Have any of you done anything like
> this? Is this the kind of performance I should expect? Hopefully some of
> you bright people can share some wisdom with me.
>
>
>
> _______________________________________________
> Ale mailing list
> Ale at ale.org
> http://mail.ale.org/mailman/listinfo/ale
> See JOBS, ANNOUNCE and SCHOOLS lists at
> http://mail.ale.org/mailman/listinfo