[ale] Disk IO Question

scott scott at sboss.net
Thu Oct 29 17:47:47 EDT 2009


There are a few bottlenecks (or potential ones):
* PCI bus speed (both the card and the motherboard bus)
* transfer rates and seek times of the drives
* the enclosures may have a maximum throughput that becomes a bottleneck
* CPU usage (it is easy to become CPU-bound)
* memory usage (running out of physical RAM and swapping is a killer
in this situation)

Also, are you copying one (or very few) large files, or lots of medium-sized
ones?  If you are copying lots of files, you can run the copy multithreaded
to use more of the system's bandwidth (see the sketch below).
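
For what it's worth, a rough sketch of the kind of multithreaded copy I
mean, in Python.  The /data/src and /data/dst paths and the worker count
are placeholders, not anything specific to your setup:

    # parallel_copy.py -- copy a directory tree with several worker threads
    # so more than one file is in flight at a time (rough sketch)
    import os
    import shutil
    from concurrent.futures import ThreadPoolExecutor

    SRC = "/data/src"   # placeholder: source enclosure mount point
    DST = "/data/dst"   # placeholder: destination enclosure mount point

    def copy_one(rel_path):
        src = os.path.join(SRC, rel_path)
        dst = os.path.join(DST, rel_path)
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        shutil.copy2(src, dst)          # copies file data and timestamps

    def walk_files(root):
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                yield os.path.relpath(os.path.join(dirpath, name), root)

    if __name__ == "__main__":
        # four workers is a guess that matches four drive paths; tune to taste
        with ThreadPoolExecutor(max_workers=4) as pool:
            list(pool.map(copy_one, walk_files(SRC)))

Keeping several files in flight helps most when the data is lots of small
and medium files; for one huge file it won't buy you anything.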

Hard to give you a single smoking gun, though.

Sorry

On Oct 29, 2009, at 5:40 PM, Greg Clifton wrote:

> What about bus contention?  I'm not familiar with the IBM model you
> mentioned, but if you are running PCI cards (32-bit, 33 MHz) then you
> do have a bit of a bottleneck there, plus if there are other
> expansion cards in the system they can slow things down.
> GC
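
To put rough numbers on the bus-contention point: plain 32-bit/33 MHz PCI
tops out around 132 MB/s, and that ceiling is shared by every card on the
bus.  A back-of-the-envelope comparison against the ~60 GB/hour from the
original post (nothing here is measured, just arithmetic):

    # 32-bit / 33 MHz PCI: theoretical shared peak, before protocol overhead
    pci_peak_mb_s = 32 / 8 * 33.0               # ~132 MB/s
    # if the source and destination cards share the same bus, the data
    # crosses it twice (read in, write out), roughly halving the budget
    effective_budget_mb_s = pci_peak_mb_s / 2   # ~66 MB/s

    # observed rate from the original post: about 60 GB per hour
    observed_mb_s = 60 * 1000 / 3600.0          # ~17 MB/s

    print(pci_peak_mb_s, effective_budget_mb_s, observed_mb_s)

So even a shared legacy PCI bus should leave headroom well above the
observed ~17 MB/s; worth ruling out, but probably not the whole story.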
>
>
> On Thu, Oct 29, 2009 at 5:10 PM, James Taylor <James.Taylor at eastcobbgroup.com> wrote:
> Make that 15k RPM *SAS* drives...
>
> >>> "James Taylor" <James.Taylor at eastcobbgroup.com> 10/29/2009   
> 05:04 PM >>>
> I would suggest that your data paths are not going to be the
> bottleneck with SATA drives.
> I had a client running SATA for several years on an iSCSI   
> appliance, and one day we hit a brick wall with the mail  
> performance.  We spent weeks trying to tune the I/O paths to deal  
> with it, and when we looked at the actual SATA drive transfer  
> capacity, we realized the drives were the problem.
> We replaced the drives with 15k RPM SATA drives, and life has been  
> good ever since.
> Check your drive specs and see if that's where the limitation really  
> is.
> -jt
>
>
> James Taylor
> The East Cobb Group, Inc.
> 678-697-9420
> james.taylor at eastcobbgroup.com
> http://www.eastcobbgroup.com
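
One way to see what a drive actually delivers, rather than what the spec
sheet says, is a crude sequential-read timing.  A sketch, assuming a raw
device node such as /dev/sdb (a placeholder) or any large file; run it
read-only, and note that the page cache can inflate the number if you read
the same regular file twice:

    # seq_read_test.py -- crude sequential-read throughput check (read-only)
    import sys
    import time

    PATH = sys.argv[1] if len(sys.argv) > 1 else "/dev/sdb"  # placeholder
    CHUNK = 1024 * 1024              # read in 1 MB chunks
    TOTAL = 1024 * CHUNK             # stop after 1 GB

    read_bytes = 0
    start = time.time()
    with open(PATH, "rb") as f:
        while read_bytes < TOTAL:
            data = f.read(CHUNK)
            if not data:
                break
            read_bytes += len(data)
    elapsed = time.time() - start
    print("%.1f MB/s" % (read_bytes / elapsed / 1e6))

(hdparm -t does a similar raw read timing if you would rather not script it.)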
>
>
>
>
> >>> <mmillard1 at comcast.net> 10/29/2009  04:49 PM >>>
>
>
> I have a SUSE OES server running on an IBM 346 with 2 GB of RAM.
> I've installed two Addonics Multilane SATA cards connected with
> Multilane cables to external SATA enclosures housing 4 WD Caviar
> Green 1.5 TB drives.
>
>
>
> I have about 1.8 TB of data on one enclosure, which changes daily.
> My plan was to copy the 1.8 TB to the second enclosure daily and
> send the drives in that unit off site.
>
>
>
> My expectation was that having 4 independent SATA paths per unit
> would give me substantial performance when moving data between these
> units.
>
>
>
> I'm seeing a constant speed of about 60 gig per hour.  This is much
> slower than I expected.  Have any of you done anything like this?  Is
> this the kind of performance I should expect?  Hopefully some of you
> bright people can share some wisdom with me.
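
A quick timing estimate for the daily plan described above.  The ~80 MB/s
single-drive figure is only a rough assumption for a 5400-class SATA drive,
not a measurement:

    data_gb = 1.8 * 1000                 # ~1800 GB to move each day

    observed_gb_per_hour = 60.0
    hours_at_observed = data_gb / observed_gb_per_hour                  # ~30 hours

    # rough assumption: one 5400-class SATA drive sustains ~80 MB/s sequential
    assumed_drive_mb_s = 80.0
    hours_at_drive_speed = data_gb * 1000 / assumed_drive_mb_s / 3600   # ~6.3 hours

    print(hours_at_observed, hours_at_drive_speed)

At the observed rate the copy cannot finish inside a day, and it is running
far below what even one drive should sustain sequentially, so unless the
workload is mostly small random files, something other than the drives'
raw speed is probably the limit.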
