[ale] Ubuntu Linux Defrag EXT4

Pat Regan thehead at patshead.com
Mon Sep 13 19:47:45 EDT 2010


On Mon, 13 Sep 2010 18:28:42 -0400
Greg Freemyer <greg.freemyer at gmail.com> wrote:

> Pat, I clearly know too much about the e4defrag tool, so stop reading
> now if you don't want lots of detail.  The main reason I know it so
> well is I'm part of a project that is using the EXT4_IOC_MOVE_EXT
> ioctl for other purposes.  But I monitor the ext4 mailing list for
> discussion about that ioctl to see if anything pertinent to my project
> pops up.

You can never know too much!

> As you imply, ext4 is pretty good at keeping files defrag'ed in the
> first place, but if you have a file like a log file that slowly
> grows, or a sparse file like the virtual disk for a VM that is
> growing randomly at internal block ranges, I can see it happening.
> Especially if the partition is low on disk space.

I don't think there is any reason at all for large files to be
perfectly contiguous.  As long as each fragment is at least several
megabytes (or maybe tens of megabytes these days?), you aren't going
to impact performance in any noticeable way.
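
Back of the envelope, assuming roughly 2010-era SATA numbers (call it
a 10 ms average seek and 100 MB/s sequential throughput; both figures
are assumptions, not measurements):

    8 MB fragment:   transfer = 8 / 100  =  80 ms, seek = 10 ms -> ~12% overhead
    64 MB fragment:  transfer = 64 / 100 = 640 ms, seek = 10 ms -> ~1.5% overhead

Once fragments reach tens of megabytes, the seeks disappear into the
transfer time.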

There aren't very many workloads that can severely fragment ext2/3/4,
even on a nearly full disk, as long as you don't lower the default 5%
reserved block count.  That's 50 GB of juggling space on a 1 TB
drive, which is a boatload :)
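
If you're curious where yours is set, tune2fs will show and restore
it (the device name here is just an example):

    # report the reserved block count, among other things
    tune2fs -l /dev/sda1 | grep -i 'reserved block'

    # put it back to the 5% default if it has been lowered
    tune2fs -m 5 /dev/sda1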

> FYI: There are patches under discussion for both the kernel and user
> space portions of ext4 / e4defrag to group associated files together
> on the disk.  One proposed implementation was to feed e4defrag a
> group of files that you want laid out sequentially.  It would then
> fallocate a single large file big enough to provide sequential data
> blocks for all the files, one after another, then use
> EXT4_IOC_MOVE_EXT to migrate those data blocks out of the single
> large donor file into the smaller individual files.
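
For anyone curious, here's roughly what driving that ioctl from user
space looks like.  This is just a sketch: the struct mirrors the
move_extent definition in fs/ext4/ext4.h, but the file names and
block counts are made up, and a real tool also has to cope with
partial moves and check that both files sit on the same ext4
filesystem.

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/types.h>

    /* mirrors struct move_extent in fs/ext4/ext4.h */
    struct move_extent {
            __u32 reserved;     /* must be zero */
            __u32 donor_fd;     /* fd of the donor file */
            __u64 orig_start;   /* logical start block in the target */
            __u64 donor_start;  /* logical start block in the donor */
            __u64 len;          /* number of blocks to move */
            __u64 moved_len;    /* set by the kernel: blocks moved */
    };

    #define EXT4_IOC_MOVE_EXT _IOWR('f', 15, struct move_extent)

    int main(void)
    {
            /* hypothetical names; donor.dat is assumed to have been
             * preallocated contiguously, e.g. with fallocate() */
            int orig  = open("fragmented.dat", O_RDWR);
            int donor = open("donor.dat", O_WRONLY);
            struct move_extent me = {0};

            if (orig < 0 || donor < 0) {
                    perror("open");
                    return 1;
            }

            me.donor_fd    = donor;
            me.orig_start  = 0;     /* from the first logical block */
            me.donor_start = 0;
            me.len         = 2048;  /* 8 MB of 4 KB blocks, made up */

            /* swap the donor's contiguous blocks into the target;
             * the grouping proposal would repeat this per file,
             * advancing donor_start through one big donor */
            if (ioctl(orig, EXT4_IOC_MOVE_EXT, &me) < 0) {
                    perror("EXT4_IOC_MOVE_EXT");
                    return 1;
            }
            printf("moved %llu blocks\n",
                   (unsigned long long)me.moved_len);

            close(donor);
            close(orig);
            return 0;
    }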

Reordering files is very smart if they are small files (1-2 MB or
less?).  I'm glad it sounds like it works more intelligently than
Microsoft's NTFS defragger.  Lumping every single one of my files
together at the front of the disk is not only a waste of time, it
also leads to even more fragmentation.  I don't like that
self-reinforcing cycle :)

> Thus if KDE startup was your big concern and you knew the order in
> which the executables and libraries would be loaded, you could lay
> them all out sequentially on disk.  For some reason I don't
> understand, that concept has not yet gotten a positive response
> from the kernel defrag devel guys.

I'm running btrfs on my laptop now.  It is the first time I might
have any reason to think about fragmentation, since copy-on-write
tends to promote it.  Is anyone else playing with btrfs?  I'm only
running it on an SSD, so I don't expect to notice, but I am very
curious how other people are making out.
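
If you want to check whether COW is actually chopping your files up,
filefrag from e2fsprogs reports the extent count for a file (it
should work on btrfs via the FIEMAP ioctl; the path is just an
example):

    filefrag -v /var/log/syslog

And if a file does get bad, I believe btrfs-progs has a
"btrfs filesystem defragment <file>" command to rewrite it.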

Pat

