[ale] Bacula or Rsync + Windows

George Allen glallen01 at gmail.com
Wed Oct 6 11:38:58 EDT 2010


Well - if I were going to go with a distributed filesystem setup, I'd
want to use AFS: http://en.wikipedia.org/wiki/Andrew_File_System

Duke had a setup of this running on Solaris, so that your user
directory would mount on a Windows, OS X, Linux, or Solaris box alike.
It kept periodic snapshots, so you could grab the day-, week-, or
month-old version of a file (or something similar - it was a while
back), and it could also cache locally - so you weren't constantly
pulling data over the wire the way Samba does.
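
If you haven't used it: the nightly snapshot typically appears as a
read-only mount inside your home directory, so recovering yesterday's
copy of a file is just a copy operation. A rough sketch in Python - the
"OldFiles" mount name is a common site convention, not a given, and
every path below is made up:

    # Hypothetical sketch: recover yesterday's copy of a file from an
    # AFS backup-volume mount.  All names below are examples only.
    import shutil
    from pathlib import Path

    home = Path.home()
    snapshot = home / "OldFiles"        # read-only mount of last night's backup volume
    target = "projects/report.txt"      # file to recover (made-up name)

    if (snapshot / target).exists():
        # copy2 preserves timestamps, handy when comparing versions
        shutil.copy2(snapshot / target, home / (Path(target).name + ".restored"))
        print("recovered yesterday's copy")
    else:
        print("no snapshot copy found")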

That was the best system I've seen for user profiles. It's just too
'different' for me to ever get it approved in this environment.

On Wed, Oct 6, 2010 at 11:23 AM, Michael B. Trausch <mike at trausch.us> wrote:
> On Wed, 2010-10-06 at 11:00 -0400, George Allen wrote:
>> We have several dozen sites running off T1s, and mobile users who are
>> sometimes offline or on VPN.
>> So - hosting everything over the WAN isn't really an option.
>>
>> I'd prefer something like the Windows equivalent of a cron job: it
>> wakes up, determines whether it's on the home network, tests
>> bandwidth, calls rsync, and reports success/failure.
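
Something like this is what I have in mind - a rough sketch in Python,
assuming rsync is available on the client (cwRsync or Cygwin); every
hostname, path, and number below is made up:

    import socket
    import subprocess
    import sys
    import time

    # Illustrative names only - swap in your own probe host, paths, and log.
    PROBE_HOST = "fileserver.corp.example"   # reachable only on the LAN/VPN
    SRC = "C:/Users/jdoe/Documents/"
    DEST = "backup@fileserver.corp.example::profiles/jdoe/"
    LOG = "C:/sync/rsync.log"

    def report(msg):
        """Append a timestamped success/failure line to the log."""
        with open(LOG, "a") as f:
            f.write("%s %s\n" % (time.ctime(), msg))

    def on_home_network():
        """Crude test: the internal rsync daemon answers only on the LAN/VPN."""
        try:
            socket.create_connection((PROBE_HOST, 873), timeout=5).close()
            return True
        except OSError:
            return False

    def main():
        if not on_home_network():
            report("skipped: not on home network")
            return 0
        start = time.time()
        # --bwlimit is in KBytes/sec; 64 leaves most of a T1 for everyone else.
        rc = subprocess.call(["rsync", "-az", "--bwlimit=64", SRC, DEST])
        if rc == 0:
            report("ok in %.0fs" % (time.time() - start))
        else:
            report("failed, rc=%d" % rc)
        return rc

    if __name__ == "__main__":
        sys.exit(main())

Task Scheduler stands in for cron to run it hourly - something like
schtasks /create /sc hourly /tn ProfileSync /tr "python C:\sync\sync.py"
(again, made-up names).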
>>
>> Now - if I had my way, I'd put them all on X terminals and teach them
>> Unix and ssh, but that's not an option either.
>
> Hrm.  Well, bandwidth limitations are the bane of us all for such
> things, I think.
>
> I have been looking into options for a distributed filesystem share
> that can cope with slow WAN links.  So far, though, I haven't found
> anything _yet_.
>
> Ideally, I think that btrfs + the new network filesystem built on top of
> it, combined with some form of conflict resolution, would be the way to
> go.  Unfortunately, while btrfs is stable enough for personal use and
> maybe even use in a business (I have only started testing it again with
> the 2.6.35 kernel series; I encountered problems with 2.6.33/2.6.34),
> the network filesystem hasn't gone anywhere AFAIK.
>
> Imagine a network filesystem designed to scale to a terabyte or so of
> data, kept in near-real-time sync across many servers, with the only
> assurance being that at least 384 Kbps is available between the
> servers.  Obviously, in such a situation, a new server would have to
> be "seeded" with the data before going on-net, but I would expect such
> a filesystem to look very much like a database engine: it would have a
> longer-lasting "journal", and you could bring up a copy of the data
> that knew how old it was and just replay the updates before joining
> (or rejoining) the filesystem network.
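
The replay part is easy to model, even if building it into a real
filesystem is the hard part. A toy of the "long journal, catch up
before rejoining" idea, in Python - everything here is invented for
illustration, and a real journal would carry block-level records, not
strings:

    # Append-only journal on the "master"; each replica remembers the
    # highest sequence number it has applied and replays only the gap.
    class Journal:
        def __init__(self):
            self.entries = []                 # (seqno, op) pairs, append-only

        def append(self, op):
            seq = len(self.entries) + 1
            self.entries.append((seq, op))
            return seq

        def since(self, seqno):
            """Everything a returning replica has missed."""
            return [e for e in self.entries if e[0] > seqno]

    class Replica:
        def __init__(self):
            self.applied = 0                  # highest seqno applied so far
            self.state = {}

        def replay(self, journal):
            for seq, (path, data) in journal.since(self.applied):
                self.state[path] = data       # apply the update
                self.applied = seq

    master = Journal()
    master.append(("/etc/motd", "hello"))
    node = Replica()
    node.replay(master)                       # "seed" before going on-net
    master.append(("/etc/motd", "hello again"))
    node.replay(master)                       # rejoin: only the delta replays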
>
> I have to wonder what it would take to make that possible given
> current filesystems, while still enforcing POSIX filesystem and
> file-locking semantics.  It would probably take someone with far
> greater knowledge of math than I have, because it would have to
> exploit the mathematical properties of binary data in order to effect
> efficient, near-real-time compression and updates across an entire
> net.  It'd have to be peer-to-peer (that is, decentralized), perhaps
> with appointed privileged machines that act as seeds for the P2P
> filesystem network.
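
Some of that math already exists, at least for the wire-efficiency
part: rsync's weak rolling checksum slides a window one byte at a time
in constant work, so matching blocks can be found without recomputing a
sum over every window from scratch. A sketch of the textbook algorithm
(not anyone's production code):

    # Weak rolling checksum in the rsync style: a is a plain byte sum,
    # b weights earlier bytes more heavily; both update in O(1) per slide.
    def weak_checksum(block):
        a = sum(block) & 0xffff
        b = sum((len(block) - i) * x for i, x in enumerate(block)) & 0xffff
        return a, b

    def roll(a, b, out_byte, in_byte, blocklen):
        """Slide the window right by one byte in constant time."""
        a = (a - out_byte + in_byte) & 0xffff
        b = (b - blocklen * out_byte + a) & 0xffff
        return a, b

    data = b"the quick brown fox jumps over the lazy dog"
    n = 8
    a, b = weak_checksum(data[:n])
    for i in range(1, len(data) - n + 1):
        a, b = roll(a, b, data[i - 1], data[i - 1 + n], n)
        assert (a, b) == weak_checksum(data[i:i + n])   # rolled == recomputed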
>
> I guess the real question is:  What would it take to create a
> multiterabyte filesystem that scaled to the whole of the Internet,
> including low and very low bandwidth connections?
>
>        --- Mike
>
> _______________________________________________
> Ale mailing list
> Ale at ale.org
> http://mail.ale.org/mailman/listinfo/ale
> See JOBS, ANNOUNCE and SCHOOLS lists at
> http://mail.ale.org/mailman/listinfo
>


