[ale] Two offices, one data pool
atllinuxenthinfo at c3energy.com
Thu Feb 17 14:24:24 EST 2011
Here are some random, possibly irrelevant thoughts.
* Perhaps there is a way to get your wish of itty-bitty
level edits. But it would probably require a difficult transitional
process. Would it be possible to put the most commonly used documents,
not necessarily spreadsheets, into an SQL database? Each sentence, or
each paragraph, or each page could be a record. Different people could
edit those little components at the same time. It could take the
structure of a linked list. A reporting module could provide access to
the "document" as one contiguous presentation, with the latest
committed changes. You could have version tracking and the ability to
revert to historical versions. The reporting module could also build
PDF files or Word documents for publishing to other areas, etc.
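To make that concrete, here's a rough sketch in Python/SQLite of what I mean. Every table name, column name, and sentence here is made up for illustration; this is not a design, just the shape of the idea (one row per paragraph, linked-list pointers, a version counter, and a "reporting" function that reassembles the document):

```python
import sqlite3

# Each paragraph is one row; next_id chains rows into a linked list,
# so different people can edit different rows at the same time.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE paragraph (
        id      INTEGER PRIMARY KEY,
        doc_id  INTEGER NOT NULL,
        next_id INTEGER,                      -- pointer to the next paragraph
        body    TEXT NOT NULL,
        version INTEGER NOT NULL DEFAULT 1    -- bumped on every edit
    )""")

# Three paragraphs of one document, chained via next_id.
conn.execute("INSERT INTO paragraph (id, doc_id, next_id, body) "
             "VALUES (1, 1, 2, 'First.')")
conn.execute("INSERT INTO paragraph (id, doc_id, next_id, body) "
             "VALUES (2, 1, 3, 'Second.')")
conn.execute("INSERT INTO paragraph (id, doc_id, next_id, body) "
             "VALUES (3, 1, NULL, 'Third.')")

# One user edits paragraph 2; only that single row is touched.
conn.execute("UPDATE paragraph SET body = 'Second, revised.', "
             "version = version + 1 WHERE id = 2")

def assemble(conn, doc_id, start_id):
    """Walk the linked list and return the latest committed document."""
    parts, pid = [], start_id
    while pid is not None:
        body, pid = conn.execute(
            "SELECT body, next_id FROM paragraph WHERE doc_id = ? AND id = ?",
            (doc_id, pid)).fetchone()
        parts.append(body)
    return " ".join(parts)

print(assemble(conn, 1, 1))   # -> First. Second, revised. Third.
```

Version history and reverting would just mean keeping the old rows around instead of updating in place, but that's beyond this sketch.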
* The publishing industry has GOT to have something like this for
collaborative editing.
* Perhaps a CMS like Drupal could be used for something like this, or
some wiki software.
* Wasn't Google Wave (now deprecated) supposed to allow simultaneous
editing by multiple people?
* Someone mentioned that Subversion might be helpful. Git is another
popular system for version control. http://git-scm.com/
* I don't know how much control you have over the data pipe to the
remote site, but, in the historical case I mentioned, I was able to
reduce the data traffic on the leased line by about 80% by enabling
selective filters on the gateways. That way, I got much better use out
of the pipe. You might want to consider on-the-fly data compression on
the endpoints, if not already active. Regardless, 384 Kbps is REALLY
slow.
* Could you bring more bandwidth online in parallel by using something
like 3G / 4G wireless broadband from a cellular company, or something
like CLEAR, etc?
On 02/17/2011 11:53 AM, Michael B. Trausch wrote:
> On Thu, 2011-02-17 at 11:11 -0500, Ron Frazier wrote:
>> This may be a stupid question, but, if you establish a VPN, couldn't
>> the remote office directly access the same database as the home base,
>> with all the normal locking mechanisms, etc.?
> That would be a half-way decent thing to do, with one exception: it's
> very bandwidth intensive, and the available upstream bandwidth in the
> local office is slightly less than 384 Kbps---in other words, slightly less
> than 48 KB/sec.
> The local office runs services (including Internet mail and XMPP) which
> makes it difficult to share the bandwidth for this sort of thing. Let's
> say that someone wants to open a spreadsheet that is 1 MB in the remote
> office, and it's stored in the local office (note that I'm using "local"
> to refer to Atlanta, and "remote" to refer to the office that is 500
> miles away). It would take them 21 seconds (best-case scenario) to open
> the file. Every time they updated the file, that would trigger
> round-trips between the remote and the local offices, as well,
> significantly slowing things down.
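Your 21-second figure checks out, for anyone following along (numbers straight from your message):

```python
# 384 kbit/s upstream = 48 kB/s, so a 1 MB file takes about 21 seconds
# to pull across the link, best case, before any SMB round-trip overhead.
link_bps   = 384_000        # 384 kbit/s upstream
file_bytes = 1_000_000      # the 1 MB spreadsheet in the example

seconds = file_bytes * 8 / link_bps
print(round(seconds, 1))    # -> 20.8
```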
>> I dealt with a situation like that once while working with Delta Air
>> Lines. We had a remote office with a (very slow) leased line and
>> network bridges (more accurately gateways) at each end of the
>> connection. The remote site connected directly to a Clipper database
>> just as though they were sitting at headquarters. All the locking
>> stuff worked fine.
> The big difference being that instead of a database, we're talking about
> remote file access. Database commands tend to be relatively small, and
> they can (usually, with something that is well-designed) provide answers
> that are relatively small, too. In this situation, however, we're
> talking about "stupid" (that is, neither "intelligent" nor efficient)
> client software. It makes the assumption that the filesystem is local,
> and that all access to the files it is using is inexpensive.
> Oh, would that one could do something like have an office suite that
> would work with files in tiny itty-bitty chunks, only needing to access
> the part of the file that is being displayed and/or modified, and
> sending changes back and forth over the wire using some nifty efficient
> application layer protocol. Would that that were the case...
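For what it's worth, the "itty-bitty chunks" idea can be sketched: hash fixed-size chunks of the file and ship only the chunks whose hashes changed. This is a toy version of the idea behind rsync-style delta transfer (minus the rolling checksum that handles insertions); the chunk size is arbitrary:

```python
import hashlib

CHUNK = 4096  # arbitrary chunk size for this sketch

def chunk_digests(data: bytes):
    """Hash each fixed-size chunk of the file."""
    return [hashlib.sha1(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)]

def changed_chunks(old: bytes, new: bytes):
    """Indices of chunks that would actually need to cross the wire."""
    old_d, new_d = chunk_digests(old), chunk_digests(new)
    return [i for i, d in enumerate(new_d)
            if i >= len(old_d) or old_d[i] != d]

old = b"A" * 16384
new = b"A" * 8192 + b"B" * 4096 + b"A" * 4096   # one 4 kB region modified
print(changed_chunks(old, new))                  # -> [2]
```

So an edit to one cell of a 1 MB spreadsheet would cost one 4 kB chunk plus a list of hashes, instead of the whole megabyte. Of course, that only works if the application (or the filesystem protocol) cooperates, which is exactly your complaint about "stupid" clients.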
>> They could also access shared word processing documents, etc. just
>> like they were at headquarters. I had to spend a whole day once
>> tweaking the gateway not to forward superfluous traffic to the remote
>> site because performance was abysmal.
> The current infrastructure uses SMB/CIFS in an NT4 style domain. Reason
> being that Samba 4 had not implemented enough functionality to do an
> Active Directory style domain, though that would have been much
> preferred. NT4 domains have two hard requirements: IPv4 (for the
> ability to do broadcast, when using NetBIOS over IP) and the ability to
> be really flipping chatty with each other. It's possible to have
> NT4-style domains over multiple subnets, but then you have to have WINS
> servers that can handle name resolution, and you still have all the
> other problems that come with it, including periods of up to 45 minutes
> to an hour where the browse lists are completely unstable in the event
> of almost any change on the network. Very much displeasing.
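For what it's worth, the WINS piece of that is only a couple of lines of smb.conf on the Samba box acting as the WINS server (option names are from the smb.conf man page; whether this helps your browse-list instability is another matter):

```
[global]
   wins support = yes
   name resolve order = wins host bcast
```

Member servers then point at it with `wins support = no` and `wins server = <address of the WINS box>` instead.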
> I'm evaluating Samba 4 again, and it's looking a _lot_ better than it
> was previously. I still haven't figured out how it does roaming
> profiles, and I still haven't figured out home drives/directories in
> Samba 4, but I'm sure that it supports both of them from what I've been
> reading.
>> Also, I had to store a local copy of my Clipper database app and load
>> it from the local hard drive at the remote site rather than retrieving
>> it over the leased line when someone started it. I did similar things
>> with the executables for common office applications. So, the remote
>> site started up executables locally, but accessed data files from the
>> file share at headquarters. It never did work great, but it was
>> acceptable. With a VPN, I was thinking you could do something
>> similar, as the VPN would act like a bridge. That would eliminate
>> your concurrency problems. Maybe something like Hamachi might work.
>> Just a thought.
> A routed VPN would be a good idea, and in fact I am going to set one up.
> But the ultimate goal is to reduce the latency required in common
> scenarios where people work with the same files with temporal relation.
> There are 200 GB of data over approximately 150,000 files. Probably
> about 40,000 of those files are regularly used.
> --- Mike
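To put that 200 GB figure in perspective against the 384 Kbps upstream (numbers from your messages; this is why whole-file transfer can never be the answer here):

```python
# Pushing the entire data set through the upstream link, ignoring all
# protocol overhead, would take on the order of weeks.
link_bps    = 384_000          # 384 kbit/s upstream
total_bytes = 200 * 10**9      # 200 GB of data

days = total_bytes * 8 / link_bps / 86400
print(round(days))             # -> 48
```

So realistically, only changed data (or a much smaller working set than the full 150,000 files) can ever cross that wire.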
(PS - If you email me and don't get a quick response, you might want to
call on the phone. I get about 300 emails per day from alternate energy
mailing lists and such. I don't always see new messages very quickly.)
770-205-9422 (O) Leave a message.
linuxdude AT c3energy.com