Hi all!

I've an issue that has me stumped. I have a proprietary application running on a RHEL box that exports large (200GB+) blocks of data, which then get imported into another closed-source app on a second RHEL server for further processing. This is no big deal when both servers are on the same LAN: one server mounts an NFS export off the other's disk and they work away. It's manual, but it works.
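
For what it's worth, the current setup is nothing fancier than a plain NFS mount on the importing box, along these lines (hostname and paths below are made up):

  # on the importing server; server1 and the paths are placeholders
  mount -t nfs server1:/export/dump /mnt/dump

The importing app then just reads straight out of /mnt/dump.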

What's causing me problems is that the second server is being moved to another location. Directly NFS mounting a remote disk over a VPN is a non-runner because of NFS's poor performance over high-latency links, and the round trip on this link can hit 1000ms at times. Exporting to local disk and then FTPing the data to the second site is a non-runner too, because it takes far too long (an hour @100Mbps to dump the data to local disk, and then another hour @100Mbps to FTP it across).

Ideally I'd like to export to local disk here and have *something* copy the data to the remote site as it's written, rather than waiting for the dump to finish before starting the transfer, i.e. some form of near-real-time replication. The dump consists of lots of itty-bitty files, so just running rsync over and over won't do the job. Google turns up some hackery involving FAM and Perl, but the implementation is old, and I'm not sure it'll cope with 4KB-50KB files getting spat out at 100Mbps. I could also spend money on commercial replication software, but I have zero budget available. Has anyone got practical experience implementing FOSS replication they can share? A sketch of what I'm picturing is below.
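
To make it concrete, this is roughly the shape of thing I mean: watch the dump directory with inotify and ship each file the moment the writer closes it. This is an untested sketch, not something I'm running; it assumes inotify-tools (from EPEL on RHEL) on the sender and passwordless SSH to the far side, and SRC/DEST are placeholders:

  #!/bin/sh
  # Untested sketch. Assumes inotify-tools on the sender and passwordless
  # SSH to the receiving box. SRC and DEST below are made-up placeholders.
  SRC=/data/dump
  DEST=remote.example.com:/data/dump

  cd "$SRC" || exit 1

  # -m: keep watching forever; close_write/moved_to: only react once a
  # file is fully written, so we never ship a half-dumped file.
  inotifywait -m -r -e close_write -e moved_to --format '%w%f' . |
  while IFS= read -r file; do
      # --relative recreates the subdirectory layout on the far side.
      rsync -a --relative "$file" "$DEST"
  done

I suspect one rsync/SSH session per tiny file would crawl over a 1000ms round trip, so something smarter (a persistent connection via an rsync daemon on the far end, or batching several files per transfer) would probably be needed, and I gather inotify can drop events if its queue overflows under load. Hence the question.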

g