Tim Watts
2014-06-21 10:43:21 UTC
There are many clustered filesystems for Linux - most seem to have HPC
clustering or failover in mind and assume there is solid networking
between the hosts.
I'm after one that would suit multiple-client "ordinary/home" usage
with intermittent connectivity.
Right now, I have a central NFS server at home which is backed up
properly. I work mostly on a laptop (which is the way everyone in my
family is going, we have no "desktop" - just a monitor and keyboard for
docking to). I occasionally sync back to base with unison, which is a
great tool.
I'm not looking for a cachefs-type thing that depends on the network
being there - I'm after a full-on replicated (at the file level, not the
block[1]) filesystem, preferably with no concept of a master (unison
handles this quite well).
[1] Replication will always have some clashes. I'd rather have good
files with the possibility one file is not the right version, than have
a buggered FS.
So there seem to be a couple of directions:
1) Run unison as root from a script with a carefully chosen config file
per FS area. Do some DIY so the script runs when (say) at-home WiFi is
detected, so as to avoid syncing over a mobile or work link.
Email errors to me for manual fixing (unison generally "does the right
thing" and baulks before doing something that is not provably correct).
Or
2) Find a more elegant solution that works at the kernel or daemon level.
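For direction 1, a minimal sketch of what I have in mind - the SSID
"HomeNet", the unison profile name "homesync", and the mail address are
all hypothetical placeholders, and the WiFi check assumes iwgetid from
wireless-tools is available:

```shell
#!/bin/sh
# Run from cron (or a NetworkManager dispatcher hook): sync with unison
# only when associated with the home WiFi, so mobile/work links are
# never used. HOME_SSID, PROFILE and ADMIN are placeholders.

HOME_SSID="HomeNet"
PROFILE="homesync"
ADMIN="tim@example.invalid"

# should_sync SSID -> true only on the home network
should_sync() {
    [ "$1" = "$HOME_SSID" ]
}

# Current SSID; empty when there are no wireless tools or no association.
ssid=$(iwgetid -r 2>/dev/null || true)

if should_sync "$ssid"; then
    # -batch: no prompts; unison still baulks on anything not provably
    # correct, which lands in the log and gets mailed for manual fixing.
    unison "$PROFILE" -batch >/tmp/unison.log 2>&1 \
        || mail -s "unison sync errors" "$ADMIN" </tmp/unison.log
else
    echo "Not on home WiFi (ssid='$ssid'); skipping sync."
fi
```

The SSID gate is crude but avoids the main failure mode (a large sync
kicking off over a metered or slow link); per-FS-area config files would
just be separate unison profiles invoked the same way.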
So:
1 - Anyone done this and did it work out?
2 - Any FSs worth looking at that would behave well in a
WAN-with-intermittent-connectivity context?
Cheers :)
Tim