Discussion:
[Gluster-users] HELP : Files lost after DHT expansion
eagleeyes
2009-07-04 00:03:34 UTC
Permalink
When I updated to gluster 2.0.3, after DHT expansion, duplicate directories appeared in the gluster directory. Why?

client configuration:
volume dht
  type cluster/dht
  option lookup-unhashed yes
  option min-free-disk 10%
  subvolumes client1 client2 client3 client4 client5 client6 client7 client8
  #subvolumes client1 client2 client3 client4
end-volume
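
[Editor's note: a hedged sketch of one way to investigate such duplicates, assuming direct access to the brick export directories from the server configs later in this thread (/data/data1 through /data/data8); "somedir" is a placeholder for a directory that shows up twice on the mountpoint. trusted.glusterfs.dht is the extended attribute in which the DHT translator records each directory's hash-range layout.

# Run on the server; compare what each brick holds for the duplicated directory.
for b in /data/data1 /data/data2 /data/data3 /data/data4 \
         /data/data5 /data/data6 /data/data7 /data/data8; do
    echo "== $b =="
    ls -ld "$b/somedir" 2>/dev/null                                    # present on this brick?
    getfattr -n trusted.glusterfs.dht -e hex "$b/somedir" 2>/dev/null  # layout range, if assigned
done

A directory present on every brick is normal for DHT; a brick where the directory exists but carries no layout xattr is worth a closer look after an expansion.]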


2009-07-02



eagleeyes



From: Anand Babu Periasamy
Sent: 2009-07-01 13:41:20
To: Anand Avati
Cc: eagleeyes
Subject: Re: HELP : Files lost after DHT expansion
2.0.3 is scheduled for release tomorrow evening PST.
--
Anand Babu Periasamy
GPG Key ID: 0x62E15A31
Blog [http://unlocksmith.org]
GlusterFS [http://www.gluster.org]
GNU/Linux [http://www.gnu.org]
Anand Avati wrote:
>> 
>> Sorry, it is an intranet. Just as you say, the data is just
>> invisible on the mountpoint; when I specify the file name, it becomes
>> visible. But in a production environment, this phenomenon will cause
>> issues for applications.
>>
>
> This problem is solved in 2.0.3. Please upgrade and you should be able to see your files again.
>
> Avati
Sachidananda
2009-07-04 03:39:00 UTC
Permalink
Hi,

eagleeyes wrote:
> When I updated to gluster 2.0.3, after DHT expansion, duplicate directories
> appeared in the gluster directory. Why?
>
> client configuration:
> volume dht
>   type cluster/dht
>   option lookup-unhashed yes
>   option min-free-disk 10%
>   subvolumes client1 client2 client3 client4 client5 client6 client7 client8
>   #subvolumes client1 client2 client3 client4
> end-volume
>
>

Can you please send us your server/client volume files?

--
Sachidananda.
eagleeyes
2009-07-06 01:22:20 UTC
Permalink
gfs1:~ # cat /etc/glusterfs/glusterfsd-sever.vol
volume posix1
  type storage/posix              # POSIX FS translator
  option directory /data/data1    # Export this directory
end-volume
volume posix2
  type storage/posix              # POSIX FS translator
  option directory /data/data2    # Export this directory
end-volume
volume posix3
  type storage/posix              # POSIX FS translator
  option directory /data/data3    # Export this directory
end-volume
volume posix4
  type storage/posix              # POSIX FS translator
  option directory /data/data4    # Export this directory
end-volume
volume posix5
  type storage/posix              # POSIX FS translator
  option directory /data/data5    # Export this directory
end-volume
volume posix6
  type storage/posix              # POSIX FS translator
  option directory /data/data6    # Export this directory
end-volume
volume posix7
  type storage/posix              # POSIX FS translator
  option directory /data/data7    # Export this directory
end-volume
volume posix8
  type storage/posix              # POSIX FS translator
  option directory /data/data8    # Export this directory
end-volume
volume brick1
  type features/posix-locks
  option mandatory-locks on       # enables mandatory locking on all files
  subvolumes posix1
end-volume
volume brick2
  type features/posix-locks
  option mandatory-locks on       # enables mandatory locking on all files
  subvolumes posix2
end-volume
volume brick3
  type features/posix-locks
  option mandatory-locks on       # enables mandatory locking on all files
  subvolumes posix3
end-volume
volume brick4
  type features/posix-locks
  option mandatory-locks on       # enables mandatory locking on all files
  subvolumes posix4
end-volume
### Add network serving capability to above brick.
volume server
  type protocol/server
  option transport-type tcp
  option transport.socket.bind-address 172.20.92.240   # Default is to listen on all interfaces
  option transport.socket.listen-port 6996             # Default is 6996
  subvolumes brick1 brick2 brick3 brick4
  option auth.addr.brick1.allow *   # Allow access to "brick" volume
  option auth.addr.brick2.allow *   # Allow access to "brick" volume
  option auth.addr.brick3.allow *   # Allow access to "brick" volume
  option auth.addr.brick4.allow *   # Allow access to "brick" volume
end-volume


2009-07-06



eagleeyes



From: Sachidananda
Sent: 2009-07-04 11:39:03
To: eagleeyes
Cc: gluster-users
Subject: Re: [Gluster-users] HELP : Files lost after DHT expansion

[Sachidananda's reply quoted in full; snipped]
eagleeyes
2009-07-06 01:25:34 UTC
Permalink
The server configuration file is:

gfs1:~ # cat /etc/glusterfs/glusterfsd-sever.vol
volume posix1
  type storage/posix              # POSIX FS translator
  option directory /data/data1    # Export this directory
end-volume
volume posix2
  type storage/posix              # POSIX FS translator
  option directory /data/data2    # Export this directory
end-volume
volume posix3
  type storage/posix              # POSIX FS translator
  option directory /data/data3    # Export this directory
end-volume
volume posix4
  type storage/posix              # POSIX FS translator
  option directory /data/data4    # Export this directory
end-volume
volume posix5
  type storage/posix              # POSIX FS translator
  option directory /data/data5    # Export this directory
end-volume
volume posix6
  type storage/posix              # POSIX FS translator
  option directory /data/data6    # Export this directory
end-volume
volume posix7
  type storage/posix              # POSIX FS translator
  option directory /data/data7    # Export this directory
end-volume
volume posix8
  type storage/posix              # POSIX FS translator
  option directory /data/data8    # Export this directory
end-volume
volume brick1
  type features/posix-locks
  option mandatory-locks on       # enables mandatory locking on all files
  subvolumes posix1
end-volume
volume brick2
  type features/posix-locks
  option mandatory-locks on       # enables mandatory locking on all files
  subvolumes posix2
end-volume
volume brick3
  type features/posix-locks
  option mandatory-locks on       # enables mandatory locking on all files
  subvolumes posix3
end-volume
volume brick4
  type features/posix-locks
  option mandatory-locks on       # enables mandatory locking on all files
  subvolumes posix4
end-volume
volume brick5
  type features/posix-locks
  option mandatory-locks on       # enables mandatory locking on all files
  subvolumes posix5
end-volume
volume brick6
  type features/posix-locks
  option mandatory-locks on       # enables mandatory locking on all files
  subvolumes posix6
end-volume
volume brick7
  type features/posix-locks
  option mandatory-locks on       # enables mandatory locking on all files
  subvolumes posix7
end-volume
volume brick8
  type features/posix-locks
  option mandatory-locks on       # enables mandatory locking on all files
  subvolumes posix8
end-volume
### Add network serving capability to above brick.
volume server
  type protocol/server
  option transport-type tcp
  option transport.socket.bind-address 172.20.92.240   # Default is to listen on all interfaces
  option transport.socket.listen-port 6996             # Default is 6996
  subvolumes brick1 brick2 brick3 brick4 brick5 brick6 brick7 brick8
  option auth.addr.brick1.allow *   # Allow access to "brick" volume
  option auth.addr.brick2.allow *   # Allow access to "brick" volume
  option auth.addr.brick3.allow *   # Allow access to "brick" volume
  option auth.addr.brick4.allow *   # Allow access to "brick" volume
  option auth.addr.brick5.allow *   # Allow access to "brick" volume
  option auth.addr.brick6.allow *   # Allow access to "brick" volume
  option auth.addr.brick7.allow *   # Allow access to "brick" volume
  option auth.addr.brick8.allow *   # Allow access to "brick" volume
end-volume


2009-07-06



eagleeyes



From: Sachidananda
Sent: 2009-07-04 11:39:03
To: eagleeyes
Cc: gluster-users
Subject: Re: [Gluster-users] HELP : Files lost after DHT expansion
[Sachidananda's reply quoted in full; snipped]
eagleeyes
2009-07-06 03:01:57 UTC
Permalink
HI
I use gluster 2.0.3rc1 with FUSE 2.8 on kernel 2.6.30 (SUSE Linux Enterprise Server 10 SP1 with kernel 2.6.30). The mount message was:

/dev/hda4 on /data type reiserfs (rw,user_xattr)
glusterfs-client.vol.dht on /home type fuse.glusterfs (rw,allow_other,default_permissions,max_read=131072)
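
[Editor's note: for context, a hedged sketch of how a 2.0.x client volfile like this one is typically mounted; the volfile path and mountpoint come from the mount output above, while the log path is an assumption (adjust the -l argument to taste).

# Mount directly with the glusterfs client binary...
glusterfs -f /etc/glusterfs/glusterfs-client.vol.dht -l /var/log/glusterfs/home.log /home
# ...or via mount(8), which produces the fuse.glusterfs entry shown above:
mount -t glusterfs /etc/glusterfs/glusterfs-client.vol.dht /home

The client log named with -l is where backtraces like the one quoted below end up.]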



There was an error when I ran "touch 111" in the gluster directory; the error was:
/home: Transport endpoint is not connected

pending frames:
patchset: e0db4ff890b591a58332994e37ce6db2bf430213
signal received: 11
configuration details:argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 2.0.3rc1
[0xffffe400]
/lib/glusterfs/2.0.3rc1/xlator/mount/fuse.so[0xb75c6288]
/lib/glusterfs/2.0.3rc1/xlator/performance/write-behind.so(wb_create_cbk+0xa7)[0xb75ccad7]
/lib/glusterfs/2.0.3rc1/xlator/performance/io-cache.so(ioc_create_cbk+0xde)[0xb7fbe8ae]
/lib/glusterfs/2.0.3rc1/xlator/performance/read-ahead.so(ra_create_cbk+0x167)[0xb7fc78b7]
/lib/glusterfs/2.0.3rc1/xlator/cluster/dht.so(dht_create_cbk+0xf7)[0xb75e25b7]
/lib/glusterfs/2.0.3rc1/xlator/protocol/client.so(client_create_cbk+0x2ad)[0xb76004ad]
/lib/glusterfs/2.0.3rc1/xlator/protocol/client.so(protocol_client_interpret+0x1ef)[0xb75ef8ff]
/lib/glusterfs/2.0.3rc1/xlator/protocol/client.so(protocol_client_pollin+0xcf)[0xb75efaef]
/lib/glusterfs/2.0.3rc1/xlator/protocol/client.so(notify+0x1ec)[0xb75f6ddc]
/lib/glusterfs/2.0.3rc1/transport/socket.so(socket_event_poll_in+0x3b)[0xb75b775b]
/lib/glusterfs/2.0.3rc1/transport/socket.so(socket_event_handler+0xae)[0xb75b7b8e]
/lib/libglusterfs.so.0[0xb7facbda]
/lib/libglusterfs.so.0(event_dispatch+0x21)[0xb7fabac1]
glusterfs(main+0xc2e)[0x804b6ae]
/lib/libc.so.6(__libc_start_main+0xdc)[0xb7e6087c]
glusterfs[0x8049c11]
---------

The server configuration:

gfs1:/ # cat /etc/glusterfs/glusterfsd-sever.vol
volume posix1
  type storage/posix              # POSIX FS translator
  option directory /data/data1    # Export this directory
end-volume
volume posix2
  type storage/posix              # POSIX FS translator
  option directory /data/data2    # Export this directory
end-volume
volume posix3
  type storage/posix              # POSIX FS translator
  option directory /data/data3    # Export this directory
end-volume
volume posix4
  type storage/posix              # POSIX FS translator
  option directory /data/data4    # Export this directory
end-volume
volume posix5
  type storage/posix              # POSIX FS translator
  option directory /data/data5    # Export this directory
end-volume
volume posix6
  type storage/posix              # POSIX FS translator
  option directory /data/data6    # Export this directory
end-volume
volume posix7
  type storage/posix              # POSIX FS translator
  option directory /data/data7    # Export this directory
end-volume
volume posix8
  type storage/posix              # POSIX FS translator
  option directory /data/data8    # Export this directory
end-volume
volume brick1
  type features/posix-locks
  option mandatory-locks on       # enables mandatory locking on all files
  subvolumes posix1
end-volume
volume brick2
  type features/posix-locks
  option mandatory-locks on       # enables mandatory locking on all files
  subvolumes posix2
end-volume
volume brick3
  type features/posix-locks
  option mandatory-locks on       # enables mandatory locking on all files
  subvolumes posix3
end-volume
volume brick4
  type features/posix-locks
  option mandatory-locks on       # enables mandatory locking on all files
  subvolumes posix4
end-volume
volume brick5
  type features/posix-locks
  option mandatory-locks on       # enables mandatory locking on all files
  subvolumes posix5
end-volume
volume brick6
  type features/posix-locks
  option mandatory-locks on       # enables mandatory locking on all files
  subvolumes posix6
end-volume
volume brick7
  type features/posix-locks
  option mandatory-locks on       # enables mandatory locking on all files
  subvolumes posix7
end-volume
volume brick8
  type features/posix-locks
  option mandatory-locks on       # enables mandatory locking on all files
  subvolumes posix8
end-volume
### Add network serving capability to above brick.
volume server
  type protocol/server
  option transport-type tcp
  option transport.socket.bind-address 172.20.92.240   # Default is to listen on all interfaces
  option transport.socket.listen-port 6996             # Default is 6996
  subvolumes brick1 brick2 brick3 brick4
  option auth.addr.brick1.allow *   # Allow access to "brick" volume
  option auth.addr.brick2.allow *   # Allow access to "brick" volume
  option auth.addr.brick3.allow *   # Allow access to "brick" volume
  option auth.addr.brick4.allow *   # Allow access to "brick" volume
  option auth.addr.brick5.allow *   # Allow access to "brick" volume
  option auth.addr.brick6.allow *   # Allow access to "brick" volume
  option auth.addr.brick7.allow *   # Allow access to "brick" volume
  option auth.addr.brick8.allow *   # Allow access to "brick" volume
end-volume

The client configuration:

gfs1:/ # cat /etc/glusterfs/glusterfs-client.vol.dht
volume client1
  type protocol/client
  option transport-type tcp
  option remote-host 172.20.92.240   # IP address of the remote brick2
  option remote-port 6996
  option remote-subvolume brick1     # name of the remote volume
end-volume
volume client2
  type protocol/client
  option transport-type tcp
  option remote-host 172.20.92.240   # IP address of the remote brick2
  option remote-port 6996
  #option transport-timeout 10       # seconds to wait for a reply
  option remote-subvolume brick2     # name of the remote volume
end-volume
volume client3
  type protocol/client
  option transport-type tcp
  option remote-host 172.20.92.240   # IP address of the remote brick2
  option remote-port 6996
  #option transport-timeout 10       # seconds to wait for a reply
  option remote-subvolume brick3     # name of the remote volume
end-volume
volume client4
  type protocol/client
  option transport-type tcp
  option remote-host 172.20.92.240   # IP address of the remote brick2
  option remote-port 6996
  #option transport-timeout 10       # seconds to wait for a reply
  option remote-subvolume brick4     # name of the remote volume
end-volume
volume client5
  type protocol/client
  option transport-type tcp
  option remote-host 172.20.92.240   # IP address of the remote brick2
  option remote-port 6996
  #option transport-timeout 10       # seconds to wait for a reply
  option remote-subvolume brick1     # name of the remote volume
end-volume
volume client6
  type protocol/client
  option transport-type tcp
  option remote-host 172.20.92.240   # IP address of the remote brick2
  option remote-port 6996
  #option transport-timeout 10       # seconds to wait for a reply
  option remote-subvolume brick2     # name of the remote volume
end-volume
volume client7
  type protocol/client
  option transport-type tcp
  option remote-host 172.20.92.240   # IP address of the remote brick2
  option remote-port 6996
  #option transport-timeout 10       # seconds to wait for a reply
  option remote-subvolume brick3     # name of the remote volume
end-volume
volume client8
  type protocol/client
  option transport-type tcp
  option remote-host 172.20.92.240   # IP address of the remote brick2
  option remote-port 6996
  #option transport-timeout 10       # seconds to wait for a reply
  option remote-subvolume brick4     # name of the remote volume
end-volume
#volume afr3
#  type cluster/afr
#  subvolumes client3 client6
#end-volume
volume dht
  type cluster/dht
  option lookup-unhashed yes
  subvolumes client1 client2 client3 client4
end-volume

Could you help me?



2009-07-06



eagleeyes



From: Sachidananda
Sent: 2009-07-04 11:39:03
To: eagleeyes
Cc: gluster-users
Subject: Re: [Gluster-users] HELP : Files lost after DHT expansion

[Sachidananda's reply quoted in full; snipped]
Anand Avati
2009-07-06 04:09:12 UTC
Permalink
Please use 2.0.3 stable or, until then, upgrade to the next rc2. This
has been fixed in rc2.

Avati

On Mon, Jul 6, 2009 at 8:31 AM, eagleeyes <***@126.com> wrote:
> [the message above, including the full server and client configuration, quoted in full; snipped]
eagleeyes
2009-07-06 07:01:18 UTC
Permalink
HI

1. I use gluster 2.0.3rc2 with fuse init (API version 7.11) on SUSE SP10, kernel 2.6.30.
There was an error log:
pending frames:
frame : type(1) op(WRITE)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
patchset: 65524f58b29f0b813549412ba6422711a505f5d8
signal received: 11
configuration details:argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 2.0.3rc2
[0xffffe400]
/usr/local/lib/libfuse.so.2(fuse_session_process+0x26)[0xb752fb56]
/lib/glusterfs/2.0.3rc2/xlator/mount/fuse.so[0xb755de25]
/lib/libpthread.so.0[0xb7f0d2ab]
/lib/libc.so.6(__clone+0x5e)[0xb7ea4a4e]
---------
2. Using glusterfs 2.0.3rc2 with fuse init (API version 7.6) on SUSE SP10, kernel 2.6.16.21-0.8-smp:
when I expanded the DHT volumes from four to six and then ran "rm *" in the gluster directory, there were these errors:

[2009-07-06 22:56:23] W [fuse-bridge.c:921:fuse_unlink_cbk] glusterfs-fuse: 1636: RMDIR() /scheduler => -1 (No such file or directory)
[2009-07-06 22:56:23] W [fuse-bridge.c:921:fuse_unlink_cbk] glusterfs-fuse: 1643: RMDIR() /transport => -1 (No such file or directory)
[2009-07-06 22:56:23] W [fuse-bridge.c:921:fuse_unlink_cbk] glusterfs-fuse: 1655: RMDIR() /xlators/cluster => -1 (No such file or directory)
[2009-07-06 22:56:23] W [fuse-bridge.c:921:fuse_unlink_cbk] glusterfs-fuse: 1666: RMDIR() /xlators/debug => -1 (No such file or directory)
[2009-07-06 22:56:23] W [fuse-bridge.c:921:fuse_unlink_cbk] glusterfs-fuse: 1677: RMDIR() /xlators/mount => -1 (No such file or directory)

and new files were not written to the new volumes after the expansion.
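
[Editor's note: a hedged sketch of how one might check whether the expanded layout actually covers the new bricks, assuming export paths as in the earlier configs (/data/data5, /data/data6 for the new subvolumes) and a hypothetical directory somedir. In GlusterFS 2.0, adding DHT subvolumes did not by itself re-spread existing directories: each directory's trusted.glusterfs.dht layout had to be recomputed (the source tree shipped helper scripts such as extras/scale-n-defrag.sh for this), otherwise new files kept hashing into the old ranges.

# Do the new bricks carry a layout range for an existing directory?
getfattr -n trusted.glusterfs.dht -e hex /data/data5/somedir
getfattr -n trusted.glusterfs.dht -e hex /data/data6/somedir

# Walking every directory through the mountpoint (with lookup-unhashed
# enabled, as in this thread) forces DHT to look each one up again:
find /home -type d -exec ls {} \; > /dev/null]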




2009-07-06



eagleeyes



From: Anand Avati
Sent: 2009-07-06 12:09:13
To: eagleeyes
Cc: gluster-users
Subject: Re: [Gluster-users] Error : gluster2.0.3rc1 with fuse2.8 in kernel2.6.30 ,help !!!!!

[Anand Avati's reply and the full earlier thread quoted in full; snipped]
Harshavardhana
2009-07-06 09:18:32 UTC
Permalink
Eagleeyes,

I think you are running glusterfs against two different FUSE API versions. The API version of the 2.6.30 kernel is not compatible with the 2.6.16-21 version. I would suggest using the same FUSE API version for glusterfs on all clients.
Can I have a few details:

1. dmesg | grep -i fuse (on each client)
2. grep -i FUSE_MINOR_VERSION /usr/include/fuse/fuse_common.h (on each client)
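
[Editor's note: a hedged sketch for collecting both answers from every client in one pass, assuming root ssh access; gfs-client1 and gfs-client2 are placeholder hostnames.

for host in gfs-client1 gfs-client2; do
    echo "== $host =="
    ssh root@"$host" 'dmesg | grep -i fuse'
    ssh root@"$host" 'grep -i FUSE_MINOR_VERSION /usr/include/fuse/fuse_common.h'
done]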

Regards
--
Harshavardhana
Z Research Inc http://www.zresearch.com/


On Mon, Jul 6, 2009 at 12:31 PM, eagleeyes <***@126.com> wrote:
> [the message above and the full earlier thread quoted in full; snipped]
eagleeyes
2009-07-06 09:26:29 UTC
Permalink
How could I remove one of the FUSE versions?


client 1
2.6.30
gfs1:~ # dmesg | grep -i fuse
fuse init (API version 7.11)
gfs1:~ # grep -i FUSE_MINOR_VERSION /usr/local/include/fuse/fuse_common.h
#define FUSE_MINOR_VERSION 7
#define FUSE_VERSION FUSE_MAKE_VERSION(FUSE_MAJOR_VERSION, FUSE_MINOR_VERSION)
# undef FUSE_MINOR_VERSION
# define FUSE_MINOR_VERSION 5
# define FUSE_MINOR_VERSION 4
# define FUSE_MINOR_VERSION 1
# define FUSE_MINOR_VERSION 1


client 2
2.6.16.21-0.8-smp
linux-2ca1:~ # dmesg | grep -i fuse
fuse: module not supported by Novell, setting U taint flag.
fuse init (API version 7.6)
linux-2ca1:~ # grep -i FUSE_MINOR_VERSION /usr/local/include/fuse/fuse_common.h
#define FUSE_MINOR_VERSION 8
#define FUSE_VERSION FUSE_MAKE_VERSION(FUSE_MAJOR_VERSION, FUSE_MINOR_VERSION)
# undef FUSE_MINOR_VERSION
# define FUSE_MINOR_VERSION 5
# define FUSE_MINOR_VERSION 4
# define FUSE_MINOR_VERSION 1
# define FUSE_MINOR_VERSION 1
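
[Editor's note: the headers above sit under /usr/local/include, which suggests a FUSE library built from source alongside the distro one (the rc2 backtrace earlier also references /usr/local/lib/libfuse.so.2). A hedged sketch of how one might confirm which copy glusterfs loads and set the stray one aside; paths are assumptions taken from this thread, and running 'make uninstall' from the original fuse build tree is the cleaner route when that tree is still around.

# Which libfuse does the client binary actually resolve to?
ldd "$(which glusterfs)" | grep -i fuse

# Move a source-built copy out of the way and refresh the linker cache:
mkdir -p /root/fuse-backup
mv /usr/local/lib/libfuse.so* /root/fuse-backup/
mv /usr/local/include/fuse /root/fuse-backup/
ldconfig]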






2009-07-06



eagleeyes



From: Harshavardhana
Sent: 2009-07-06 17:18:34
To: eagleeyes
Cc: Anand Avati; gluster-users
Subject: Re: [Gluster-users] Error : gluster2.0.3rc1 with fuse2.8 in kernel2.6.30 ,help !!!!!

Eagleyes,

I think you are using glusterfs with two different versions of fuse API versions. API versions for 2.6.30 kernel are not compatible with 2.6.16-21 version. I would suggest you to use same fuse API versions for glusterfs. Can i have a few details

1. dmesg | grep -i fuse (on each clients)
2. grep -i FUSE_MINOR_VERSION /usr/include/fuse/fuse_common.h (on each clients)

Regards
--
Harshavardhana
Z Research Inc http://www.zresearch.com/



On Mon, Jul 6, 2009 at 12:31 PM, eagleeyes <***@126.com> wrote:

HI

1. I use gluster2.0.3rc2 with fuse init (API version 7.11) in SUSE sp10 ,kernel 2.6.30.
There were some error log :
pending frames:
frame : type(1) op(WRITE)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
patchset: 65524f58b29f0b813549412ba6422711a505f5d8
signal received: 11
configuration details:argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 2.0.3rc2
[0xffffe400]
/usr/local/lib/libfuse.so.2(fuse_session_process+0x26)[0xb752fb56]
/lib/glusterfs/2.0.3rc2/xlator/mount/fuse.so[0xb755de25]
/lib/libpthread.so.0[0xb7f0d2ab]
/lib/libc.so.6(__clone+0x5e)[0xb7ea4a4e]
---------
2. Use glusterfs 2.0.3rc2 with fuse init (API version 7.6) in suse sp10, kernel 2.6.16.21-0.8-smp ,
when i expanded dht volumes from four to six ,then i "rm *" in gluster directory , there were some error :

[2009-07-06 22:56:23] W [fuse-bridge.c:921:fuse_unlink_cbk] glusterfs-fuse: 1636: RMDIR() /scheduler => -1 (No such file or directory)
[2009-07-06 22:56:23] W [fuse-bridge.c:921:fuse_unlink_cbk] glusterfs-fuse: 1643: RMDIR() /transport => -1 (No such file or directory)
[2009-07-06 22:56:23] W [fuse-bridge.c:921:fuse_unlink_cbk] glusterfs-fuse: 1655: RMDIR() /xlators/cluster => -1 (No such file or directory)
[2009-07-06 22:56:23] W [fuse-bridge.c:921:fuse_unlink_cbk] glusterfs-fuse: 1666: RMDIR() /xlators/debug => -1 (No such file or directory)
[2009-07-06 22:56:23] W [fuse-bridge.c:921:fuse_unlink_cbk] glusterfs-fuse: 1677: RMDIR() /xlators/mount => -1 (No such file or directory)

and new files didn't write into the new volumes after expansion .




2009-07-06



eagleeyes



·¢ŒþÈË£º Anand Avati
·¢ËÍʱŒä£º 2009-07-06 12:09:13
ÊÕŒþÈË£º eagleeyes
³­ËÍ£º gluster-users
Ö÷Ì⣺ Re: [Gluster-users] Error : gluster2.0.3rc1 with fuse2.8 in kernel2.6.30 ,help !!!!!
Please use 2.0.3 stable, or upgrade to the next rc2 until then. This
has been fixed in rc2.
Avati
On Mon, Jul 6, 2009 at 8:31 AM, eagleeyes<***@126.com> wrote:
> HI
> I use gluster2.0.3rc1 with fuse 2.8 in kernel
> 2.6.30(SUSE Linux Enterprise Server 10 SP1 with kernel 2.6.30 ) . the mount
> message was :
>
> /dev/hda4 on /data type reiserfs (rw,user_xattr)
> glusterfs-client.vol.dht on /home type fuse.glusterfs (rw,allow_other,default_permissions,max_read=131072)
>
>
>
> There was some error when i "touce 111" in gluster directory ,the error was
> :
> /home: Transport endpoint is not connected
>
> pending frames:
> patchset: e0db4ff890b591a58332994e37ce6db2bf430213
> signal received: 11
> configuration details:argp 1
> backtrace 1
> dlfcn 1
> fdatasync 1
> libpthread 1
> llistxattr 1
> setfsid 1
> spinlock 1
> epoll.h 1
> xattr.h 1
> st_atim.tv_nsec 1
> package-string: glusterfs 2.0.3rc1
> [0xffffe400]
> /lib/glusterfs/2.0.3rc1/xlator/mount/fuse.so[0xb75c6288]
> /lib/glusterfs/2.0.3rc1/xlator/performance/write-behind.so(wb_create_cbk+0xa7)[0xb75ccad7]
> /lib/glusterfs/2.0.3rc1/xlator/performance/io-cache.so(ioc_create_cbk+0xde)[0xb7fbe8ae]
> /lib/glusterfs/2.0.3rc1/xlator/performance/read-ahead.so(ra_create_cbk+0x167)[0xb7fc78b7]
> /lib/glusterfs/2.0.3rc1/xlator/cluster/dht.so(dht_create_cbk+0xf7)[0xb75e25b7]
> /lib/glusterfs/2.0.3rc1/xlator/protocol/client.so(client_create_cbk+0x2ad)[0xb76004ad]
> /lib/glusterfs/2.0.3rc1/xlator/protocol/client.so(protocol_client_interpret+0x1ef)[0xb75ef8ff]
> /lib/glusterfs/2.0.3rc1/xlator/protocol/client.so(protocol_client_pollin+0xcf)[0xb75efaef]
> /lib/glusterfs/2.0.3rc1/xlator/protocol/client.so(notify+0x1ec)[0xb75f6ddc]
> /lib/glusterfs/2.0.3rc1/transport/socket.so(socket_event_poll_in+0x3b)[0xb75b775b]
> /lib/glusterfs/2.0.3rc1/transport/socket.so(socket_event_handler+0xae)[0xb75b7b8e]
> /lib/libglusterfs.so.0[0xb7facbda]
> /lib/libglusterfs.so.0(event_dispatch+0x21)[0xb7fabac1]
> glusterfs(main+0xc2e)[0x804b6ae]
> /lib/libc.so.6(__libc_start_main+0xdc)[0xb7e6087c]
> glusterfs[0x8049c11]
> ---------
>
> the server configuration
>
> gfs1:/ # cat /etc/glusterfs/glusterfsd-sever.vol
> volume posix1
> type storage/posix # POSIX FS translator
> option directory /data/data1 # Export this directory
> end-volume
> volume posix2
> type storage/posix # POSIX FS translator
> option directory /data/data2 # Export this directory
> end-volume
> volume posix3
> type storage/posix # POSIX FS translator
> option directory /data/data3 # Export this directory
> end-volume
> volume posix4
> type storage/posix # POSIX FS translator
> option directory /data/data4 # Export this directory
> end-volume
> volume posix5
> type storage/posix # POSIX FS translator
> option directory /data/data5 # Export this directory
> end-volume
> volume posix6
> type storage/posix # POSIX FS translator
> option directory /data/data6 # Export this directory
> end-volume
> volume posix7
> type storage/posix # POSIX FS translator
> option directory /data/data7 # Export this directory
> end-volume
> volume posix8
> type storage/posix # POSIX FS translator
> option directory /data/data8 # Export this directory
> end-volume
> volume brick1
> type features/posix-locks
> option mandatory-locks on # enables mandatory locking on all files
> subvolumes posix1
> end-volume
> volume brick2
> type features/posix-locks
> option mandatory-locks on # enables mandatory locking on all files
> subvolumes posix2
> end-volume
> volume brick3
> type features/posix-locks
> option mandatory-locks on # enables mandatory locking on all files
> subvolumes posix3
> end-volume
> volume brick4
> type features/posix-locks
> option mandatory-locks on # enables mandatory locking on all files
> subvolumes posix4
> end-volume
> volume brick5
> type features/posix-locks
> option mandatory-locks on # enables mandatory locking on all files
> subvolumes posix5
> end-volume
> volume brick6
> type features/posix-locks
> option mandatory-locks on # enables mandatory locking on all files
> subvolumes posix6
> end-volume
> volume brick7
> type features/posix-locks
> option mandatory-locks on # enables mandatory locking on all files
> subvolumes posix7
> end-volume
> volume brick8
> type features/posix-locks
> option mandatory-locks on # enables mandatory locking on all files
> subvolumes posix8
> end-volume
> ### Add network serving capability to above brick.
> volume server
> type protocol/server
> option transport-type tcp
> option transport.socket.bind-address 172.20.92.240 # Default is to listen on all interfaces
> option transport.socket.listen-port 6996 # Default is 6996
> subvolumes brick1 brick2 brick3 brick4
> option auth.addr.brick1.allow * # Allow access to "brick" volume
> option auth.addr.brick2.allow * # Allow access to "brick" volume
> option auth.addr.brick3.allow * # Allow access to "brick" volume
> option auth.addr.brick4.allow * # Allow access to "brick" volume
> option auth.addr.brick5.allow * # Allow access to "brick" volume
> option auth.addr.brick6.allow * # Allow access to "brick" volume
> option auth.addr.brick7.allow * # Allow access to "brick" volume
> option auth.addr.brick8.allow * # Allow access to "brick" volume
> end-volume
>
> the client configuration:
>
> gfs1:/ # cat /etc/glusterfs/glusterfs-client.vol.dht
> volume client1
> type protocol/client
> option transport-type tcp
> option remote-host 172.20.92.240 # IP address of the remote brick2
> option remote-port 6996
> option remote-subvolume brick1 # name of the remote volume
> end-volume
> volume client2
> type protocol/client
> option transport-type tcp
> option remote-host 172.20.92.240 # IP address of the remote brick2
> option remote-port 6996
> #option transport-timeout 10 # seconds to wait for a reply
> option remote-subvolume brick2 # name of the remote volume
> end-volume
> volume client3
> type protocol/client
> option transport-type tcp
> option remote-host 172.20.92.240 # IP address of the remote brick2
> option remote-port 6996
> #option transport-timeout 10 # seconds to wait for a reply
> option remote-subvolume brick3 # name of the remote volume
> end-volume
> volume client4
> type protocol/client
> option transport-type tcp
> option remote-host 172.20.92.240 # IP address of the remote brick2
> option remote-port 6996
> #option transport-timeout 10 # seconds to wait for a reply
> option remote-subvolume brick4 # name of the remote volume
> end-volume
> volume client5
> type protocol/client
> option transport-type tcp
> option remote-host 172.20.92.240 # IP address of the remote brick2
> option remote-port 6996
> #option transport-timeout 10 # seconds to wait for a reply
> option remote-subvolume brick1 # name of the remote volume
> end-volume
> volume client6
> type protocol/client
> option transport-type tcp
> option remote-host 172.20.92.240 # IP address of the remote brick2
> option remote-port 6996
> #option transport-timeout 10 # seconds to wait for a reply
> option remote-subvolume brick2 # name of the remote volume
> end-volume
> volume client7
> type protocol/client
> option transport-type tcp
> option remote-host 172.20.92.240 # IP address of the remote brick2
> option remote-port 6996
> #option transport-timeout 10 # seconds to wait for a reply
> option remote-subvolume brick3 # name of the remote volume
> end-volume
> volume client8
> type protocol/client
> option transport-type tcp
> option remote-host 172.20.92.240 # IP address of the remote brick2
> option remote-port 6996
> #option transport-timeout 10 # seconds to wait for a reply
> option remote-subvolume brick4 # name of the remote volume
> end-volume
> #volume afr3
> # type cluster/afr
> # subvolumes client3 client6
> #end-volume
> volume dht
> type cluster/dht
> option lookup-unhashed yes
> subvolumes client1 client2 client3 client4
> end-volume
>
> Could you help me?
>
>
>
> 2009-07-06
> ________________________________
> eagleeyes
> ________________________________
> From: Sachidananda
> Sent: 2009-07-04 11:39:03
> To: eagleeyes
> Cc: gluster-users
> Subject: Re: [Gluster-users] HELP : Files lost after DHT expansion
> Hi,
> eagleeyes wrote:
> > When I update to gluster 2.0.3, after DHT expansion, duplicate directories
> > appear in the gluster directory. Why?
> >
> > client configure
> > volume dht
> > type cluster/dht
> > option lookup-unhashed yes
> > option min-free-disk 10%
> > subvolumes client1 client2 client3 client4 client5 client6 client7
> client8
> > #subvolumes client1 client2 client3 client4
> > end-volume
> >
> >
> Can you please send us your server/client volume files?
> --
> Sachidananda.

_______________________________________________
Gluster-users mailing list
Gluster-***@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
Harshavardhana
2009-07-06 09:34:56 UTC
Permalink
As you can see, the fuse version on client 2 is different from the fuse
version on client 1. Install the fuse 2.7.x release and recompile glusterfs
on the client 2 machine. It should work.
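
A minimal sketch of that rebuild, assuming both fuse and glusterfs were
built from source tarballs (the 2.7.4 version number, tarball names and
install prefix below are illustrative, not taken from this thread):

  # on client 2: replace the fuse 2.8.0 userspace with a 2.7.x release
  tar xzf fuse-2.7.4.tar.gz
  cd fuse-2.7.4
  ./configure && make && make install   # headers land under /usr/local/include/fuse

  # rebuild glusterfs so it compiles against the fuse 2.7.x headers/libs
  cd ../glusterfs-2.0.3
  ./configure && make && make install

Afterwards, re-check FUSE_MINOR_VERSION in
/usr/local/include/fuse/fuse_common.h to confirm client 2 now matches
client 1.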

Regards
--
Harshavardhana
Z Research Inc http://www.zresearch.com/


2009/7/6 eagleeyes <***@126.com>

> How could I remove one of the fuse versions?
>
>
> client 1
> 2.6.30
> gfs1:~ # dmesg | grep -i fuse
> fuse init (API version 7.11)
> gfs1:~ # grep -i FUSE_MINOR_VERSION /usr/local/include/fuse/fuse_common.h
> #define FUSE_MINOR_VERSION 7
>
> #define FUSE_VERSION FUSE_MAKE_VERSION(FUSE_MAJOR_VERSION, FUSE_MINOR_VERSION)
> # undef FUSE_MINOR_VERSION
> # define FUSE_MINOR_VERSION 5
> # define FUSE_MINOR_VERSION 4
> # define FUSE_MINOR_VERSION 1
> # define FUSE_MINOR_VERSION 1
>
>
> client 2
> 2.6.16.21-0.8-smp
> linux-2ca1:~ # dmesg | grep -i fuse
> fuse: module not supported by Novell, setting U taint flag.
> fuse init (API version 7.6)
>
> linux-2ca1:~ # grep -i FUSE_MINOR_VERSION /usr/local/include/fuse/fuse_common.h
> #define FUSE_MINOR_VERSION 8
>
> #define FUSE_VERSION FUSE_MAKE_VERSION(FUSE_MAJOR_VERSION, FUSE_MINOR_VERSION)
> # undef FUSE_MINOR_VERSION
> # define FUSE_MINOR_VERSION 5
> # define FUSE_MINOR_VERSION 4
> # define FUSE_MINOR_VERSION 1
> # define FUSE_MINOR_VERSION 1
>
>
>
>
>
>
> 2009-07-06
> ------------------------------
> eagleeyes
> ------------------------------
> *From:* Harshavardhana
> *Sent:* 2009-07-06 17:18:34
> *To:* eagleeyes
> *Cc:* Anand Avati; gluster-users
> *Subject:* Re: [Gluster-users] Error : gluster2.0.3rc1 with fuse2.8
> in kernel2.6.30, help !!!!!
> Eagleyes,
>
> I think you are using glusterfs with two different fuse API versions. The
> API version of the 2.6.30 kernel is not compatible with that of the
> 2.6.16-21 kernel. I would suggest you use the same fuse API version for
> glusterfs on both clients. Can I have a few details:
>
> 1. dmesg | grep -i fuse (on each clients)
> 2. grep -i FUSE_MINOR_VERSION /usr/include/fuse/fuse_common.h (on each
> clients)
>
> Regards
> --
> Harshavardhana
> Z Research Inc http://www.zresearch.com/
>
eagleeyes
2009-07-06 09:39:19 UTC
Permalink
I use gluster only on its own space, not a shared pool. How do I solve it?

In your words: if different clients use the same shared pool space, must their fuse versions be the same?

2009-07-06



eagleeyes



From: Harshavardhana
Sent: 2009-07-06 17:34:58
To: eagleeyes
Cc: gluster-users
Subject: Re: Re: [Gluster-users] Error : gluster2.0.3rc1 with fuse2.8 in kernel2.6.30, help !!!!!

As you can see, the fuse version on client 2 is different from the fuse version on client 1. Install the fuse 2.7.x release and recompile glusterfs on the client 2 machine. It should work.

Harshavardhana
2009-07-06 09:59:28 UTC
Permalink
Yes, the problem is that you have fuse 2.8.0 on client 2, which is
API-incompatible with the fuse 2.7.x version.

Also, using fuse 2.8.0 on an older kernel wouldn't help, as the low-level
kernel APIs are incompatible too, as in your case with kernel 2.6.16 on
client 2.
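
A quick sanity check for this on each client, as a sketch (the pkg-config
call assumes the fuse build installed its fuse.pc file; the header path is
the one already used in this thread):

  dmesg | grep -i 'fuse init'       # kernel-side protocol, e.g. "API version 7.6"
  pkg-config --modversion fuse      # userspace libfuse release, e.g. 2.7.4
  grep '#define FUSE_MINOR_VERSION' /usr/local/include/fuse/fuse_common.h

On client 2 this pairs a kernel speaking API 7.6 with fuse 2.8.0 headers,
which is exactly the incompatible combination described above.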

Regards
--
Harshavardhana
Z Research Inc http://www.zresearch.com/


2009/7/6 eagleeyes <***@126.com>

> I use gluster only on its own space, not a shared pool. How do I solve it?
>
> In your words: if different clients use the same shared pool space, must
> their fuse versions be the same?