On Wed, Apr 10, 2019 at 8:34 AM <bbk(a)nocloud.ch> wrote:
Dear Ganesha(-UserList),
I am trying to set up nfs-ganesha with FSAL_CEPH, but I lack knowledge on how to export
multiple directories. When I configure only one export everything works as expected, but
as soon as I have two of them, the ceph client gets blacklisted.
The following documentation states that it is possible to export multiple directories,
but I don't know how to configure it correctly:
* http://docs.ceph.com/docs/master/cephfs/nfs/
```
Per running ganesha daemon, FSAL_CEPH can only export one Ceph filesystem although
multiple directories in a Ceph filesystem may be exported.
```
From what I gather from the following message, it should be possible, but each
export will use its own ceph client:
* https://lists.nfs-ganesha.org/archives/list/support@lists.nfs-ganesha.org...
My nfs-server setup has the following software versions installed:
* NFS-Ganesha Release = V2.7.3
* ceph version 14.2.0 (3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)
On my cephfs I have created the folder /scratch. In the ganesha configuration I use the
```rados_cluster``` recovery backend (a sketch of those settings follows the exports
below) and have the following two exports defined:
```
EXPORT
{
	Export_Id = 100;
	Path = "/";
	Pseudo = /cephfs;

	FSAL
	{
		Name = CEPH;
		User_Id = "ganesha";
		filesystem = "cephfs";
	}

	CLIENT
	{
		Clients = @root-rw;
		Squash = "No_Root_Squash";
		Access_type = RW;
	}
}

EXPORT
{
	Export_Id = 300;
	Path = "/scratch";
	Pseudo = /scratch;

	FSAL
	{
		Name = CEPH;
		User_Id = "ganesha";
		filesystem = "cephfs";
	}

	CLIENT
	{
		Clients = @client-rw;
		Squash = "Root_Squash";
		Access_type = RW;
	}
}
```
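The recovery backend itself is enabled with settings roughly like these (only a sketch
of my setup; the pool, namespace and nodeid values shown here are placeholders rather
than my exact configuration):
```
NFSv4
{
	RecoveryBackend = rados_cluster;
}

RADOS_KV
{
	# Placeholders: the RADOS pool/namespace holding the recovery
	# database, and a per-node identifier for this ganesha instance.
	pool = "cephfs_metadata";
	namespace = "ganesha";
	nodeid = "nfs-node-a";
}
```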
When I restart the nfs-ganesha service, I get the following errors:
Ganesha Log:
```
ganesha.nfsd-13554[main] posix2fsal_error :FSAL :CRIT :Mapping 108(default) to ERR_FSAL_SERVERFAULT
```
Ceph Log:
```
mds.0.43 Evicting (and blacklisting) client session 122567
log_channel(cluster) log [INF] : Evicting (and blacklisting) client session 122567
```
In the end, only one export (one ceph client) is left working.
Yours,
bbk
I think this is probably a bug in FSAL_CEPH. We are doing this:
ceph_status = ceph_start_reclaim(export->cmount, nodeid,
CEPH_RECLAIM_RESET);
...and the nodeid is invariant between exports. What we probably need
to do is compose a unique nodeid per export. I'll spin up a patch and
test it soon.
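Roughly what I have in mind is something like this (only a sketch, not the actual
patch: the helper name, buffer size and the way the export id is passed in are made
up, while ceph_start_reclaim()/CEPH_RECLAIM_RESET are the libcephfs calls quoted
above):
```
/*
 * Sketch only: give each export its own reclaim id by appending the
 * export id to the per-node id, so a second export on the same node
 * doesn't reset (and get blacklisted alongside) the first export's
 * client session during reclaim.
 */
#include <stdio.h>            /* snprintf */
#include <stdint.h>           /* uint16_t */
#include <inttypes.h>         /* PRIu16 */
#include <cephfs/libcephfs.h> /* ceph_start_reclaim(), CEPH_RECLAIM_RESET */

static int start_reclaim_for_export(struct ceph_mount_info *cmount,
				    const char *nodeid, uint16_t export_id)
{
	char uniqueid[128];

	/* e.g. "nodeid:300" instead of the shared, invariant "nodeid" */
	snprintf(uniqueid, sizeof(uniqueid), "%s:%" PRIu16, nodeid, export_id);

	return ceph_start_reclaim(cmount, uniqueid, CEPH_RECLAIM_RESET);
}
```
That way the reclaim for the second export shouldn't end up evicting the session
that the first export is still using.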
Thanks,
--
Jeff Layton <jlayton(a)poochiereds.net>