This might work, but keep in mind that *any* change to the client (or
its associated state) would need to be persisted synchronously to rados,
which would introduce huge latency on most ops. This isn't even memory
-> disk latency (100x - 1000x) but memory-to-network latency (10000x or
higher).
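To put rough numbers on that (ballpark, illustrative figures I'm assuming here, not measurements from ganesha or rados):

```python
# Back-of-the-envelope comparison of per-update cost if every client
# state change must be committed synchronously. All latencies below are
# rough, assumed orders of magnitude, not measured values.
MEM_WRITE_S = 100e-9    # ~100 ns: update state in memory
DISK_COMMIT_S = 100e-6  # ~100 us: synchronous local disk commit
NET_COMMIT_S = 1e-3     # ~1 ms: network round trip to a replicated store

print(f"disk vs memory:    {DISK_COMMIT_S / MEM_WRITE_S:.0f}x slower")
print(f"network vs memory: {NET_COMMIT_S / MEM_WRITE_S:.0f}x slower")
```

With figures in that neighborhood, a synchronous backend write per state change dominates the cost of the operation itself.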
Daniel
On 4/2/19 10:46 PM, fanzi2009(a)hotmail.com wrote:
How about persisting all the session information to a backend (rados,
for example)? If so, even if the client connects to another server, that server can
reconstruct the session from the backend. The client wouldn't need to re-create the session and wouldn't
need to reclaim.
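The proposal above could be sketched like this. A plain dict stands in for the rados backend, and all names here are hypothetical, not ganesha's actual session structures:

```python
# Sketch of the proposal: serialize session state to a shared backend
# keyed by client ID, so any server can rebuild it. The dict below is a
# stand-in for a rados pool; every name here is hypothetical.
import json

backend = {}  # stand-in for the rados object store

def persist_session(client_id, session):
    # In the real proposal this would be a synchronous rados write on
    # every state change -- exactly the per-op cost Daniel points out.
    backend[f"session.{client_id}"] = json.dumps(session).encode()

def reconstruct_session(client_id):
    data = backend.get(f"session.{client_id}")
    return json.loads(data) if data else None

persist_session("client-1", {"sessionid": "abc", "slots": 64})
# A different ganesha head could now rebuild the session without the
# client re-creating it or going through reclaim:
print(reconstruct_session("client-1"))
```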
> On Tue, Apr 2, 2019 at 9:21 AM Daniel Gryniewicz <dang(a)redhat.com>
wrote:
>
> I mostly agree with everything Dan said here. Clustered environments
> require extra care, and we don't currently have a way to migrate
> clients between different ganesha heads that are exporting the same
> clustered fs. I've done some experimentation with v4 migration here,
> but there's nothing in stock ganesha today for this.
>
> You can, however, avoid putting all of the other cluster nodes into
> the grace period if you know that none of the other cluster nodes can
> acquire state that will need to be reclaimed by clients of the node
> that is restarting. We have some tentative plans to implement this for
> FSAL_CEPH + the RADOS recovery backends someday, but it requires
> support in the Ceph MDS and userland client libraries that does not
> yet exist.
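The condition described above could be sketched roughly as follows. The helper names are hypothetical, not ganesha's recovery-backend API:

```python
# Rough sketch of the "skip cluster-wide grace" decision: when one node
# restarts, another node only needs to enter the grace period if it
# could acquire state that clients of the restarting node will reclaim.
# All names here are hypothetical.
def must_enter_grace(restarting_node, node, can_hold_foreign_state):
    if node == restarting_node:
        return True  # the restarting node always enforces grace locally
    # If the FSAL guarantees this node can never hand out state that
    # conflicts with the restarting node's clients, it can keep serving.
    return can_hold_foreign_state(node, restarting_node)

nodes = ["a", "b", "c"]
# Assume clients are pinned to a single head (the Ceph MDS/client
# library support described above as not yet existing):
pinned = lambda node, other: False
print([n for n in nodes if must_enter_grace("a", n, pinned)])
```

With that guarantee only the restarting node enters grace; without it, every node must.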
_______________________________________________
Devel mailing list -- devel(a)lists.nfs-ganesha.org
To unsubscribe send an email to devel-leave(a)lists.nfs-ganesha.org