From: Suhrud Patankar [mailto:suhrudpatankar@gmail.com]
On Tue, May 21, 2019 at 1:42 PM Michel Buijsman
<michelb(a)bit.nl> wrote:
>
> On Tue, May 21, 2019 at 12:47:32PM +0530, Suhrud Patankar wrote:
> > We want to run two separate Ganesha processes on one node. (Using
> > 2.7.1 code level)
> >
> > The config params "Bind_Addr" and "Dbus_Name_Prefix" work as
> > expected to isolate the two instances.
> > So each process binds to a separate network interface and
> > gets its own DBus bus-name.
> > They serve different file systems, so there is not much worry
> > about cache consistency between them.
> >
> > Is there any specific area we need to worry about? Has anyone tried
> > a similar setup?
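[For reference, the two-instance isolation described above might be sketched
roughly like this; the addresses, prefix values, and the assumption that both
parameters live in NFS_CORE_PARAM are illustrative, not a tested config:]

# Instance A (ganesha-a.conf), rough sketch
NFS_CORE_PARAM {
    Bind_Addr = 192.0.2.10;        # interface dedicated to instance A
    Dbus_Name_Prefix = "gsh-a";    # gives this instance its own bus name
}

# Instance B (ganesha-b.conf)
NFS_CORE_PARAM {
    Bind_Addr = 192.0.2.11;
    Dbus_Name_Prefix = "gsh-b";
}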
>
> Yes, I tried it in a cluster of storage gateways with one ganesha
> daemon per virtual ip and automatic failover via ctdb, against a ceph backend.
>
> Short version: Locking will be a problem.
> (It's a problem to begin with, but beyond that.)
>
> > NFSv3 locking is hopeless: the rpc daemons are not aware of multiple
> > processes, they will mess each other up, and no amount of namespacing
> > or containerizing will convince them otherwise.
>
Yes, I can imagine problems with statd.
A long time back there was some discussion on the possibility of getting
the statd code inside Ganesha.
Has this ever been considered or tried?
NFSv3 is always going to be a bit tricky because it relies on rpcbind, which
isn't prepared to hand out more than one port number per service. I think these
days an NFSv3 mount can work by specifying both the NFS port and the MNT port,
but I don't know of a way for the client to specify the NLM port.
If we had a custom statd, it could treat each server IP separately and make
each server IP appear to the client as a totally independent server.
A custom rpcbind could solve the problem of the NLM port (and the NFS and MNT
ports to boot)...
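[For illustration, pinning the NFS and MNT ports on an NFSv3 mount might look
like the following; the address and port numbers are made up, and note there
is no analogous mount option for the NLM port, which is the gap above:]

# Force NFSv3 and pin the NFS and MNT ports explicitly.
# 192.0.2.10, 2049, and 20048 are illustrative values.
mount -t nfs -o vers=3,port=2049,mountport=20048 \
    192.0.2.10:/export /mnt/export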
> NFSv4 depends on the client. We had all kinds of vague issues
> depending on type of workload. A vmware client had no apparent
> problems whatsoever, others did.
>
> I built this on ganesha 2.7.1. Currently running 2.7.3 on a few test
> vips but we couldn't get it stable enough for production use.
I'd love to understand what all the issues were; they may be issues that
affect a single instance as well.
Frank
> There was the odd crashing ganesha daemon if I was really hammering
> it, but other than the locking issue it was looking pretty good.
> Performed well, too.
>
Thanks for the quick response!
> --
> Michel Buijsman
> _______________________________________________
> Devel mailing list -- devel(a)lists.nfs-ganesha.org To unsubscribe send
> an email to devel-leave(a)lists.nfs-ganesha.org