On Tue, May 28, 2019 at 3:30 PM Michel Buijsman <michelb(a)bit.nl> wrote:
> On Tue, May 21, 2019 at 07:03:50AM -0700, Frank Filz wrote:
> > From: Suhrud Patankar [mailto:suhrudpatankar@gmail.com]
> > > On Tue, May 21, 2019 at 1:42 PM Michel Buijsman <michelb(a)bit.nl> wrote:
> > > > Short version: Locking will be a problem.
> > > > (It's a problem to begin with, but beyond that.)
> > > >
> > > > NFSv3 locking is hopeless: the rpc daemons are not aware of multiple
> > > > processes, they will mess each other up, and no amount of namespacing
> > > > or containerizing will convince them otherwise.
> > > >
> > > Yes, I can imagine problems with statd.
> > > A long time back there was some discussion about the possibility of getting
> > > the statd code inside Ganesha.
> > > Has this ever been considered/tried?
> >
> > NFSv3 is always going to be a bit tricky because it relies on rpcbind, which
> > isn't prepared to give more than one port number. I think these days an NFSv3
> > mount can work by specifying both the NFS port and the MNT port, but I don't
> > know if there's a way for the client to specify the NLM port.
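For reference, pinning the v3 data and mount ports from the client side looks
something like this (the port numbers are only examples and assume the server
side has them fixed):

    mount -t nfs -o vers=3,port=2049,mountport=20048 server:/export /mnt

As far as I can tell there is no corresponding mount option to pin the server's
NLM port, so lock traffic still has to go through rpcbind.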
> Yes, I tried setting fixed ports across all instances and it seemed to work
> for a while, but it's fragile. As soon as you'd migrate one instance off a
> host the RPC stuff would shut down and the remaining instances would break.
Yes. I tried sending a signal to the Ganesha instance that is not going away to
make it re-register with the portmapper, and then things work again.
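For anyone trying the same setup: the fixed ports can be set per instance in the
NFS_CORE_PARAM block of ganesha.conf, something like the below (I'm going from
memory on the exact parameter names, and the port numbers are only examples):

    NFS_CORE_PARAM
    {
        NFS_Port = 2049;
        MNT_Port = 20048;
        NLM_Port = 32803;
        Rquota_Port = 875;
    }

After a failover or migration, "rpcinfo -p <vip>" from a client is a quick way
to check whether the nfs/mountd/nlockmgr registrations are still in place.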
Did you use a single statd for both the Ganesha instances?
I see that keeping the same state number file for both the Ganesha
processes could be tricky in case one instance restarts.
Does it break the specs if Ganesha talks to statd on the public interface
instead of the local one?
> Unfortunately the world still very much wants/expects v3...
>
> > > > NFSv4 depends on the client. We had all kinds of vague issues
> > > > depending on the type of workload. A VMware client had no apparent
> > > > problems whatsoever, others did.
> > > >
> > > > I built this on ganesha 2.7.1. Currently running 2.7.3 on a few test
> > > > VIPs but we couldn't get it stable enough for production use.
> >
> > I'd love to understand what all the issues were; they may be issues that
> > affect a single instance also.
>
> I never really got on top of what the exact issues were; unfortunately we
> were on a tight deadline, so there wasn't a lot of time to debug all the
> issues before switching to plan B: kernel-nfs on separate VMs (and taking
> a performance hit).
>
> --
> Michel Buijsman