Another question...
On 1/28/19 9:58 AM, Daniel Gryniewicz wrote:
> On 1/28/19 12:45 PM, Jeff Becker wrote:
>> One more question below...
>>
>> On 1/28/19 9:37 AM, Daniel Gryniewicz wrote:
>>> A lot of that looks very out-of-date. The description of what goes
>>> in the Path statement is correct, but there is no longer an
>>> NFSv4_Proxy section. Instead, configuration goes in the FSAL
>>> subsection of the export. You can look at the sample proxy.conf in
>>> the repo here:
>>>
>>>
>>> https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/config_samples/proxy.conf
>>>
>>>
>>> Or you can look at the man page for FSAL PROXY here:
>>>
>>>
>>> https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/doc/man/ganesha-proxy-config.rst
>>>
>>>
>>> (And sorry for writing PSEUDO in my last email, I meant PROXY)
>>>
>>> Daniel
>>>
>>> On 1/28/19 12:29 PM, Jeff Becker wrote:
>>>> Hi,
>>>>
>>>> On 1/25/19 5:36 AM, Daniel Gryniewicz wrote:
>>>>> The path given to PSEUDO needs to be the full path exported by the
>>>>> other NFS server, not any kind of local mount. So, when you mount
>>>>> the other server directly, you do this:
>>>>>
>>>>> mount -t nfs <server>:<remotepath> <localpath>
>>>>>
>>>>> What needs to go in that Path config statement is <remotepath>, not
>>>>> <localpath>. PSEUDO isn't accessing a mount (and, indeed, you
>>>>> probably should not have the remote server mounted on the Ganesha
>>>>> box); it's acting as its own NFS client, and translating NFS
>>>>> calls directly.
>>
>> Do you mean that on the client mounting the Ganesha server, I should
>> not also directly mount the remote filesystem for which the Ganesha
>> server is proxying access? Thanks.
>>
>
> Hmm... I think a full explanation is necessary. There are three
> machines involved here. First, the original NFS server, hosting the
> base filesystem. Let's call this server O, for original. Next, there's
> the Ganesha server, which we'll call G. Finally, there's the Client,
> which we'll call C. They're likely connected something like this:
>
> O -------- G -------- C
>
> O has one path, when it comes to NFS: its export path. Let's call
> this /exportO. If you were to mount this on a client directly, you
> would use a command like this:
>
> mount -t nfs O:/exportO /mnt
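
To make that concrete, G's config then needs an EXPORT whose Path is O's export path, with an FSAL PROXY sub-block pointing at O. Here is a minimal sketch, modeled on the sample proxy.conf; the parameter names come from the ganesha-proxy-config man page, and the address is a placeholder you'd replace with O's real address:

EXPORT
{
        Export_Id = 1;
        # The path exported by O, NOT a local mount point on G
        Path = /exportO;
        Pseudo = /exportO;
        Access_Type = RW;
        FSAL
        {
                Name = PROXY;
                # Placeholder: substitute O's actual address here
                Srv_Addr = 192.168.0.10;
        }
}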
This is helpful. However, it's not clear how to specify individual clients or client
ranges in the EXPORT block. I don't want just anyone to be able to mount the Ganesha
test server. Thanks.
So in your EXPORT block, make sure the Access_Type is something you want all clients
to have (maybe None; it could be read-only).

Add at least one CLIENT sub-block. Here's a sample from my development test config:

CLIENT
{
        Access_Type = RW;
        Squash = None;
        Clients = simple1*,127.0.0.1,local*,192.168.0.119,192.168.0.111;
        SecType = sys;
        Anonymous_gid = -5;
}
The Clients list can contain individual host names, host name wildcards, individual IP
addresses, netgroups, and IP netmasks (in CIDR form, for example 127.0.0.0/8). IPv6
addresses should be supported...
The CLIENT blocks are considered in order (thus they form an ACL: you could exclude part
of a subnet by listing it with Access_Type = None; first, and then listing the subnet with
Access_Type = RW; second). The permissions in the EXPORT block are considered next, then
the global EXPORT_DEFAULTS block, and finally the code defaults.
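
For example, denying one host while allowing the rest of its subnet would look something like this (a sketch with made-up addresses):

CLIENT
{
        # Listed first, so it wins for this host
        Clients = 192.168.0.119;
        Access_Type = None;
}
CLIENT
{
        # Everything else in the subnet gets read-write
        Clients = 192.168.0.0/24;
        Access_Type = RW;
}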
Any option that is not specified at one layer will take its value from the first
layer that specifies it (falling back to the code defaults if it is not specified at all).
Frank