Change in ...nfs-ganesha[next]: Allow EXPORT pseudo path to be changed during export update
by Frank Filz (GerritHub)
Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/490334 )
Change subject: Allow EXPORT pseudo path to be changed during export update
......................................................................
Allow EXPORT pseudo path to be changed during export update
This also fully allows adding or removing NFSv4 support from an export
since we can now handle the PseudoFS swizzling that occurs.
Note that an explicit PseudoFS export may be removed or added, though
you cannot change it from export_id 0 because we currently don't allow
changing the export_id.
Note that this patch doesn't handle DBUS export add or remove, though
that is an option for improvement. I may add them to this patch (it
wouldn't be that hard), but I want to get this reviewed as-is right now.
There are implications for a client when the PseudoFS changes. I have
tested moving an export in the PseudoFS with a client mounted. The
client will be able to continue accessing the export, though it may
see an ESTALE error if it navigates out of the export. The current
working directory will go bad and the pwd command will fail, indicating
a disconnected mount. I have also seen referencing .. from the root of
the export wrap around back to the root (I believe this is how
disconnected mounts are set up).
FSAL_PSEUDO lookups and create handles (PUTFH or any use of an NFSv3
handle where the inode isn't cached) that fail during an export update
are instead turned into ERR_FSAL_DELAY, which becomes NFS4ERR_DELAY or
NFS3ERR_JUKEBOX, forcing the client to retry once the update completes.
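As a minimal sketch of that retry path (the NFS error numbers come from the RFCs; ERR_FSAL_DELAY's numeric value and the function names here are hypothetical, not Ganesha's actual definitions):

```c
/* Hypothetical stand-ins for Ganesha's actual error definitions. */
enum fsal_err { ERR_FSAL_NO_ERROR = 0, ERR_FSAL_DELAY = 35 };
enum nfs4_err { NFS4_OK = 0, NFS4ERR_DELAY = 10008 };
enum nfs3_err { NFS3_OK = 0, NFS3ERR_JUKEBOX = 10008 };

/* During an export update, a failed FSAL_PSEUDO lookup or create handle
 * is reported as ERR_FSAL_DELAY; the protocol layer maps that to the
 * retryable per-version error so the client retries after the update. */
static int nfs4_errno_from_fsal(enum fsal_err e)
{
	return e == ERR_FSAL_DELAY ? NFS4ERR_DELAY : NFS4_OK;
}

static int nfs3_errno_from_fsal(enum fsal_err e)
{
	return e == ERR_FSAL_DELAY ? NFS3ERR_JUKEBOX : NFS3_OK;
}
```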
Change-Id: I507dc17a651936936de82303ff1291677ce136be
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
---
M src/FSAL/FSAL_PSEUDO/handle.c
M src/MainNFSD/libganesha_nfsd.ver
M src/Protocols/NFS/nfs4_pseudo.c
M src/include/export_mgr.h
M src/include/nfs_proto_functions.h
M src/support/export_mgr.c
M src/support/exports.c
7 files changed, 560 insertions(+), 203 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/34/490334/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/490334
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I507dc17a651936936de82303ff1291677ce136be
Gerrit-Change-Number: 490334
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz <ffilzlnx(a)mindspring.com>
Gerrit-MessageType: newchange
10 months
Monitoring in Ganesha?
by Bjorn Leffler
Apart from the counters that you can access through dbus, is there any
other monitoring built into Ganesha?
I'm thinking of adding it with this higher level plan:
- Exporting metrics from Ganesha to Prometheus.
- Aggregate data in Prometheus.
- Display monitoring consoles and graphs with Grafana.
- Package up Prometheus, Grafana and the preconfigured rules/dashboards as
a Docker image.
- This makes it straightforward to deploy monitoring alongside a Ganesha
binary.
My rough coding plan for the code is to:
- Add a USE_MONITORING directive to the CMakeLists.txt files.
- Add a build dependency to the Prometheus C client
<https://github.com/digitalocean/prometheus-client-c>.
- Create a src/monitoring directory for the new source files and templates.
- Increment counters and timers throughout the code.
- Use histograms to compute latency percentiles, heatmaps, etc.
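On the histograms point, here is a rough, library-free sketch in C of the percentile-from-buckets idea (bucket bounds are arbitrary and the names are mine; prometheus-client-c provides its own histogram type, and Prometheus applies essentially this estimate server-side via histogram_quantile()):

```c
#include <stddef.h>

/* Illustrative latency histogram: bucket upper bounds in milliseconds. */
static const double bounds[] = { 1, 2, 5, 10, 25, 50, 100, 250 };
#define NBUCKETS (sizeof(bounds) / sizeof(bounds[0]))

struct latency_hist {
	unsigned long counts[NBUCKETS + 1]; /* final bucket = +Inf */
	unsigned long total;
};

static void hist_observe(struct latency_hist *h, double ms)
{
	size_t i;

	/* Find the first bucket whose upper bound covers this sample. */
	for (i = 0; i < NBUCKETS && ms > bounds[i]; i++)
		;
	h->counts[i]++;
	h->total++;
}

/* Return the upper bound of the bucket containing quantile q (0 < q <= 1),
 * the same coarse estimate Prometheus computes from bucket counts. */
static double hist_quantile(const struct latency_hist *h, double q)
{
	unsigned long target = (unsigned long)(q * h->total + 0.5);
	unsigned long cum = 0;
	size_t i;

	for (i = 0; i < NBUCKETS; i++) {
		cum += h->counts[i];
		if (cum >= target)
			return bounds[i];
	}
	return -1; /* sample falls in the +Inf bucket */
}
```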
Is this a good idea? Any objections or suggestions?
Thanks,
Bjorn
3 years
Change in ...nfs-ganesha[next]: MDCACHE - Add MDCACHE {} config block
by Daniel Gryniewicz (GerritHub)
Daniel Gryniewicz has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/454929
Change subject: MDCACHE - Add MDCACHE {} config block
......................................................................
MDCACHE - Add MDCACHE {} config block
Add a config block named MDCACHE that is a copy of CACHEINODE. Both can
be configured, but MDCACHE will override CACHEINODE. This allows us to
deprecate CACHEINODE.
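A hypothetical ganesha.conf fragment showing the transition (Entries_HWMark is an existing cache option; see the man page for the full parameter set):

```
## Old spelling, still accepted:
#CACHEINODE {
#    Entries_HWMark = 100000;
#}

## New spelling; overrides CACHEINODE if both blocks are present:
MDCACHE {
    Entries_HWMark = 100000;
}
```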
Change-Id: I49012723132ae6105b904a60d1a96bb2bf78d51b
Signed-off-by: Daniel Gryniewicz <dang(a)fprintf.net>
---
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_read_conf.c
M src/config_samples/ceph.conf
M src/config_samples/config.txt
M src/config_samples/ganesha.conf.example
M src/doc/man/ganesha-cache-config.rst
M src/doc/man/ganesha-config.rst
6 files changed, 31 insertions(+), 7 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/29/454929/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/454929
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I49012723132ae6105b904a60d1a96bb2bf78d51b
Gerrit-Change-Number: 454929
Gerrit-PatchSet: 1
Gerrit-Owner: Daniel Gryniewicz <dang(a)redhat.com>
Gerrit-MessageType: newchange
4 years, 1 month
Issues seen with krb5i and krb5p mounts
by Trishali Nayar
Hi all,
There are a few clients, e.g. Ubuntu 18.04.3 (4.15.0-55-generic) and RH7.8
(3.10.0-1127.el7.x86_64), on which we have observed a simple command like
'dd' either hang or return EIO. This happens only on krb5i and krb5p
mounts. It seems to happen mostly for file sizes of 100MB and larger, but
sometimes even a 30 MB file sees failures.
A client such as RH7.6 (3.10.0-957.el7.x86_64) does not seem to hit this
issue, so it might be related to more recent kernels?
We fixed the issue with check-in
https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/490802 The idea was to
let clients know that Ganesha denied the request instead of just dropping
it.
This fix did seem to help, and the hangs/errors stopped completely, but for
larger file sizes, e.g. 1000MB, we started seeing "Permission denied"
errors. This is different from the EIO errors seen earlier. The reason
could be that we now send an "AUTH DENIED" error, which clients translate
into this new error.
We added more logging to Ganesha and observed that these particular
clients seem to send a lot of requests together. When we process them, the
sequence numbers are largely out of order, and we drop the requests outside
the sequence window, per RFC 2203 Section 7.2.1. The sequence window
that we have is 32.
Testing these clients with kNFS does not hit the issue. The kNFS sequence
window seems to be larger: 128.
So we tried increasing Ganesha's sequence window to 128 as well, but that
does not seem to fix the issue.
We also have the additional 'seqmask' check below; many of the requests
fell into that category as well and were dropped.
"libntirpc/src/svc_auth_gss.c":

    } else if (offset >= gd->win || (gd->seqmask & (1 << offset))) {
        *no_dispatch = true;
        goto gd_free;
    }
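For context, the RFC 2203 replay-window bookkeeping amounts to something like this self-contained sketch (the struct and function names are invented for illustration, not libntirpc's): a request is dropped when its sequence number falls too far behind the highest one seen, or when it duplicates one already recorded in the bitmask.

```c
#include <stdbool.h>
#include <stdint.h>

#define SEQ_WIN 32 /* Ganesha's window; kNFS reportedly uses 128 */

struct seq_state {
	uint32_t seq_last; /* highest sequence number seen so far */
	uint32_t seqmask;  /* bit i set => (seq_last - i) already seen */
};

/* Returns true if the request should be dispatched, false if it should
 * be dropped (outside the window, or a duplicate within it). */
static bool seq_check(struct seq_state *s, uint32_t seq)
{
	if (seq > s->seq_last) {
		/* Newer than anything seen: slide the window forward. */
		uint32_t shift = seq - s->seq_last;

		s->seqmask = (shift >= SEQ_WIN) ? 0 : s->seqmask << shift;
		s->seqmask |= 1;
		s->seq_last = seq;
		return true;
	}

	uint32_t offset = s->seq_last - seq;

	if (offset >= SEQ_WIN || (s->seqmask & (1u << offset)))
		return false; /* too far behind, or duplicate: drop */
	s->seqmask |= 1u << offset;
	return true;
}
```

A client that bursts many requests which arrive badly reordered can easily land more than SEQ_WIN behind seq_last, which is exactly the drop path described above.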
We also observed that these clients now send many requests above even the
128 window, which we would again drop.
What is the proper way to fix this, and does anyone have an idea what
these clients are doing differently?
Thanks and regards,
Trishali.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Trishali Nayar
IBM Systems
ETZ, Pune.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
4 years, 2 months
lseek gets bad offset from nfs client with ganesha/gluster which supports SEEK
by Kinglong Mee
The latest ganesha/gluster supports seek according to,
https://tools.ietf.org/html/draft-ietf-nfsv4-minorversion2-41#section-15.11
From the given sa_offset, find the next data_content4 of type sa_what
in the file. If the server can not find a corresponding sa_what,
then the status will still be NFS4_OK, but sr_eof would be TRUE. If
the server can find the sa_what, then the sr_offset is the start of
that content. If the sa_offset is beyond the end of the file, then
SEEK MUST return NFS4ERR_NXIO.
For a file with the following map:

Part 1: HOLE 0x0000000000000000 ---> 0x0000000000600000
Part 2: DATA 0x0000000000600000 ---> 0x0000000000700000
Part 3: HOLE 0x0000000000700000 ---> 0x0000000001000000

SEEK(0x700000, SEEK_DATA) returns (sr_eof:1, sr_offset:0x700000) from ganesha/gluster;
SEEK(0x700000, SEEK_HOLE) returns (sr_eof:0, sr_offset:0x700000) from ganesha/gluster.
If an application depends on the lseek result for data searching, it may enter an infinite loop:

    while (1) {
        next_pos = lseek(fd, cur_pos, seek_type);
        if (seek_type == SEEK_DATA)
            seek_type = SEEK_HOLE;
        else
            seek_type = SEEK_DATA;
        if (next_pos == -1)
            return;
        cur_pos = next_pos;
    }
The lseek syscall always returns 0x700000 to the NFS client for those two
cases, but if the underlying filesystem is ext4/f2fs, or the NFS server is
knfsd, lseek(0x700000, SEEK_DATA) fails with ENXIO.
I want to know:
should I fix ganesha/gluster to return ENXIO for the first case, as knfsd does?
or should I fix the NFS client to return ENXIO for the first case?
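To make the disagreement concrete, here is a self-contained C model of the draft's SEEK semantics applied to the file map above (the function and type names are invented; NFS4ERR_NXIO is modeled as -1). The tension is that the Linux lseek(2) contract returns ENXIO for SEEK_DATA when no data remains before EOF, while the draft text quoted above says the server should return NFS4_OK with sr_eof = TRUE; knfsd behaves like the former, ganesha/gluster like the latter.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Model of the file map above: [start, end) extents tagged DATA or HOLE. */
enum seg { HOLE, DATA };
struct extent { uint64_t start, end; enum seg what; };

static const struct extent filemap[] = {
	{ 0x000000,  0x600000,  HOLE },
	{ 0x600000,  0x700000,  DATA },
	{ 0x700000,  0x1000000, HOLE },
};
#define NEXTENTS (sizeof(filemap) / sizeof(filemap[0]))
#define FILE_SIZE 0x1000000

/* SEEK as draft-ietf-nfsv4-minorversion2 section 15.11 reads: offset past
 * EOF => NFS4ERR_NXIO (modeled as -1); no matching content at or after
 * sa_offset => NFS4_OK with sr_eof = TRUE. */
static int seek_what(uint64_t sa_offset, enum seg sa_what,
		     bool *sr_eof, uint64_t *sr_offset)
{
	if (sa_offset >= FILE_SIZE)
		return -1; /* NFS4ERR_NXIO */

	for (size_t i = 0; i < NEXTENTS; i++) {
		if (filemap[i].what == sa_what && filemap[i].end > sa_offset) {
			*sr_eof = false;
			*sr_offset = sa_offset > filemap[i].start
					? sa_offset : filemap[i].start;
			return 0;
		}
	}
	*sr_eof = true; /* no sa_what content remains before EOF */
	*sr_offset = sa_offset;
	return 0;
}
```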
thanks,
Kinglong Mee
4 years, 3 months
NFSV4 (FSAL_VFS) multi-clients copy large file: Input/output error with version V3-stable
by Leave
Hi,
When I use nfs-ganesha V3-stable, I found that if we use multiple clients (like 5 clients on different VMware-based PCs) to copy a big file (like 1G) at the same time over NFSv4, an "Input/output error" occurs on the client. We cannot find any log about the error in the server's log.
But if we copy the file one by one, this error is not seen. We tried many times, including xfs and ext4; the error occurs with high probability when: 1. multiple clients copy large files at the same time; 2. mounting with NFSv4.
If we use NFSv3, this error is not seen.
Could you please give us some clues about this problem?
ps: we used FSAL_VFS.
Error (some clients may not hit it, but some do with high probability):
[root@localhost kpfadmin]# cp 1G t2/
cp: error writing 't2/1G': Input/output error
cp: failed to close 't2/1G': Input/output error
OS:
Centos 8.1-1911
Linux localhost.localdomain 4.18.0-147.el8.x86_64 #1 SMP Wed Dec 4 21:51:45 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
client mounts:

10.10.1.184: mount -t nfs4 10.10.1.183:/t3 t4/
10.10.1.185: mount -t nfs4 10.10.1.183:/t4 t4/
10.10.1.186: mount -t nfs4 10.10.1.183:/t5 t5/
10.10.1.187: mount -t nfs4 10.10.1.183:/t5 t5/
server(ganesha.conf):
###################################################
#
# Ganesha Config Example
#
# This is a commented example configuration file for Ganesha. It is not
# complete, but only has some common configuration options. See the man
# pages for complete documentation.
#
###################################################

## These are core parameters that affect Ganesha as a whole.
#NFS_CORE_PARAM {
    ## Allow NFSv3 to mount paths with the Pseudo path, the same as NFSv4,
    ## instead of using the physical paths.
    #mount_path_pseudo = true;
    ## Configure the protocols that Ganesha will listen for. This is a hard
    ## limit, as this list determines which sockets are opened. This list
    ## can be restricted per export, but cannot be expanded.
    #Protocols = 3,4,9P;
#}

## These are defaults for exports. They can be overridden per-export.
#EXPORT_DEFAULTS {
    ## Access type for clients. Default is None, so some access must be
    ## given either here or in the export itself.
    #Access_Type = RW;
#}

## Configure settings for the object handle cache
#MDCACHE {
    ## The point at which object cache entries will start being reused.
    #Entries_HWMark = 100000;
#}

EXPORT {
    ## Export Id (mandatory, each EXPORT must have a unique Export_Id)
    Export_Id = 1001;
    ## Exported path (mandatory)
    Path = /home/kpfadmin/t1;
    ## Pseudo Path (required for NFSv4 or if mount_path_pseudo = true)
    Pseudo = /t1;
    ## Restrict the protocols that may use this export. This cannot allow
    ## access that is denied in NFS_CORE_PARAM.
    Protocols = 3,4;
    ## Access type for clients. Default is None, so some access must be
    ## given. It can be here, in the EXPORT_DEFAULTS, or in a CLIENT block
    Access_Type = RW;
    ## Whether to squash various users.
    Squash = root_squash;
    ## Allowed security types for this export
    #Sectype = sys,krb5,krb5i,krb5p;
    ## Exporting FSAL
    FSAL {
        Name = VFS;
    }
}

## The remaining exports repeat the same settings (and the same comments,
## omitted here), differing only in Export_Id, Path and Pseudo.
EXPORT {
    Export_Id = 2;
    Path = /home/kpfadmin/t2;
    Pseudo = /t2;
    Protocols = 3,4;
    Access_Type = RW;
    Squash = root_squash;
    #Sectype = sys,krb5,krb5i,krb5p;
    FSAL { Name = VFS; }
}

EXPORT {
    Export_Id = 3;
    Path = /home/kpfadmin/t3;
    Pseudo = /t3;
    Protocols = 3,4;
    Access_Type = RW;
    Squash = root_squash;
    #Sectype = sys,krb5,krb5i,krb5p;
    FSAL { Name = VFS; }
}

EXPORT {
    Export_Id = 4;
    Path = /home/kpfadmin/t4;
    Pseudo = /t4;
    Protocols = 3,4;
    Access_Type = RW;
    Squash = root_squash;
    #Sectype = sys,krb5,krb5i,krb5p;
    FSAL { Name = VFS; }
}

EXPORT {
    Export_Id = 5;
    Path = /home/kpfadmin/t5;
    Pseudo = /t5;
    Protocols = 3,4;
    Access_Type = RW;
    Squash = root_squash;
    #Sectype = sys,krb5,krb5i,krb5p;
    FSAL { Name = VFS; }
}

## Configure logging. Default is to log to Syslog. Basic logging can also be
## configured from the command line.
LOG {
    ## Default log level for all components
    Default_Log_Level = WARN;
    ## Configure per-component log levels.
    #Components {
        #FSAL = INFO;
        #NFS4 = EVENT;
    #}
    ## Where to log
    Facility {
        name = FILE;
        destination = "/var/log/ganesha.log";
        enable = active;
    }
}
4 years, 4 months
Announce Push of V4-dev.28
by Frank Filz
Branch: next
Tag: V4-dev.28
Merge Highlights
* Remove use of "master"
* Fix typo in nfs3_write.c
* Drop useless param descriptions.
* Do not let crit error seen while processing an export block affect further
exports
* RPM - gpfs-epoch is conditional
* Fix a regression in 9p implementation
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
Contents:
c25f26f Frank S. Filz V4-dev.28
db83820 Philippe DENIEL Fix a regression in 9p implementation
dea3366 Daniel Gryniewicz RPM - gpfs-epoch is conditional
9686074 Madhu Thorat Do not let crit error seen while processing an export
block affect further exports
79eb3ee Xi Jinyu Drop useless param descriptions.
b792afd Xi Jinyu Fix typo in nfs3_write.c
463a8f3 Frank S. Filz Remove reference to no longer existing master branch
4cdb992 Frank S. Filz tools/multilock - remove references to "master"
4 years, 4 months
Change in ...nfs-ganesha[next]: Add USE_RQUOTA compile option to code and cmake
by Frank Filz (GerritHub)
Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/498969 )
Change subject: Add USE_RQUOTA compile option to code and cmake
......................................................................
Add USE_RQUOTA compile option to code and cmake
Change-Id: I8420fa924459e112f470a71c0e0e54900a480660
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
---
M src/CMakeLists.txt
M src/MainNFSD/CMakeLists.txt
M src/MainNFSD/nfs_rpc_dispatcher_thread.c
M src/MainNFSD/nfs_worker_thread.c
M src/Protocols/CMakeLists.txt
M src/Protocols/XDR/CMakeLists.txt
M src/RPCAL/nfs_dupreq.c
M src/include/config-h.in.cmake
M src/include/gsh_config.h
M src/include/nfs_init.h
M src/include/nfs_proto_functions.h
M src/include/server_stats_private.h
M src/support/nfs_read_conf.c
M src/support/server_stats.c
14 files changed, 100 insertions(+), 3 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/69/498969/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/498969
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I8420fa924459e112f470a71c0e0e54900a480660
Gerrit-Change-Number: 498969
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz <ffilzlnx(a)mindspring.com>
Gerrit-MessageType: newchange
4 years, 4 months
Change in ...nfs-ganesha[next]: Make cmake USE_NFS3=OFF actually disable and compile out ALL NFSv3 code
by Frank Filz (GerritHub)
Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/498968 )
Change subject: Make cmake USE_NFS3=OFF actually disable and compile out ALL NFSv3 code
......................................................................
Make cmake USE_NFS3=OFF actually disable and compile out ALL NFSv3 code
Well, except the XDR code that is also used by FSAL_PROXY_V3 (though
ideally that code should be excluded if there is no PROXY either).
Change-Id: Ic3018504370dec3040f922fe106fd19cc08d39a1
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
---
M src/CMakeLists.txt
M src/MainNFSD/CMakeLists.txt
M src/MainNFSD/nfs_init.c
M src/MainNFSD/nfs_rpc_dispatcher_thread.c
M src/MainNFSD/nfs_worker_thread.c
M src/Protocols/CMakeLists.txt
M src/Protocols/NFS/CMakeLists.txt
M src/RPCAL/nfs_dupreq.c
M src/SAL/nfs4_recovery.c
M src/include/gsh_config.h
M src/include/nfs_init.h
M src/include/nfs_proto_functions.h
M src/include/server_stats_private.h
M src/support/client_mgr.c
M src/support/export_mgr.c
M src/support/nfs_read_conf.c
M src/support/server_stats.c
17 files changed, 591 insertions(+), 154 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/68/498968/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/498968
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: Ic3018504370dec3040f922fe106fd19cc08d39a1
Gerrit-Change-Number: 498968
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz <ffilzlnx(a)mindspring.com>
Gerrit-MessageType: newchange
4 years, 4 months