Change in ...nfs-ganesha[next]: MDCACHE - Add MDCACHE {} config block
by Daniel Gryniewicz (GerritHub)
Daniel Gryniewicz has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/454929
Change subject: MDCACHE - Add MDCACHE {} config block
......................................................................
MDCACHE - Add MDCACHE {} config block
Add a config block named MDCACHE that is a copy of CACHEINODE. Both can
be configured, but MDCACHE will override CACHEINODE. This allows us to
deprecate CACHEINODE.
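For illustration only (Entries_HWMark is one of the existing cache options; the values are made up), the two blocks look like this in ganesha.conf, with MDCACHE taking precedence when both are present:

    # old name, still parsed but headed for deprecation
    CACHEINODE
    {
        Entries_HWMark = 100000;
    }

    # new name; these values override CACHEINODE if both blocks are given
    MDCACHE
    {
        Entries_HWMark = 100000;
    }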
Change-Id: I49012723132ae6105b904a60d1a96bb2bf78d51b
Signed-off-by: Daniel Gryniewicz <dang(a)fprintf.net>
---
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_read_conf.c
M src/config_samples/ceph.conf
M src/config_samples/config.txt
M src/config_samples/ganesha.conf.example
M src/doc/man/ganesha-cache-config.rst
M src/doc/man/ganesha-config.rst
6 files changed, 31 insertions(+), 7 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/29/454929/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/454929
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I49012723132ae6105b904a60d1a96bb2bf78d51b
Gerrit-Change-Number: 454929
Gerrit-PatchSet: 1
Gerrit-Owner: Daniel Gryniewicz <dang(a)redhat.com>
Gerrit-MessageType: newchange
4 years, 1 month
lseek gets bad offset from nfs client with ganesha/gluster which supports SEEK
by Kinglong Mee
The latest ganesha/gluster supports SEEK according to:
https://tools.ietf.org/html/draft-ietf-nfsv4-minorversion2-41#section-15.11
From the given sa_offset, find the next data_content4 of type sa_what
in the file. If the server can not find a corresponding sa_what,
then the status will still be NFS4_OK, but sr_eof would be TRUE. If
the server can find the sa_what, then the sr_offset is the start of
that content. If the sa_offset is beyond the end of the file, then
SEEK MUST return NFS4ERR_NXIO.
For a file whose extent map is:
Part 1: HOLE 0x0000000000000000 ---> 0x0000000000600000
Part 2: DATA 0x0000000000600000 ---> 0x0000000000700000
Part 3: HOLE 0x0000000000700000 ---> 0x0000000001000000
SEEK(0x700000, SEEK_DATA) returns (sr_eof:1, sr_offset:0x700000) from ganesha/gluster;
SEEK(0x700000, SEEK_HOLE) returns (sr_eof:0, sr_offset:0x700000) from ganesha/gluster.
If an application depends on the lseek() result for locating data and holes, it may enter an infinite loop:
while (1) {
	next_pos = lseek(fd, cur_pos, seek_type);
	if (seek_type == SEEK_DATA) {
		seek_type = SEEK_HOLE;
	} else {
		seek_type = SEEK_DATA;
	}
	if (next_pos == -1) {
		return;
	}
	cur_pos = next_pos;
}
The lseek() syscall always returns 0x700000 from the NFS client in both cases,
but if the underlying filesystem is ext4/f2fs, or the NFS server is knfsd,
lseek(0x700000, SEEK_DATA) fails with ENXIO.
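A minimal local reproducer sketch for the first case (assumptions: an ext4 or other SEEK_DATA/SEEK_HOLE-aware filesystem; the file name and the one-byte write are illustrative, the real layout comes from the application):

#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	int fd = open("testfile", O_RDWR | O_CREAT, 0644);
	off_t pos;

	/* lay out the file from the report: data around 0x600000,
	   holes elsewhere, EOF at 0x1000000 */
	ftruncate(fd, 0x1000000);
	pwrite(fd, "x", 1, 0x600000);

	errno = 0;
	pos = lseek(fd, 0x700000, SEEK_DATA);	/* no data at or after 0x700000 */
	printf("SEEK_DATA: pos=%jd errno=%s\n",
	       (intmax_t)pos, strerror(errno));	/* expect pos=-1, ENXIO */

	pos = lseek(fd, 0x700000, SEEK_HOLE);	/* offset already inside a hole */
	printf("SEEK_HOLE: pos=0x%jx\n", (intmax_t)pos);	/* expect 0x700000 */

	close(fd);
	return 0;
}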
I want to know:
should I fix ganesha/gluster so that, like knfsd, it returns ENXIO for the first case?
Or should I fix the NFS client to return ENXIO for the first case?
thanks,
Kinglong Mee
4 years, 3 months
Crash in dupreq
by katcherw@gmail.com
We saw a crash in version 2.8.2 while running an iozone test:
Program terminated with signal 11, Segmentation fault.
#0 0x00007ff028dce240 in atomic_sub_uint32_t (var=0x104, sub=1) at /usr/src/debug/nfs-ganesha-2.8.2/include/abstract_atomic.h:384
384 return __atomic_sub_fetch(var, sub, __ATOMIC_SEQ_CST);
(gdb) bt
#0 0x00007ff028dce240 in atomic_sub_uint32_t (var=0x104, sub=1) at /usr/src/debug/nfs-ganesha-2.8.2/include/abstract_atomic.h:384
#1 0x00007ff028dce265 in atomic_dec_uint32_t (var=0x104) at /usr/src/debug/nfs-ganesha-2.8.2/include/abstract_atomic.h:405
#2 0x00007ff028dd1b65 in dupreq_entry_put (dv=0x0) at /usr/src/debug/nfs-ganesha-2.8.2/RPCAL/nfs_dupreq.c:912
#3 0x00007ff028dd4630 in nfs_dupreq_rele (req=0x7fec44a269f0, func=0x7ff029159d60 <nfs3_func_desc+672>) at /usr/src/debug/nfs-ganesha-2.8.2/RPCAL/nfs_dupreq.c:1382
#4 0x00007ff028d74bc5 in free_args (reqdata=0x7fec44a269f0) at /usr/src/debug/nfs-ganesha-2.8.2/MainNFSD/nfs_worker_thread.c:693
#5 0x00007ff028d77783 in nfs_rpc_process_request (reqdata=0x7fec44a269f0) at /usr/src/debug/nfs-ganesha-2.8.2/MainNFSD/nfs_worker_thread.c:1509
#6 0x00007ff028d77a74 in nfs_rpc_valid_NFS (req=0x7fec44a269f0) at /usr/src/debug/nfs-ganesha-2.8.2/MainNFSD/nfs_worker_thread.c:1601
#7 0x00007ff028b2632d in svc_vc_decode (req=0x7fec44a269f0) at /usr/src/debug/nfs-ganesha-2.8.2/libntirpc/src/svc_vc.c:829
#8 0x00007ff028b227bf in svc_request (xprt=0x7fee0c202ad0, xdrs=0x7fec3c987a70) at /usr/src/debug/nfs-ganesha-2.8.2/libntirpc/src/svc_rqst.c:793
#9 0x00007ff028b2623e in svc_vc_recv (xprt=0x7fee0c202ad0) at /usr/src/debug/nfs-ganesha-2.8.2/libntirpc/src/svc_vc.c:802
#10 0x00007ff028b22740 in svc_rqst_xprt_task (wpe=0x7fee0c202cf0) at /usr/src/debug/nfs-ganesha-2.8.2/libntirpc/src/svc_rqst.c:774
#11 0x00007ff028b23048 in svc_rqst_epoll_loop (wpe=0x108d570) at /usr/src/debug/nfs-ganesha-2.8.2/libntirpc/src/svc_rqst.c:1089
#12 0x00007ff028b2baff in work_pool_thread (arg=0x7fed44052070) at /usr/src/debug/nfs-ganesha-2.8.2/libntirpc/src/work_pool.c:184
#13 0x00007ff0270c8df3 in start_thread () from /lib64/libpthread.so.0
#14 0x00007ff0267cd3dd in clone () from /lib64/libc.so.6
(gdb) frame 5
#5 0x00007ff028d77783 in nfs_rpc_process_request (reqdata=0x7fec44a269f0) at /usr/src/debug/nfs-ganesha-2.8.2/MainNFSD/nfs_worker_thread.c:1509
1509 free_args(reqdata);
(gdb) p *reqdata
$2 = {
svc = {
rq_xprt = 0x7fee0c202ad0,
rq_clntname = 0x0,
rq_svcname = 0x0,
rq_xdrs = 0x7fec3c987a70,
rq_u1 = 0x0,
rq_u2 = 0x0,
rq_cksum = 14781944753697519387,
rq_auth = 0x7ff028d45cd0 <svc_auth_none>,
rq_ap1 = 0x0,
rq_ap2 = 0x0,
...
Notice that rq_u1 == 0. nfs_rpc_process_request() calls free_args(), which calls nfs_dupreq_rele(). There is a comment at the beginning of nfs_dupreq_rele():
* We assert req->rq_u1 now points to the corresponding duplicate request
* cache entry (dv).
But this assumption doesn't seem to hold here, and eventually the NULL pointer is dereferenced, leading to the crash. We're trying to reproduce the problem with NIV_FULL_DEBUG.
Has anyone seen this problem before, or does anyone know what would cause rq_u1 to be 0? Should nfs_dupreq_rele() check for NULL in addition to DUPREQ_NOCACHE?
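To make the suggestion concrete, a rough sketch of the kind of guard we mean at the top of nfs_dupreq_rele() (hypothetical; the surrounding function body is elided, and the cleanup for the NULL case would need to match what the existing DUPREQ_NOCACHE path already does):

void nfs_dupreq_rele(struct svc_req *req, const nfs_function_desc_t *func)
{
	dupreq_entry_t *dv = (dupreq_entry_t *)req->rq_u1;

	/* Hypothetical guard: if the request was never attached to a DRC
	 * entry (rq_u1 still NULL), free the arguments and return the same
	 * way the DUPREQ_NOCACHE case does, instead of passing NULL to
	 * dupreq_entry_put() (whose refcount decrement is what faults at
	 * address 0x104 in the backtrace above). */
	if (dv == NULL || dv == (void *)DUPREQ_NOCACHE) {
		/* ... existing no-cache cleanup ... */
		return;
	}

	/* ... existing refcount/retire logic unchanged ... */
}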
Bill
4 years, 8 months
NFSv3 Filebench tests hang
by des@vmware.com
We are trying to run Filebench tests against NFS-Ganesha, and most of the time the test hangs (the filebench process is stuck in D state) and never completes.
I have observed the following errors in ganesha.log:
2020-02-25T02:43:49Z : epoch 5e543710 : fsvm23 : ganesha.nfsd-34[::ffff:172.30.0.111] [svc_598] 405 :fsal_close :FSAL :open_fd_count is negative: -1
2020-02-25T02:46:27Z : epoch 5e543710 : fsvm23 : ganesha.nfsd-34[::ffff:172.30.0.111] [svc_731] 2144 :mdcache_lru_fds_available :INODE LRU :FD Hard Limit Exceeded, waking LRU thread.
2020-02-25T02:46:28Z : epoch 5e543710 : fsvm23 : ganesha.nfsd-34[none] [cache_lru] 1388 :lru_run :INODE LRU :Futility count exceeded. Client load is opening FDs faster than the LRU thread can close them.
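For reference, the thresholds behind those two messages are tunable in the cache config block (MDCACHE on newer trees, CACHEINODE on older ones); the values below are illustrative, not the defaults:

MDCACHE
{
	# percentage of the process open-fd limit treated as the hard
	# limit; "FD Hard Limit Exceeded" is logged above this
	FD_Limit_Percent = 90;

	# high/low water marks that drive the background LRU fd closer
	FD_HWMark_Percent = 60;
	FD_LWMark_Percent = 20;

	# number of LRU passes allowed to make no progress before the
	# "Futility count exceeded" message is logged
	Futility_Count = 8;
}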
I was searching in the ganesha community for any similar issues and found these:
https://bugzilla.redhat.com/show_bug.cgi?id=1713261
which is fixed by Frank in :
https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/455570/
This patch is not present in our repo yet, and I was thinking of cherry-picking this change into our tree; FSAL_VFS is where we are hitting the issue.
Frank/Soumya,
Can you confirm whether this fix will resolve the issue, and whether cherry-picking it alone is enough, or are other changes required as well?
4 years, 9 months
nfs-ganesha exits with "Error: couldn't complete write to the log file " error on screen
by Satish Chandra Kilaru
Error: couldn't complete write to the log file /tmp/ganesha.log status=13 (Permission denied) message was:
27/02/2020 13:37:28 : epoch 5e583698 : m4hcadev1.commvault.com : ganesha.nfsd-32262[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version /root/rpmbuild/BUILD/nfs-ganesha, built at Oct 29 2019 17:39:50 on localhost
It successfully created ganesha.log, as shown below, but then failed to write to it with a Permission Denied error.
-rw-rw-r--. 1 root root 0 Feb 27 13:31 ganesha.log
What could be the problem?
4 years, 9 months
Announce Push of V4-dev.7
by Frank Filz
Branch next
Tag:V4-dev.7
Merge Highlights
* main(){fatal_die: } - Fix leaks in nfs_main.c.
* Fix ambiguous description in nfs_init.h.
* Allow building with DBUS and no GSSAPI
* Provision to get MDCACHE resource utilization
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
Contents:
869c6f8 Frank S. Filz V4-dev.7
b5428a8 Sachin Punadikar Provision to get MDCACHE resource utilization
c95574f Daniel Gryniewicz Allow to build with DBUS an no GSSAPI
6d7feb3 Xi Jinyu Fix ambiguous description in nfs_init.h.
0660aea Xi Jinyu main(){fatal_die: } - Fix leaks in nfs_main.c.
4 years, 10 months
Change in ...nfs-ganesha[next]: Supporting protocol update as part of UpdateExport
by Sriram Patil (GerritHub)
Sriram Patil has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/486007 )
Change subject: Supporting protocol update as part of UpdateExport
......................................................................
Supporting protocol update as part of UpdateExport
The UpdateExport DBUS method does not handle protocol updates. For example, when
the protocol for an export is changed from NFSv3 to NFSv4, none of the mounts
worked after UpdateExport.
This fix allows updating the protocols as part of UpdateExport. Updating to
NFSv4 was the problem because the new path was not added to the pseudo FS.
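For example, the scenario is an export whose Protocols attribute is edited in the config file and then reloaded via UpdateExport (export id and paths below are made up for illustration):

EXPORT
{
	Export_Id = 77;
	Path = /vol1;
	Pseudo = /vol1;

	# changed from "Protocols = 3;"; after UpdateExport the Pseudo
	# path must also be grafted into the NFSv4 pseudo FS, which is
	# what this change adds
	Protocols = 4;
}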
Change-Id: I6a78ec85fd2a48064161420d9459ecedebfad68f
Signed-off-by: Sriram Patil <sriramp(a)vmware.com>
---
M src/include/export_mgr.h
M src/support/export_mgr.c
M src/support/exports.c
3 files changed, 44 insertions(+), 0 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/07/486007/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/486007
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I6a78ec85fd2a48064161420d9459ecedebfad68f
Gerrit-Change-Number: 486007
Gerrit-PatchSet: 1
Gerrit-Owner: Sriram Patil <sriramp(a)vmware.com>
Gerrit-MessageType: newchange
4 years, 10 months
Re: ganesha hang more than five hours
by QR
Not yet, because core dumps were not enabled in the Docker container.
Will try to create a full backtrace for this, thanks.
--------------------------------
----- Original Message -----
From: Daniel Gryniewicz <dgryniew(a)redhat.com>
To: devel(a)lists.nfs-ganesha.org
Subject: [NFS-Ganesha-Devel] Re: ganesha hang more than five hours
Date: Feb 25, 2020, 21:20
No, definitely not a known issue. Do you have a full backtrace of one
(or several, if they're different) hung threads?
Daniel
On 2/24/20 9:28 PM, QR wrote:
> Hi Dang,
>
> Ganesha hangs more than five hours. It seems that 198 svc threads hang
> on nfs3_readdirplus.
> Is there a known issue about this? Thanks in advance.
>
> Ganesha server info
> ganesha version: V2.7.6
> FSAL : In house
> nfs client info
> nfs version : nfs v3
> client info : CentOS 7.4
>
4 years, 10 months
ganesha hang more than five hours
by QR
Hi Dang,
Ganesha hangs more than five hours. It seems that 198 svc threads hang on nfs3_readdirplus. Is there a known issue about this? Thanks in advance.
Ganesha server info
ganesha version: V2.7.6
FSAL: In house
nfs client info
nfs version: nfs v3
client info: CentOS 7.4
4 years, 10 months