Change in ...nfs-ganesha[next]: MDCACHE - Add MDCACHE {} config block
by Daniel Gryniewicz (GerritHub)
Daniel Gryniewicz has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/454929 )
Change subject: MDCACHE - Add MDCACHE {} config block
......................................................................
MDCACHE - Add MDCACHE {} config block
Add a config block named MDCACHE that is a copy of CACHEINODE. Both can
be configured, but MDCACHE settings will override CACHEINODE. This
allows us to deprecate CACHEINODE.
Change-Id: I49012723132ae6105b904a60d1a96bb2bf78d51b
Signed-off-by: Daniel Gryniewicz <dang(a)fprintf.net>
---
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_read_conf.c
M src/config_samples/ceph.conf
M src/config_samples/config.txt
M src/config_samples/ganesha.conf.example
M src/doc/man/ganesha-cache-config.rst
M src/doc/man/ganesha-config.rst
6 files changed, 31 insertions(+), 7 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/29/454929/1
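For illustration, a minimal sketch of how the two blocks might coexist
in ganesha.conf. Entries_HWMark is an existing cache parameter
documented in ganesha-cache-config; the values here are made up, and
the point is only that a setting in MDCACHE wins over its CACHEINODE
counterpart:

    # Legacy block, still parsed for now
    CACHEINODE {
        Entries_HWMark = 100000;
    }

    # New block; any parameter set here overrides CACHEINODE
    MDCACHE {
        Entries_HWMark = 50000;
    }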
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/454929
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I49012723132ae6105b904a60d1a96bb2bf78d51b
Gerrit-Change-Number: 454929
Gerrit-PatchSet: 1
Gerrit-Owner: Daniel Gryniewicz <dang(a)redhat.com>
Gerrit-MessageType: newchange
lseek gets bad offset from nfs client with ganesha/gluster which supports SEEK
by Kinglong Mee
The latest ganesha/gluster supports SEEK as specified in
https://tools.ietf.org/html/draft-ietf-nfsv4-minorversion2-41#section-15.11:
    From the given sa_offset, find the next data_content4 of type sa_what
    in the file. If the server can not find a corresponding sa_what,
    then the status will still be NFS4_OK, but sr_eof would be TRUE. If
    the server can find the sa_what, then the sr_offset is the start of
    that content. If the sa_offset is beyond the end of the file, then
    SEEK MUST return NFS4ERR_NXIO.
For a file with the following layout:
Part 1: HOLE 0x0000000000000000 ---> 0x0000000000600000
Part 2: DATA 0x0000000000600000 ---> 0x0000000000700000
Part 3: HOLE 0x0000000000700000 ---> 0x0000000001000000
SEEK(0x700000, SEEK_DATA) returns (sr_eof:1, sr_offset:0x700000) from ganesha/gluster;
SEEK(0x700000, SEEK_HOLE) returns (sr_eof:0, sr_offset:0x700000) from ganesha/gluster.
If an application depends on the lseek() result to walk the file's data
and holes, it may enter an infinite loop:
while (1) {
        next_pos = lseek(fd, cur_pos, seek_type);

        /* alternate between searching for data and for a hole */
        if (seek_type == SEEK_DATA)
                seek_type = SEEK_HOLE;
        else
                seek_type = SEEK_DATA;

        if (next_pos == -1)
                return;

        cur_pos = next_pos;
}
The lseek() syscall always returns 0x700000 from the NFS client in both
cases. But if the underlying filesystem is ext4/f2fs, or the NFS server
is knfsd, lseek(0x700000, SEEK_DATA) fails with ENXIO.
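A minimal reproducer sketch of the divergence (the filename "testfile"
and the layout above are assumptions; on ext4/f2fs or against knfsd the
call is expected to fail with ENXIO, while against ganesha/gluster it
returns an offset):

    #define _GNU_SOURCE
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            /* a file with data only in [0x600000, 0x700000) */
            int fd = open("testfile", O_RDONLY);
            off_t pos;

            if (fd < 0)
                    return 1;

            /* 0x700000 is past the last data extent */
            pos = lseek(fd, 0x700000, SEEK_DATA);
            if (pos == (off_t)-1 && errno == ENXIO)
                    printf("ENXIO (ext4/f2fs or knfsd behaviour)\n");
            else
                    printf("offset 0x%llx (ganesha/gluster behaviour)\n",
                           (unsigned long long)pos);
            close(fd);
            return 0;
    }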
I would like to know:
should I fix ganesha/gluster to return ENXIO for the first case, as knfsd does?
or should I fix the NFS client to return ENXIO for the first case?
thanks,
Kinglong Mee
Crash in dupreq
by katcherw@gmail.com
We saw a crash in version 2.8.2 while running an iozone test:
Program terminated with signal 11, Segmentation fault.
#0 0x00007ff028dce240 in atomic_sub_uint32_t (var=0x104, sub=1) at /usr/src/debug/nfs-ganesha-2.8.2/include/abstract_atomic.h:384
384 return __atomic_sub_fetch(var, sub, __ATOMIC_SEQ_CST);
(gdb) bt
#0 0x00007ff028dce240 in atomic_sub_uint32_t (var=0x104, sub=1) at /usr/src/debug/nfs-ganesha-2.8.2/include/abstract_atomic.h:384
#1 0x00007ff028dce265 in atomic_dec_uint32_t (var=0x104) at /usr/src/debug/nfs-ganesha-2.8.2/include/abstract_atomic.h:405
#2 0x00007ff028dd1b65 in dupreq_entry_put (dv=0x0) at /usr/src/debug/nfs-ganesha-2.8.2/RPCAL/nfs_dupreq.c:912
#3 0x00007ff028dd4630 in nfs_dupreq_rele (req=0x7fec44a269f0, func=0x7ff029159d60 <nfs3_func_desc+672>) at /usr/src/debug/nfs-ganesha-2.8.2/RPCAL/nfs_dupreq.c:1382
#4 0x00007ff028d74bc5 in free_args (reqdata=0x7fec44a269f0) at /usr/src/debug/nfs-ganesha-2.8.2/MainNFSD/nfs_worker_thread.c:693
#5 0x00007ff028d77783 in nfs_rpc_process_request (reqdata=0x7fec44a269f0) at /usr/src/debug/nfs-ganesha-2.8.2/MainNFSD/nfs_worker_thread.c:1509
#6 0x00007ff028d77a74 in nfs_rpc_valid_NFS (req=0x7fec44a269f0) at /usr/src/debug/nfs-ganesha-2.8.2/MainNFSD/nfs_worker_thread.c:1601
#7 0x00007ff028b2632d in svc_vc_decode (req=0x7fec44a269f0) at /usr/src/debug/nfs-ganesha-2.8.2/libntirpc/src/svc_vc.c:829
#8 0x00007ff028b227bf in svc_request (xprt=0x7fee0c202ad0, xdrs=0x7fec3c987a70) at /usr/src/debug/nfs-ganesha-2.8.2/libntirpc/src/svc_rqst.c:793
#9 0x00007ff028b2623e in svc_vc_recv (xprt=0x7fee0c202ad0) at /usr/src/debug/nfs-ganesha-2.8.2/libntirpc/src/svc_vc.c:802
#10 0x00007ff028b22740 in svc_rqst_xprt_task (wpe=0x7fee0c202cf0) at /usr/src/debug/nfs-ganesha-2.8.2/libntirpc/src/svc_rqst.c:774
#11 0x00007ff028b23048 in svc_rqst_epoll_loop (wpe=0x108d570) at /usr/src/debug/nfs-ganesha-2.8.2/libntirpc/src/svc_rqst.c:1089
#12 0x00007ff028b2baff in work_pool_thread (arg=0x7fed44052070) at /usr/src/debug/nfs-ganesha-2.8.2/libntirpc/src/work_pool.c:184
#13 0x00007ff0270c8df3 in start_thread () from /lib64/libpthread.so.0
#14 0x00007ff0267cd3dd in clone () from /lib64/libc.so.6
(gdb) frame 5
#5 0x00007ff028d77783 in nfs_rpc_process_request (reqdata=0x7fec44a269f0) at /usr/src/debug/nfs-ganesha-2.8.2/MainNFSD/nfs_worker_thread.c:1509
1509 free_args(reqdata);
(gdb) p *reqdata
$2 = {
svc = {
rq_xprt = 0x7fee0c202ad0,
rq_clntname = 0x0,
rq_svcname = 0x0,
rq_xdrs = 0x7fec3c987a70,
rq_u1 = 0x0,
rq_u2 = 0x0,
rq_cksum = 14781944753697519387,
rq_auth = 0x7ff028d45cd0 <svc_auth_none>,
rq_ap1 = 0x0,
rq_ap2 = 0x0,
...
Notice rq_u1 == 0. nfs_rpc_process_request() calls free_args(), which calls nfs_dupreq_rele(). There is a comment at the beginning of nfs_dupreq_rele():
* We assert req->rq_u1 now points to the corresponding duplicate request
* cache entry (dv).
But this assumption doesn't hold here: dv is NULL (frame 2), so
dupreq_entry_put() dereferences offset 0x104 of a NULL pointer (the
var=0x104 in frame 0), causing the crash. We're trying to reproduce the
problem with NIV_FULL_DEBUG.
Has anyone seen this problem before, or does anyone know what would
cause rq_u1 to be 0? Should there be a check for NULL in
nfs_dupreq_rele(), in addition to the DUPREQ_NOCACHE check?
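A minimal sketch of the kind of guard being asked about, assuming the
2.8 signature visible in the backtrace and that dv lives in req->rq_u1
(a hypothetical illustration, not the actual fix):

    #include "nfs_dupreq.h"  /* ganesha-internal header, assumed */

    void nfs_dupreq_rele(struct svc_req *req, const nfs_function_desc_t *func)
    {
            dupreq_entry_t *dv = (dupreq_entry_t *)req->rq_u1;

            /* hypothetical guard: tolerate requests that never got a
             * DRC entry instead of passing a NULL dv down to
             * dupreq_entry_put() */
            if (dv == NULL || dv == (void *)DUPREQ_NOCACHE) {
                    /* the existing DUPREQ_NOCACHE arg-freeing path
                     * would go here */
                    return;
            }

            /* ... normal path: release args, dupreq_entry_put(dv) ... */
    }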
Bill
Announce Push of V4-dev.10
by Frank Filz
Branch next
Tag: V4-dev.10
Merge Highlights
* selinux: new policy to allow gluster (glusterd) to start ganesha
* packaging: typos in nfs-ganesha.spec.in.cmake
* In mdc_open2_by_name() after lookup check if obj_handle.type==REGULAR_FILE
* MDCACHE - Close unexport race for used entries
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
Contents:
091816d Frank S. Filz V4-dev.10
e066494 Daniel Gryniewicz MDCACHE - Close unexport race for used entries
7dd4538 Madhu Thorat In mdc_open2_by_name() after lookup check if obj_handle.type==REGULAR_FILE
6a1e149 Kaleb S. KEITHLEY packaging: typos in nfs-ganesha.spec.in.cmake
8c31aa2 Kaleb S. KEITHLEY selinux: new policy to allow gluster (glusterd) to start ganesha
Change in ...nfs-ganesha[next]: Update dupreq after svc_sendreply failure
by Malahal (GerritHub)
Malahal has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/488188 )
Change subject: Update dupreq after svc_sendreply failure
......................................................................
Update dupreq after svc_sendreply failure
If svc_sendreply() fails, we don't update the dupreq, so its state
stays in DUPREQ_START forever! From then on, all retries are dropped
because the dupreq is found in a processing state, leading to endless
retries from the client.
svc_sendreply() can fail on UDP transports. TCP transports don't fail
unless there is some kind of RPC encoding failure.
Change-Id: Ib2b0bdd689d57c141448ccfb7c0789f3dfd6fa9c
Signed-off-by: Malahal Naineni <malahal(a)us.ibm.com>
---
M src/MainNFSD/nfs_worker_thread.c
1 file changed, 0 insertions(+), 1 deletion(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/88/488188/1
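Schematically, the failure mode reads like this (a reconstruction from
the commit message, not the actual nfs_worker_thread.c code; the state
name DUPREQ_START is from the message above, and nfs_dupreq_finish() is
the DRC's completion call):

    /* reconstruction from the commit message, not the actual code */
    dv->state = DUPREQ_START;   /* entry inserted into the DRC */

    svc_sendreply(req);         /* can fail, e.g. on UDP transports */

    /* before the fix: on send failure the completion step below was
     * skipped, so dv->state stayed DUPREQ_START and every retry was
     * treated as "still processing" and dropped.
     * after the fix: completion is recorded even when the send fails,
     * so retries can be answered instead of dropped forever. */
    nfs_dupreq_finish(req, res);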
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/488188
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: Ib2b0bdd689d57c141448ccfb7c0789f3dfd6fa9c
Gerrit-Change-Number: 488188
Gerrit-PatchSet: 1
Gerrit-Owner: Malahal <malahal(a)gmail.com>
Gerrit-MessageType: newchange
Change in ...nfs-ganesha[next]: In mdc_open2_by_name() after lookup check if obj_handle.type==REGULAR...
by Madhu Thorat (GerritHub)
Madhu Thorat has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/488074 )
Change subject: In mdc_open2_by_name() after lookup check if obj_handle.type==REGULAR_FILE
......................................................................
In mdc_open2_by_name() after lookup check if obj_handle.type==REGULAR_FILE
In mdc_open2_by_name(), after mdc_lookup() completes successfully and
prior to calling the FSAL's open2(), we don't check
entry->obj_handle.type to verify that the object is a regular file.
It is possible that, before mdc_lookup() runs in mdc_open2_by_name(),
another thread working on a different client request created a
symbolic link with the same name. Using an object created for a
symbolic link in the nfs4_op_open() code path led to a crash.
Fixed this by adding a check in mdc_open2_by_name() that verifies
entry->obj_handle.type is REGULAR_FILE before proceeding to call the
FSAL's open2().
Change-Id: Ib86b53f1c7bb6652509ef98575f6b1e8ff2fde82
Signed-off-by: Madhu Thorat <madhu.punjabi(a)in.ibm.com>
---
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_file.c
1 file changed, 13 insertions(+), 0 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/74/488074/1
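A sketch of the described check; the mdc_lookup() signature and the
mdcache_put()/fsalstat()/ERR_FSAL_SYMLINK calls are assumptions for
illustration, and the actual 13-line patch may differ:

    /* illustration only -- the actual patch may differ */
    status = mdc_lookup(mdc_parent, name, false, &entry, NULL);
    if (FSAL_IS_ERROR(status))
            return status;

    if (entry->obj_handle.type != REGULAR_FILE) {
            /* a racing request may have created e.g. a symlink under
             * the same name; handing it to open2() would crash later
             * in nfs4_op_open() */
            mdcache_put(entry);
            return fsalstat(ERR_FSAL_SYMLINK, 0);
    }

    /* safe to continue to the sub-FSAL's open2() */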
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/488074
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: Ib86b53f1c7bb6652509ef98575f6b1e8ff2fde82
Gerrit-Change-Number: 488074
Gerrit-PatchSet: 1
Gerrit-Owner: Madhu Thorat <madhu.punjabi(a)in.ibm.com>
Gerrit-MessageType: newchange
Issues with Changing pseudo path of an export
by Frank Filz
So some interesting issues arise if we change the pseudo path of an export
or change the NFS versions supported.
What happens to state attached to files where the client will now get a
stale file handle? This may not be an issue if the Pseudo Path to an export
changes, but certainly would be a problem if v3 or v4 were dropped from the
export.
If it really is only a problem when dropping v3 or v4, we could do
version-specific state cleanup for the export.
Frank
Improving mdcache_lru_cleanup_try_push
by Ashish Sangwan
Hi Daniel,
It looks like mdcache_lru_cleanup_try_push() can be improved for the
case where ops are executing in parallel with an unexport request.
In this case mdcache_lru_cleanup_try_push() is not able to drop the
sentinel ref and mark the entry for cleanup, because an extra ref is
held by the op that is currently executing, so the total number of
refs is > 2.
Once the current op finishes (after mdcache_unexport() has run), the
sentinel ref remains and the entry is still not marked for cleanup,
even though no export refers to it (entry->first_export_id == -1).
Is it possible to modify mdcache_lru_cleanup_try_push() so that even
if the refcount is > 2, when entry->first_export_id == -1 (the export
that is going away was the only one referring to this entry), we still
drop the sentinel ref and mark the entry for cleanup? This would
ensure the entries are cleaned up when the current op does its
put_ref, and are recycled faster than other mdcache entries that still
point to valid exports.
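A sketch of the proposed condition; entry->lru.refcnt and
entry->first_export_id follow the names used in this thread, the
atomic_*_int32_t helpers follow abstract_atomic.h as seen in the dupreq
backtrace, and push_for_cleanup() is a hypothetical helper standing in
for the real queueing logic:

    /* hypothetical sketch, not actual mdcache code */
    bool mdcache_lru_cleanup_try_push(mdcache_entry_t *entry)
    {
            int32_t refcnt = atomic_fetch_int32_t(&entry->lru.refcnt);

            /* current behaviour: push only when nothing but the
             * sentinel ref and the caller's ref remain */
            if (refcnt == 2)
                    return push_for_cleanup(entry);

            /* proposal: also push when in-flight ops hold extra refs
             * but no export references the entry any more; the last
             * op's put_ref then recycles it promptly */
            if (refcnt > 2 &&
                atomic_fetch_int32_t(&entry->first_export_id) == -1)
                    return push_for_cleanup(entry);

            return false;
    }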
Thanks,
Ashish