Change in ...nfs-ganesha[next]: Allow EXPORT pseudo path to be changed during export update
by Frank Filz (GerritHub)
Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/490334 )
Change subject: Allow EXPORT pseudo path to be changed during export update
......................................................................
Allow EXPORT pseudo path to be changed during export update
This also fully allows adding or removing NFSv4 support from an export,
since we can now handle the PseudoFS swizzling that occurs.
Note that an explicit PseudoFS export may be removed or added, though
you cannot change it from export_id 0 because we currently don't allow
changing the export_id.
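For illustration, a minimal sketch of the kind of update this enables
(the export id, paths, and values here are made up):

    EXPORT
    {
        Export_Id = 77;          # the export_id itself still cannot change
        Path = /data/projects;
        Pseudo = /projects;      # original NFSv4 pseudo path
        Protocols = 3, 4;
    }

    # After a config reload (SIGHUP), the same export may now move
    # within the PseudoFS:
    EXPORT
    {
        Export_Id = 77;
        Path = /data/projects;
        Pseudo = /exports/projects;   # changed pseudo path
        Protocols = 3, 4;
    }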
Note that this patch doesn't handle DBUS export add or remove, though
that is an option for improvement. I may add them to this patch (it
wouldn't be that hard), but I want to get this reviewed as is right now.
There are implications to a client of changing the PseudoFS. I have
tested moving an export in the PseudoFS with a client mounted. The
client will be able to continue accessing the export, though it may
see an ESTALE error if it navigates out of the export. The current
working directory will go bad and the pwd command will fail, indicating
a disconnected mount. I have also seen referencing .. from the root of
the export wrap around back to that root (I believe this is how
disconnected mounts are set up).
FSAL_PSEUDO lookups and create handles (PUTFH or any use of an NFSv3
handle where the inode isn't cached) which fail during an export update
are instead turned into ERR_FSAL_DELAY, which becomes NFS4ERR_DELAY or
NFS3ERR_JUKEBOX, forcing the client to retry once the update has
completed.
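Conceptually, the remap looks something like this (a hedged sketch,
not the patch's actual code; pseudo_lookup() and
export_update_in_progress() are illustrative names):

    /* Sketch: turn a mid-update PSEUDO failure into a retryable error. */
    status = pseudo_lookup(parent, name, &handle);
    if (FSAL_IS_ERROR(status) && export_update_in_progress()) {
            /* Sent to the client as NFS4ERR_DELAY / NFS3ERR_JUKEBOX;
             * the retry is serviced under the completed update. */
            status = fsalstat(ERR_FSAL_DELAY, 0);
    }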
Change-Id: I507dc17a651936936de82303ff1291677ce136be
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
---
M src/FSAL/FSAL_PSEUDO/handle.c
M src/MainNFSD/libganesha_nfsd.ver
M src/Protocols/NFS/nfs4_pseudo.c
M src/include/export_mgr.h
M src/include/nfs_proto_functions.h
M src/support/export_mgr.c
M src/support/exports.c
7 files changed, 560 insertions(+), 203 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/34/490334/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/490334
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I507dc17a651936936de82303ff1291677ce136be
Gerrit-Change-Number: 490334
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz <ffilzlnx(a)mindspring.com>
Gerrit-MessageType: newchange
10 months
Change in ...nfs-ganesha[next]: MDCACHE - Add MDCACHE {} config block
by Daniel Gryniewicz (GerritHub)
Daniel Gryniewicz has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/454929 )
Change subject: MDCACHE - Add MDCACHE {} config block
......................................................................
MDCACHE - Add MDCACHE {} config block
Add a config block named MDCACHE that is a copy of CACHEINODE. Both can
be configured, but MDCACHE will override CACHEINODE. This allows us to
deprecate CACHEINODE.
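For example (the parameter value is illustrative), both blocks can
coexist during migration:

    CACHEINODE
    {
        Entries_HWMark = 100000;   # still parsed for compatibility ...
    }

    MDCACHE
    {
        Entries_HWMark = 200000;   # ... but this value takes effect
    }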
Change-Id: I49012723132ae6105b904a60d1a96bb2bf78d51b
Signed-off-by: Daniel Gryniewicz <dang(a)fprintf.net>
---
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_read_conf.c
M src/config_samples/ceph.conf
M src/config_samples/config.txt
M src/config_samples/ganesha.conf.example
M src/doc/man/ganesha-cache-config.rst
M src/doc/man/ganesha-config.rst
6 files changed, 31 insertions(+), 7 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/29/454929/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/454929
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I49012723132ae6105b904a60d1a96bb2bf78d51b
Gerrit-Change-Number: 454929
Gerrit-PatchSet: 1
Gerrit-Owner: Daniel Gryniewicz <dang(a)redhat.com>
Gerrit-MessageType: newchange
4 years, 1 month
lseek gets bad offset from nfs client with ganesha/gluster which supports SEEK
by Kinglong Mee
The latest ganesha/gluster supports SEEK according to:
https://tools.ietf.org/html/draft-ietf-nfsv4-minorversion2-41#section-15.11
From the given sa_offset, find the next data_content4 of type sa_what
in the file. If the server can not find a corresponding sa_what,
then the status will still be NFS4_OK, but sr_eof would be TRUE. If
the server can find the sa_what, then the sr_offset is the start of
that content. If the sa_offset is beyond the end of the file, then
SEEK MUST return NFS4ERR_NXIO.
For a file whose filemap is:
Part 1: HOLE 0x0000000000000000 ---> 0x0000000000600000
Part 2: DATA 0x0000000000600000 ---> 0x0000000000700000
Part 3: HOLE 0x0000000000700000 ---> 0x0000000001000000
SEEK(0x700000, SEEK_DATA) gets result (sr_eof:1, sr_offset:0x70000) from ganesha/gluster;
SEEK(0x700000, SEEK_HOLE) gets result (sr_eof:0, sr_offset:0x70000) from ganesha/gluster.
If an application depends on the lseek result to walk a file's data, it may enter an infinite loop:
while (1) {
        next_pos = lseek(fd, cur_pos, seek_type);
        if (seek_type == SEEK_DATA) {
                seek_type = SEEK_HOLE;
        } else {
                seek_type = SEEK_DATA;
        }
        if (next_pos == -1)
                return;
        cur_pos = next_pos;  /* stuck: the server keeps returning the same offset */
}
The lseek syscall always returns 0x70000 from the NFS client for those
two cases; but if the underlying filesystem is ext4/f2fs, or the NFS
server is knfsd, lseek(0x700000, SEEK_DATA) fails with ENXIO.
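A minimal reproducer sketch (the mount path is made up, and the file is
assumed to have the hole/data layout above; SEEK_DATA needs _GNU_SOURCE):

    #define _GNU_SOURCE
    #include <errno.h>
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            int fd = open("/mnt/nfs/testfile", O_RDONLY);
            off_t pos;

            if (fd < 0)
                    return 1;
            /* Part 3 of the filemap: a hole that runs to end of file. */
            pos = lseek(fd, 0x700000, SEEK_DATA);
            if (pos == (off_t)-1 && errno == ENXIO)
                    printf("ENXIO - knfsd / ext4 / f2fs behaviour\n");
            else
                    printf("0x%jx - ganesha/gluster behaviour\n", (uintmax_t)pos);
            close(fd);
            return 0;
    }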
I want to know:
should I fix ganesha/gluster to return ENXIO for the first case, as knfsd does?
or should I fix the NFS client to return ENXIO for the first case?
thanks,
Kinglong Mee
4 years, 3 months
Re: Re: [NFS-Ganesha-Devel] Re: [NFS-Ganesha-Devel] Re: Re: ganesha hang more than five hours
by QR
V2.7.6 already contains 11e0e3, will try with eb98f5.
whence_is_name is false, so it's not a whence-is-name FSAL.
Thanks Dang.
--------------------------------
----- Original Message -----
From: Daniel Gryniewicz <dang(a)redhat.com>
To: zhbingyin(a)sina.com, Daniel Gryniewicz <dgryniew(a)redhat.com>, ganesha-devel <devel(a)lists.nfs-ganesha.org>
Subject: [NFS-Ganesha-Devel] Re: [NFS-Ganesha-Devel] Re: [NFS-Ganesha-Devel] Re: Re: ganesha hang more than five hours
Date: 2020-04-24 21:51
Nothing directly related to the content lock. However, there have been
several use-after-free races fixed, and it's not impossible that trying
to interact with a freed lock will cause a hang. The commits in
question are:
11e0e375e40658267cbf449afacaa53a136f7097
eb98f5b855147f44e79fe08dcec8d5057b05ea30
There have been quite a few fixes to readdir since 2.7.6 (and
specifically to whence-is-name; I believe your FSAL is a whence-is-name
FSAL?) so you might want to look into updating to a newer version.
2.8.4 should come out next week, and 3.3 soon after that.
Daniel
On 4/23/20 11:17 AM, QR wrote:
> Hi Dang, I generate a core dump for this.
>
> It seems something wrong with "entry->content_lock".
> Is there a known issue for this? Thanks in advance.
>
> Ganesha server info
> ganesha version: V2.7.6
> FSAL : In house
> nfs client info
> nfs version : nfs v3
> client info : Centos 7.4
>
> =======================================================================================================================================================================
> (gdb) thread 204
> [Switching to thread 204 (Thread 0x7fa7932f2700 (LWP 348))]
> #0 0x00007faa5ba2cf4d in __lll_lock_wait () from /lib64/libpthread.so.0
> (gdb) bt
> #0 0x00007faa5ba2cf4d in __lll_lock_wait () from /lib64/libpthread.so.0
> #1 0x00007faa5ba28d02 in _L_lock_791 () from /lib64/libpthread.so.0
> #2 0x00007faa5ba28c08 in pthread_mutex_lock () from /lib64/libpthread.so.0
> #3 0x0000000000528657 in _mdcache_lru_unref_chunk
> (chunk=0x7fa7e03e31d0, func=0x59abd0 <__func__.20247>
> "mdcache_clean_dirent_chunks", line=579)
> at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c:2066
> #4 0x000000000053894c in mdcache_clean_dirent_chunks (entry=0x7fa740035400)
> at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:578
> #5 0x0000000000538a30 in mdcache_dirent_invalidate_all
> (entry=0x7fa740035400)
> at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:603
> #6 0x00000000005376fc in mdc_clean_entry (entry=0x7fa740035400) at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:302
> #7 0x0000000000523559 in mdcache_lru_clean (entry=0x7fa740035400) at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c:592
> #8 0x00000000005278b1 in mdcache_lru_get (sub_handle=0x7fa7383cdf40) at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c:1821
> #9 0x0000000000536e91 in _mdcache_alloc_handle (export=0x1b4eed0,
> sub_handle=0x7fa7383cdf40, fs=0x0, reason=MDC_REASON_DEFAULT,
> func=0x59ac10 <__func__.20274> "mdcache_new_entry", line=691)
> at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:174
> #10 0x0000000000538c50 in mdcache_new_entry (export=0x1b4eed0,
> sub_handle=0x7fa7383cdf40, attrs_in=0x7fa7932f00a0,
> attrs_out=0x7fa7932f0760, new_directory=false, entry=0x7fa7932f0018,
> state=0x0, reason=MDC_REASON_DEFAULT) at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:690
> #11 0x000000000052d156 in mdcache_alloc_and_check_handle
> (export=0x1b4eed0, sub_handle=0x7fa7383cdf40, new_obj=0x7fa7932f01b0,
> new_directory=false, attrs_in=0x7fa7932f00a0,
> attrs_out=0x7fa7932f0760, tag=0x5999a4 "lookup ",
> parent=0x7fa878288cb0, name=0x7fa738229270 "cer_7_0.5",
> invalidate=0x7fa7932f009f, state=0x0)
> at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c:100
> #12 0x000000000053aea1 in mdc_lookup_uncached
> (mdc_parent=0x7fa878288cb0, name=0x7fa738229270 "cer_7_0.5",
> new_entry=0x7fa7932f02c8, attrs_out=0x7fa7932f0760)
> at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:1391
> #13 0x000000000053ab27 in mdc_lookup (mdc_parent=0x7fa878288cb0,
> name=0x7fa738229270 "cer_7_0.5", uncached=true,
> new_entry=0x7fa7932f02c8, attrs_out=0x7fa7932f0760)
> at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:1325
> #14 0x000000000052d5bb in mdcache_lookup (parent=0x7fa878288ce8,
> name=0x7fa738229270 "cer_7_0.5", handle=0x7fa7932f0878,
> attrs_out=0x7fa7932f0760)
> at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c:181
> #15 0x0000000000431a60 in fsal_lookup (parent=0x7fa878288ce8,
> name=0x7fa738229270 "cer_7_0.5", obj=0x7fa7932f0878,
> attrs_out=0x7fa7932f0760)
> at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/FSAL/fsal_helper.c:683
> #16 0x000000000048e83e in nfs3_lookup (arg=0x7fa7381bdc88,
> req=0x7fa7381bd580, res=0x7fa73839dab0)
> at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/Protocols/NFS/nfs3_lookup.c:104
> #17 0x000000000045703b in nfs_rpc_process_request
> (reqdata=0x7fa7381bd580) at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/MainNFSD/nfs_worker_thread.c:1329
> #18 0x000000000045781f in nfs_rpc_process_request_slowio
> (reqdata=0x7fa7381bd580, slowio_check_cb=0x553b4a <nfs3_timeout_proc>)
> at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/MainNFSD/nfs_worker_thread.c:1542
> #19 0x0000000000457955 in nfs_rpc_valid_NFS (req=0x7fa7381bd580) at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/MainNFSD/nfs_worker_thread.c:1586
> #20 0x00007faa5c901435 in svc_vc_decode (req=0x7fa7381bd580) at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/libntirpc/src/svc_vc.c:829
> #21 0x000000000044a497 in nfs_rpc_decode_request (xprt=0x7fa8840027d0,
> xdrs=0x7fa7384befa0)
> at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/MainNFSD/nfs_rpc_dispatcher_thread.c:1345
> #22 0x00007faa5c901346 in svc_vc_recv (xprt=0x7fa8840027d0) at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/libntirpc/src/svc_vc.c:802
> #23 0x00007faa5c8fda92 in svc_rqst_xprt_task (wpe=0x7fa8840029e8) at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/libntirpc/src/svc_rqst.c:769
> #24 0x00007faa5c8fdeea in svc_rqst_epoll_events (sr_rec=0x1b62c60,
> n_events=1) at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/libntirpc/src/svc_rqst.c:941
> #25 0x00007faa5c8fe17f in svc_rqst_epoll_loop (sr_rec=0x1b62c60) at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/libntirpc/src/svc_rqst.c:1014
> #26 0x00007faa5c8fe232 in svc_rqst_run_task (wpe=0x1b62c60) at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/libntirpc/src/svc_rqst.c:1050
> #27 0x00007faa5c906dbb in work_pool_thread (arg=0x7fa7540008c0) at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/libntirpc/src/work_pool.c:181
> #28 0x00007faa5ba26dc5 in start_thread () from /lib64/libpthread.so.0
> #29 0x00007faa5b33321d in clone () from /lib64/libc.so.6
> (gdb) frame 3
> #3 0x0000000000528657 in _mdcache_lru_unref_chunk
> (chunk=0x7fa7e03e31d0, func=0x59abd0 <__func__.20247>
> "mdcache_clean_dirent_chunks", line=579)
> at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c:2066
> 2066 in
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c
> (gdb) p qlane->mtx
> $42 = {__data = {__lock = 2, __count = 0, *__owner = 225*, __nusers = 1,
> __kind = 0, __spins = 0, __elision = 0, __list = {__prev = 0x0, __next =
> 0x0}},
> __size = "\002\000\000\000\000\000\000\000\341\000\000\000\001",
> '\000' <repeats 26 times>, __align = 2}
> =======================================================================================================================================================================
> (gdb) info threads
> 84 Thread 0x7fa9a68e8700 (LWP 228) 0x00007faa5ba2a03e in
> pthread_rwlock_wrlock () from /lib64/libpthread.so.0
> 83 Thread 0x7fa9a69e9700 (LWP 227) 0x00007faa5ba29e24 in
> pthread_rwlock_rdlock () from /lib64/libpthread.so.0
> *82 Thread 0x7fa9a6aea700 (LWP 225)* 0x00007faa5ba2cf4d in
> __lll_lock_wait () from /lib64/libpthread.so.0
> 81 Thread 0x7fa9a6beb700 (LWP 226) 0x00007faa5ba2a03e in
> pthread_rwlock_wrlock () from /lib64/libpthread.so.0
> 80 Thread 0x7fa9a6cec700 (LWP 224) 0x00007faa5ba2a03e in
> pthread_rwlock_wrlock () from /lib64/libpthread.so.0
> 79 Thread 0x7fa9a6ded700 (LWP 223) 0x00007faa5ba29e24 in
> pthread_rwlock_rdlock () from /lib64/libpthread.so.0
> 78 Thread 0x7fa9a6eee700 (LWP 222) 0x00007faa5ba2a03e in
> pthread_rwlock_wrlock () from /lib64/libpthread.so.0
> 77 Thread 0x7fa9a6fef700 (LWP 221) 0x00007faa5ba29e24 in
> pthread_rwlock_rdlock () from /lib64/libpthread.so.0
> 76 Thread 0x7fa9a70f0700 (LWP 220) 0x00007faa5ba2a03e in
> pthread_rwlock_wrlock () from /lib64/libpthread.so.0
> 75 Thread 0x7fa9a71f1700 (LWP 219) 0x00007faa5ba2a03e in
> pthread_rwlock_wrlock () from /lib64/libpthread.so.0
> 74 Thread 0x7fa9a72f2700 (LWP 218) 0x00007faa5ba29e24 in
> pthread_rwlock_rdlock () from /lib64/libpthread.so.0
> 73 Thread 0x7fa9a73f3700 (LWP 217) 0x00007faa5ba2a03e in
> pthread_rwlock_wrlock () from /lib64/libpthread.so.0
> =======================================================================================================================================================================
> (gdb) *thread 82*
> [Switching to thread 82 (Thread 0x7fa9a6aea700 (LWP 225))]
> #0 0x00007faa5ba2cf4d in __lll_lock_wait () from /lib64/libpthread.so.0
> (gdb) bt
> #0 0x00007faa5ba2cf4d in __lll_lock_wait () from /lib64/libpthread.so.0
> #1 0x00007faa5ba2a307 in _L_lock_14 () from /lib64/libpthread.so.0
> #2 0x00007faa5ba2a2b3 in pthread_rwlock_trywrlock () from
> /lib64/libpthread.so.0
> #3 0x0000000000524547 in *lru_reap_chunk_impl* (qid=LRU_ENTRY_L2,
> parent=0x7fa7f8122810)
> at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c:809
> #4 0x0000000000524a01 in mdcache_get_chunk (parent=0x7fa7f8122810,
> prev_chunk=0x0, whence=0)
> at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c:886
> #5 0x000000000053f0b6 in mdcache_populate_dir_chunk
> (directory=0x7fa7f8122810, whence=0, dirent=0x7fa9a6ae7f80,
> prev_chunk=0x0, eod_met=0x7fa9a6ae7f7f)
> at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:2554
> #6 0x0000000000540aff in mdcache_readdir_chunked
> (directory=0x7fa7f8122810, whence=0, dir_state=0x7fa9a6ae8130,
> cb=0x43215e <populate_dirent>, attrmask=122830, eod_met=0x7fa9a6ae884b)
> at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:2991
> #7 0x000000000052ee71 in mdcache_readdir (dir_hdl=0x7fa7f8122848,
> whence=0x7fa9a6ae8110, dir_state=0x7fa9a6ae8130, cb=0x43215e
> <populate_dirent>, attrmask=122830, eod_met=0x7fa9a6ae884b)
> at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c:559
> #8 0x0000000000432a85 in fsal_readdir (directory=0x7fa7f8122848,
> cookie=0, nbfound=0x7fa9a6ae884c, eod_met=0x7fa9a6ae884b,
> attrmask=122830, cb=0x491b0f <nfs3_readdirplus_callback>,
> opaque=0x7fa9a6ae8800) at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/FSAL/fsal_helper.c:1164
> #9 0x0000000000491968 in nfs3_readdirplus (arg=0x7fa91470b9f8,
> req=0x7fa91470b2f0, res=0x7fa91407b8f0)
> at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/Protocols/NFS/nfs3_readdirplus.c:310
> #10 0x000000000045703b in nfs_rpc_process_request
> (reqdata=0x7fa91470b2f0) at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/MainNFSD/nfs_worker_thread.c:1329
> #11 0x000000000045781f in nfs_rpc_process_request_slowio
> (reqdata=0x7fa91470b2f0, slowio_check_cb=0x553b4a <nfs3_timeout_proc>)
> at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/MainNFSD/nfs_worker_thread.c:1542
> #12 0x0000000000457955 in nfs_rpc_valid_NFS (req=0x7fa91470b2f0) at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/MainNFSD/nfs_worker_thread.c:1586
> #13 0x00007faa5c901435 in svc_vc_decode (req=0x7fa91470b2f0) at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/libntirpc/src/svc_vc.c:829
> #14 0x000000000044a497 in nfs_rpc_decode_request (xprt=0x7fa8840027d0,
> xdrs=0x7fa9142a1370)
> at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/MainNFSD/nfs_rpc_dispatcher_thread.c:1345
> #15 0x00007faa5c901346 in svc_vc_recv (xprt=0x7fa8840027d0) at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/libntirpc/src/svc_vc.c:802
> #16 0x00007faa5c8fda92 in svc_rqst_xprt_task (wpe=0x7fa8840029e8) at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/libntirpc/src/svc_rqst.c:769
> #17 0x00007faa5c8fdeea in svc_rqst_epoll_events (sr_rec=0x1b62c60,
> n_events=1) at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/libntirpc/src/svc_rqst.c:941
> #18 0x00007faa5c8fe17f in svc_rqst_epoll_loop (sr_rec=0x1b62c60) at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/libntirpc/src/svc_rqst.c:1014
> #19 0x00007faa5c8fe232 in svc_rqst_run_task (wpe=0x1b62c60) at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/libntirpc/src/svc_rqst.c:1050
> #20 0x00007faa5c906dbb in work_pool_thread (arg=0x7fa9380008c0) at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/libntirpc/src/work_pool.c:181
> #21 0x00007faa5ba26dc5 in start_thread () from /lib64/libpthread.so.0
> #22 0x00007faa5b33321d in clone () from /lib64/libc.so.6
> (gdb) frame 3
> #3 0x0000000000524547 in lru_reap_chunk_impl (qid=LRU_ENTRY_L2,
> parent=0x7fa7f8122810)
> at
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c:809
> 809 in
> /export/jcloud-zbs/src/jd.com/zfs/FSAL_SkyFS/nfs-ganesha/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c
> (gdb) p entry->content_lock
> $43 = {__data = {__lock = 2, __nr_readers = *32681*, __readers_wakeup =
> 0, __writer_wakeup = 0, __nr_readers_queued = 0, __nr_writers_queued =
> 0, __writer = -598294472, __shared = 32680,
> __pad1 = 140366914447067, __pad2 = 0, __flags = 0},
> __size = "\002\000\000\000\251\177", '\000' <repeats 18 times>,
> "\070\300V\334\250\177\000\000\333\346\022\270\251\177", '\000' <repeats
> 17 times>, __align = 140363826200578}
> =======================================================================================================================================================================
>
>
> --------------------------------
>
>
> ----- Original Message -----
> From: "QR" <zhbingyin(a)sina.com>
> To: "Daniel Gryniewicz" <dgryniew(a)redhat.com>, "ganesha-devel"
> <devel(a)lists.nfs-ganesha.org>,
> Subject: [NFS-Ganesha-Devel] Re: Re: ganesha hang more than five hours
> Date: 2020-02-26 18:21
>
> Not yet, because the docker container did not enable core dumps.
>
> Will try to create a full backtrace for this, thanks.
>
>
> --------------------------------
>
>
> ----- Original Message -----
> From: Daniel Gryniewicz <dgryniew(a)redhat.com>
> To: devel(a)lists.nfs-ganesha.org
> Subject: [NFS-Ganesha-Devel] Re: ganesha hang more than five hours
> Date: 2020-02-25 21:20
>
> No, definitely not a known issue. Do you have a full backtrace of one
> (or several, if they're different) hung threads?
> Daniel
> On 2/24/20 9:28 PM, QR wrote:
> > Hi Dang,
> >
> > Ganesha hangs for more than five hours. It seems that 198 svc threads
> > hang in nfs3_readdirplus.
> > Is there a known issue about this? Thanks in advance.
> >
> > Ganesha server info
> > ganesha version: V2.7.6
> > FSAL : In house
> > nfs client info
> > nfs version : nfs v3
> > client info : CentOS 7.4
> >
> > _______________________________________________
> > Devel mailing list -- devel(a)lists.nfs-ganesha.org
> > To unsubscribe send an email to devel-leave(a)lists.nfs-ganesha.org
> _______________________________________________
> Devel mailing list -- devel(a)lists.nfs-ganesha.org
> To unsubscribe send an email to devel-leave(a)lists.nfs-ganesha.org
> _______________________________________________
> Devel mailing list -- devel(a)lists.nfs-ganesha.org
> To unsubscribe send an email to devel-leave(a)lists.nfs-ganesha.org
>
> _______________________________________________
> Devel mailing list -- devel(a)lists.nfs-ganesha.org
> To unsubscribe send an email to devel-leave(a)lists.nfs-ganesha.org
>
_______________________________________________
Devel mailing list -- devel(a)lists.nfs-ganesha.org
To unsubscribe send an email to devel-leave(a)lists.nfs-ganesha.org
4 years, 7 months
Change in ...nfs-ganesha[next]: Move put_gsh_export(op_ctx->ctx_export) into op_context functions
by Frank Filz (GerritHub)
Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/491405 )
Change subject: Move put_gsh_export(op_ctx->ctx_export) into op_context functions
......................................................................
Move put_gsh_export(op_ctx->ctx_export) into op_context functions
To help enforce the lifetime of a gsh_export reference associated
with an op context, take care of releasing the reference in the
op context management functions.
Note that some of the removed put_gsh_export() calls do not take
op_ctx->ctx_export as the parameter, but inspection of the surrounding
code shows the export being put is the one in ctx_export anyway.
All of this paves the way for much simpler handling of refstr when
added to the export.
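The resulting call-site change, roughly (a hedged sketch of the stated
intent; release_op_context() is named elsewhere in this series, and its
exact signature is an assumption):

    /* Before: every call site dropped the reference itself. */
    put_gsh_export(op_ctx->ctx_export);
    op_ctx->ctx_export = NULL;

    /* After: the op-context teardown owns the put. */
    release_op_context();   /* internally puts ctx_export if set */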
Change-Id: Iec5c3c44d9b3a5e35a7e49a60054468d1d3ac3f1
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
---
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c
M src/FSAL/commonlib.c
M src/FSAL/fsal_helper.c
M src/FSAL_UP/fsal_up_top.c
M src/MainNFSD/nfs_worker_thread.c
M src/Protocols/9P/9p_proto_tools.c
M src/Protocols/NFS/mnt_Export.c
M src/Protocols/NFS/mnt_Mnt.c
M src/Protocols/NFS/nfs4_Compound.c
M src/Protocols/NFS/nfs4_op_free_stateid.c
M src/Protocols/NFS/nfs4_op_layoutreturn.c
M src/Protocols/NFS/nfs4_op_lookup.c
M src/Protocols/NFS/nfs4_op_lookupp.c
M src/Protocols/NFS/nfs4_op_putfh.c
M src/Protocols/NFS/nfs4_op_putrootfh.c
M src/Protocols/NFS/nfs4_op_readdir.c
M src/Protocols/NFS/nfs4_op_restorefh.c
M src/Protocols/NFS/nfs4_op_savefh.c
M src/Protocols/NFS/nfs4_op_secinfo.c
M src/Protocols/NFS/nfs4_op_secinfo_no_name.c
M src/Protocols/NFS/nfs4_pseudo.c
M src/Protocols/RQUOTA/rquota_getquota.c
M src/Protocols/RQUOTA/rquota_setquota.c
M src/SAL/nfs4_state.c
M src/SAL/state_async.c
M src/SAL/state_deleg.c
M src/SAL/state_layout.c
M src/SAL/state_lock.c
M src/gtest/gtest.hh
M src/support/ds.c
M src/support/export_mgr.c
M src/support/exports.c
32 files changed, 66 insertions(+), 137 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/05/491405/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/491405
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: Iec5c3c44d9b3a5e35a7e49a60054468d1d3ac3f1
Gerrit-Change-Number: 491405
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz <ffilzlnx(a)mindspring.com>
Gerrit-MessageType: newchange
4 years, 7 months
Change in ...nfs-ganesha[next]: EXPORT: Cleanup initial refcounting and alloc/free of exports
by Frank Filz (GerritHub)
Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/491404 )
Change subject: EXPORT: Cleanup initial refcounting and alloc/free of exports
......................................................................
EXPORT: Cleanup initial refcounting and alloc/free of exports
Make alloc_export() return a reference. In all cases, put_gsh_export()
can safely be used to dispose of this reference and cause the export to
be ultimately freed.
With that, insert_gsh_export() only adds the sentinel reference.
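A hedged sketch of the lifecycle this sets up (configure_export() is an
illustrative stand-in for the config-parsing path, not a real function):

    static bool create_export(void)
    {
            struct gsh_export *exp = alloc_export();  /* holds a reference */

            if (!configure_export(exp)) {
                    put_gsh_export(exp);  /* safe here: frees the half-built export */
                    return false;
            }
            insert_gsh_export(exp);       /* adds only the sentinel reference */
            put_gsh_export(exp);          /* drop the initial reference */
            return true;
    }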
Move the init_op_context_simple in build_default_root to once the
export is as set up as much as if it was being built by the config
parser.
Since there is no longer a reason to explicitly call free_export just
pull the code into _put_gsh_export.
Also make sure that free_export_resources sets up an op context if
the export being released is not the one in op_ctx. Since that export
is already on its way to death, just remove the export from the
op context, so that a later release_op_context for a temporary op
context will NOT call put_gsh_export - it doesn't need to, since the
refcount is already 0. This also assures that if the final refcount on
the export in op_ctx is released, op_ctx is now poisoned.
Change-Id: I0dcbbc4a138b0c673d1ac14f89c1a63ba832d06b
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
---
M src/include/export_mgr.h
M src/support/export_mgr.c
M src/support/exports.c
3 files changed, 55 insertions(+), 56 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/04/491404/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/491404
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I0dcbbc4a138b0c673d1ac14f89c1a63ba832d06b
Gerrit-Change-Number: 491404
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz <ffilzlnx(a)mindspring.com>
Gerrit-MessageType: newchange
4 years, 7 months
Change in ...nfs-ganesha[next]: Add struct saved_export_context to standardize saving op_ctx bits
by Frank Filz (GerritHub)
Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/491403 )
Change subject: Add struct saved_export_context to standardize saving op_ctx bits
......................................................................
Add struct saved_export_context to standardize saving op_ctx bits
There are times when a process just wants to make a temporary export
change in the op_context but doesn't want to craft a whole new
op_context. This addition provides a standard way of saving the
export-related bits of an op_context in order to be able to restore
them after the temporary change.
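A sketch of the intended usage; the save/restore function names and
do_temporary_work() are assumptions for illustration, only
struct saved_export_context comes from this change (and
set_op_context_export from later in this series):

    struct saved_export_context saved;

    save_op_context_export(&saved);       /* stash the export bits of op_ctx */
    set_op_context_export(other_export);  /* temporary change */
    do_temporary_work();
    restore_op_context_export(&saved);    /* restore the original export bits */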
Change-Id: Ifad0c091c10687e75e7a2c39eabaf9a5b7d61986
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
---
M src/FSAL/commonlib.c
M src/FSAL/fsal_helper.c
M src/Protocols/NFS/nfs4_op_free_stateid.c
M src/Protocols/NFS/nfs4_op_readdir.c
M src/Protocols/NFS/nfs4_op_savefh.c
M src/Protocols/NFS/nfs4_op_secinfo.c
M src/SAL/nfs4_state.c
M src/SAL/state_layout.c
M src/SAL/state_lock.c
M src/include/fsal.h
M src/include/fsal_api.h
11 files changed, 115 insertions(+), 63 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/03/491403/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/491403
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: Ifad0c091c10687e75e7a2c39eabaf9a5b7d61986
Gerrit-Change-Number: 491403
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz <ffilzlnx(a)mindspring.com>
Gerrit-MessageType: newchange
4 years, 7 months
Change in ...nfs-ganesha[next]: Assure and clarify references held for op_ctx->ctx_export
by Frank Filz (GerritHub)
Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/491402 )
Change subject: Assure and clarify references held for op_ctx->ctx_export
......................................................................
Assure and clarify references held for op_ctx->ctx_export
Some places were not holding a gsh_export reference for an export
referenced in op_ctx->ctx_export. Make sure one is always held. Also
clarify the scope, using clear_op_context_export() where appropriate.
Change-Id: I854458adb56e670435aab80a88f8bf1d7abaf580
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
---
M src/Protocols/NFS/mnt_Export.c
M src/Protocols/NFS/nfs4_op_layoutreturn.c
M src/Protocols/NFS/nfs4_op_lookupp.c
M src/Protocols/NFS/nfs4_op_savefh.c
M src/SAL/state_lock.c
5 files changed, 30 insertions(+), 31 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/02/491402/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/491402
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I854458adb56e670435aab80a88f8bf1d7abaf580
Gerrit-Change-Number: 491402
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz <ffilzlnx(a)mindspring.com>
Gerrit-MessageType: newchange
4 years, 7 months
Change in ...nfs-ganesha[next]: Introduce set_op_context_export and clear_op_context_export
by Frank Filz (GerritHub)
Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/491401 )
Change subject: Introduce set_op_context_export and clear_op_context_export
......................................................................
Introduce set_op_context_export and clear_op_context_export
This allows centralized management of op_ctx->ctx_export.
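The basic pattern, sketched (the function names come from the change
subject; their exact signatures and semantics here are assumptions):

    set_op_context_export(export);  /* op_ctx->ctx_export = export,
                                     * with its reference tracked */

    /* ... code runs with a valid op_ctx->ctx_export ... */

    clear_op_context_export();      /* releases the reference and
                                     * NULLs op_ctx->ctx_export */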
Change-Id: I8954daa5a8b3534792634e570bc2e53f62abafa3
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
---
M src/FSAL/commonlib.c
M src/FSAL/fsal_helper.c
M src/FSAL_UP/fsal_up_top.c
M src/MainNFSD/libganesha_nfsd.ver
M src/MainNFSD/nfs_worker_thread.c
M src/Protocols/9P/9p_attach.c
M src/Protocols/9P/9p_proto_tools.c
M src/Protocols/NFS/mnt_Export.c
M src/Protocols/NFS/mnt_Mnt.c
M src/Protocols/NFS/nfs4_Compound.c
M src/Protocols/NFS/nfs4_op_free_stateid.c
M src/Protocols/NFS/nfs4_op_layoutreturn.c
M src/Protocols/NFS/nfs4_op_lookup.c
M src/Protocols/NFS/nfs4_op_lookupp.c
M src/Protocols/NFS/nfs4_op_putfh.c
M src/Protocols/NFS/nfs4_op_putrootfh.c
M src/Protocols/NFS/nfs4_op_readdir.c
M src/Protocols/NFS/nfs4_op_restorefh.c
M src/Protocols/NFS/nfs4_op_savefh.c
M src/Protocols/NFS/nfs4_op_secinfo.c
M src/Protocols/NFS/nfs4_op_secinfo_no_name.c
M src/Protocols/NFS/nfs4_pseudo.c
M src/Protocols/NLM/nlm_Granted_Res.c
M src/SAL/nfs4_state.c
M src/SAL/state_layout.c
M src/SAL/state_lock.c
M src/gtest/gtest.hh
M src/include/fsal.h
M src/include/nfs_proto_data.h
M src/support/export_mgr.c
30 files changed, 181 insertions(+), 221 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/01/491401/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/491401
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I8954daa5a8b3534792634e570bc2e53f62abafa3
Gerrit-Change-Number: 491401
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz <ffilzlnx(a)mindspring.com>
Gerrit-MessageType: newchange
4 years, 7 months
Change in ...nfs-ganesha[next]: Eliminate extra function - unexport
by Frank Filz (GerritHub)
Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/491400 )
Change subject: Eliminate extra function - unexport
......................................................................
Eliminate extra function - unexport
The function unexport() is a thin wrapper around release_export().
Clean it up and simplify op_context handling.
Change-Id: Iafe656d8f260abb60969f39d01e2cbe44fee5f5b
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
---
M src/include/nfs_exports.h
M src/support/export_mgr.c
M src/support/exports.c
3 files changed, 10 insertions(+), 35 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/00/491400/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/491400
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: Iafe656d8f260abb60969f39d01e2cbe44fee5f5b
Gerrit-Change-Number: 491400
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz <ffilzlnx(a)mindspring.com>
Gerrit-MessageType: newchange
4 years, 7 months