NFSv4 open reclaim and access check
by Suhrud Patankar
Hello,
There is an issue with git clone from an NFSv4 client when Ganesha gets
restarted in the middle of the operation.
What git clone does is:
- create+open the file in RW mode
- setattr the mode to 0444
- keep writing to the file
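This flow can be reproduced locally with plain POSIX calls: permission is checked at open() time, so writes through an already-open RW descriptor keep succeeding after the chmod to 0444. That open-time grant is exactly the state the client tries to re-establish with its open reclaim. A minimal sketch (local demo, not Ganesha code):

```c
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

/* Returns the number of bytes written after the mode change. */
static ssize_t write_after_chmod(const char *path)
{
	/* create+open the file in RW mode */
	int fd = open(path, O_CREAT | O_RDWR | O_TRUNC, 0644);

	if (fd < 0)
		return -1;

	/* setattr the mode to 0444 (read-only) */
	if (fchmod(fd, 0444) < 0) {
		close(fd);
		return -1;
	}

	/* keep writing: this still succeeds, because POSIX checks
	 * permission at open() time, not on every write() */
	ssize_t n = write(fd, "data\n", 5);

	close(fd);
	unlink(path);
	return n;
}
```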
The issue is that when Ganesha gets restarted in between, we get an open
reclaim with no_create. In this case the owner_skip flag is not set, so
the open fails with NFS4ERR_ACCESS.
I think we should not do any access checks for an open reclaim.
@Malahal, the patch
https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/449537 removed the
open flag that indicated whether the open is a reclaim.
Can we revert that and then skip test_access for all open reclaims?
Something like:
https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/461556
Thanks & Regards,
Suhrud
5 years, 5 months
Change in ...nfs-ganesha[next]: Skip access checks for open reclaims.
by Suhrud Patankar (GerritHub)
Suhrud Patankar has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/461556 )
Change subject: Skip access checks for open reclaims.
......................................................................
Skip access checks for open reclaims.
Test case:
- create+open file in RW mode
- setattr on file as 0444
- Restart Ganesha service
The open reclaim sent by the client then fails because the owner does
not have write access.
The git clone operation uses this flow.
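The fix amounts to short-circuiting the permission test when the open is a reclaim, since the client already held the open before the restart. A minimal sketch of the idea; the names is_reclaim and owner_has_access are stand-ins, not the actual fsal_helper.c code:

```c
#include <stdbool.h>

/* Simplified stand-in for an FSAL status code (sketch only). */
typedef enum { ERR_FSAL_NO_ERROR = 0, ERR_FSAL_ACCESS = 13 } fsal_errors_t;

/* For a reclaim, the client is re-establishing state it already held
 * before the server restart, so no fresh access check is performed;
 * the mode may legitimately have become 0444 after the original open. */
static fsal_errors_t open_access_check(bool is_reclaim, bool owner_has_access)
{
	if (is_reclaim)
		return ERR_FSAL_NO_ERROR;

	return owner_has_access ? ERR_FSAL_NO_ERROR : ERR_FSAL_ACCESS;
}
```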
Change-Id: I97e0e323fde14ea97d7af5c6ddafba843e1b80e9
Signed-off-by: Suhrud Patankar <suhrudpatankar(a)gmail.com>
---
M src/FSAL/fsal_helper.c
1 file changed, 7 insertions(+), 0 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/56/461556/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/461556
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I97e0e323fde14ea97d7af5c6ddafba843e1b80e9
Gerrit-Change-Number: 461556
Gerrit-PatchSet: 1
Gerrit-Owner: Suhrud Patankar <suhrudpatankar(a)gmail.com>
Gerrit-MessageType: newchange
5 years, 5 months
Change in ...nfs-ganesha[next]: Fix lookup_path to avoid Crit message for non-existing paths
by Malahal (GerritHub)
Malahal has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/461460 )
Change subject: Fix lookup_path to avoid Crit message for non-existing paths
......................................................................
Fix lookup_path to avoid Crit message for non-existing paths
When someone mounts a path (NFSv3), we do partial path lookups, as they
might be mounting deep inside an export. The actual path lookup can then
fail, so avoid printing such non-existent-path errors as Crit messages.
They are now logged with LogDebug.
Change-Id: Icda64940274dab6c80335d3f5801d50036853398
Signed-off-by: Malahal Naineni <malahal(a)us.ibm.com>
---
M src/FSAL/FSAL_GPFS/handle.c
M src/FSAL/FSAL_VFS/handle.c
2 files changed, 4 insertions(+), 4 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/60/461460/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/461460
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: Icda64940274dab6c80335d3f5801d50036853398
Gerrit-Change-Number: 461460
Gerrit-PatchSet: 1
Gerrit-Owner: Malahal <malahal(a)gmail.com>
Gerrit-MessageType: newchange
5 years, 5 months
V2.7.5 - Core - Segmentation fault in mdcache_get_chunk
by Rungta, Vandana
Daniel,
prev_chunk->dirents is an empty list: num_entries says 112, but the next and prev pointers indicate an empty list. I did not find any code that decrements num_entries as dirents are removed, only a reset back to zero when the whole list is emptied.
1. Should the code below check for an empty list? If empty, go backwards to prev_chunk's prev, or just set reload_ck to whence?
2. Does the code in unchunk_dirent that removes a dirent from a chunk also need to decrement num_entries?
3. Will not decrementing num_entries have an effect on the split logic in place_new_dirent, or does that code at least need to handle empty chunks?
4. Do empty chunks get cleaned up?
***** Code snippet for 1 and core: mdcache_get_chunk. ****
if (prev_chunk) {
	chunk->reload_ck = glist_last_entry(&prev_chunk->dirents,
					    mdcache_dir_entry_t,
					    chunk_list)->ck;	/* <<<< line 914 */
	/* unref prev_chunk as we had got a ref on prev_chunk
	 * at the beginning of this function
	 */
	mdcache_lru_unref_chunk(prev_chunk);
} else {
	chunk->reload_ck = whence;
}
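Question 1's guard could look like the following. glist_last_entry on an empty list hands back the list head reinterpreted as a dirent, so the ->ck dereference at line 914 reads garbage (or faults); checking glist_empty first and falling back to whence avoids that. Simplified stand-ins for the mdcache types, not the actual code:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-ins for Ganesha's glist and mdcache types. */
struct glist_head { struct glist_head *next, *prev; };

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))
#define glist_last_entry(head, type, member) \
	container_of((head)->prev, type, member)

static bool glist_empty(const struct glist_head *h) { return h->next == h; }

typedef struct mdcache_dir_entry {
	struct glist_head chunk_list;
	uint64_t ck;
} mdcache_dir_entry_t;

struct dir_chunk {
	struct glist_head dirents;
	uint64_t reload_ck;
};

/* Guarded version of the snippet: only trust prev_chunk's last dirent
 * when the list actually has one; otherwise fall back to whence. */
static void set_reload_ck(struct dir_chunk *chunk,
			  struct dir_chunk *prev_chunk, uint64_t whence)
{
	if (prev_chunk && !glist_empty(&prev_chunk->dirents))
		chunk->reload_ck = glist_last_entry(&prev_chunk->dirents,
						    mdcache_dir_entry_t,
						    chunk_list)->ck;
	else
		chunk->reload_ck = whence;
}
```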
Note: Both this core and a different core had a ref_count of 1 for prev_chunk.
******* core *****
(gdb) info locals
lru = 0x0
chunk = 0x75320370
__func__ = "mdcache_get_chunk"
(gdb) info args
parent = 0x2f837f60
prev_chunk = 0x2f70dc50
whence = 18525227
(gdb) print *chunk
$2 = {chunks = {next = 0x2f838200, prev = 0x40ab5700}, dirents = {next = 0x75320380, prev = 0x75320380}, parent = 0x2f837f60,
chunk_lru = {q = {next = 0x0, prev = 0x0}, qid = LRU_ENTRY_NONE, refcnt = 0, flags = 0, lane = 0, cf = 0}, reload_ck = 0,
next_ck = 0, num_entries = 0}
(gdb) print *prev_chunk
$3 = {chunks = {next = 0x39a24aa0, prev = 0x55ed7f00}, dirents = {next = 0x2f70dc60, prev = 0x2f70dc60}, parent = 0x0,
chunk_lru = {q = {next = 0x0, prev = 0x0}, qid = LRU_ENTRY_L1, refcnt = 1, flags = 0, lane = 0, cf = 0}, reload_ck = 0,
next_ck = 0, num_entries = 112}
(gdb) bt
#0 mdcache_get_chunk (parent=0x2f837f60, prev_chunk=0x2f70dc50, whence=18525227)
at /src/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c:914
#1 0x0000000000541da9 in mdcache_populate_dir_chunk (directory=0x2f837f60, whence=18525227, dirent=0x7f0edeea3940,
prev_chunk=0x2f70dc50, eod_met=0x7f0edeea393f) at /src/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:2554
#2 0x00000000005437cd in mdcache_readdir_chunked (directory=0x2f837f60, whence=18525227, dir_state=0x7f0edeea3af0,
cb=0x4325fd <populate_dirent>, attrmask=0, eod_met=0x7f0edeea3feb)
at /src/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:2985
#3 0x0000000000531b60 in mdcache_readdir (dir_hdl=0x2f837f98, whence=0x7f0edeea3ad0, dir_state=0x7f0edeea3af0,
cb=0x4325fd <populate_dirent>, attrmask=0, eod_met=0x7f0edeea3feb)
at /src/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c:559
#4 0x0000000000432f24 in fsal_readdir (directory=0x2f837f98, cookie=18525227, nbfound=0x7f0edeea3fec, eod_met=0x7f0edeea3feb,
attrmask=0, cb=0x491f59 <nfs3_readdir_callback>, opaque=0x7f0edeea3fa0) at /src/src/FSAL/fsal_helper.c:1164
#5 0x0000000000491d41 in nfs3_readdir (arg=0x7534a418, req=0x75349d10, res=0x75289cf0)
at /src/src/Protocols/NFS/nfs3_readdir.c:289
#6 0x0000000000457e26 in nfs_rpc_process_request (reqdata=0x75349d10) at /src/src/MainNFSD/nfs_worker_thread.c:1328
#7 0x00000000004585e5 in nfs_rpc_valid_NFS (req=0x75349d10) at /src/src/MainNFSD/nfs_worker_thread.c:1548
#8 0x00007f0ee83c4034 in svc_vc_decode (req=0x75349d10) at /src/src/libntirpc/src/svc_vc.c:829
#9 0x000000000044afd5 in nfs_rpc_decode_request (xprt=0x1453b20, xdrs=0x75363020)
at /src/src/MainNFSD/nfs_rpc_dispatcher_thread.c:1345
#10 0x00007f0ee83c3f45 in svc_vc_recv (xprt=0x1453b20) at /src/src/libntirpc/src/svc_vc.c:802
#11 0x00007f0ee83c0689 in svc_rqst_xprt_task (wpe=0x1453d38) at /src/src/libntirpc/src/svc_rqst.c:769
#12 0x00007f0ee83c0ae6 in svc_rqst_epoll_events (sr_rec=0x1434020, n_events=1) at /src/src/libntirpc/src/svc_rqst.c:941
#13 0x00007f0ee83c0d7b in svc_rqst_epoll_loop (sr_rec=0x1434020) at /src/src/libntirpc/src/svc_rqst.c:1014
#14 0x00007f0ee83c0e2e in svc_rqst_run_task (wpe=0x1434020) at /src/src/libntirpc/src/svc_rqst.c:1050
#15 0x00007f0ee83c97f6 in work_pool_thread (arg=0x144d470) at /src/src/libntirpc/src/work_pool.c:181
#16 0x00007f0ee73e7de5 in start_thread () from /lib64/libpthread.so.0
#17 0x00007f0ee6ceef1d in clone () from /lib64/libc.so.6
5 years, 5 months
Change in ...nfs-ganesha[next]: Make sure utf8string is NUL terminated everywhere
by Frank Filz (GerritHub)
Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/461064 )
Change subject: Make sure utf8string is NUL terminated everywhere
......................................................................
Make sure utf8string is NUL terminated everywhere
Several places already assumed NUL termination; just make it so
everywhere, including in XDR decode.
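The pattern behind the change can be sketched as: wherever a utf8string is filled in (including XDR decode), allocate len + 1 bytes and write the terminator, so consumers can use utf8string_val as a C string. A simplified illustration; utf8string mirrors the nfsv41.h shape, and the helper name here is hypothetical:

```c
#include <stdlib.h>
#include <string.h>

/* Mirrors the nfsv41.h shape of a utf8string. */
typedef struct {
	unsigned int utf8string_len;
	char *utf8string_val;
} utf8string;

/* Hypothetical helper: copy a counted buffer into a utf8string,
 * always leaving room for and writing a trailing NUL. */
static int utf8string_set(utf8string *dst, const char *buf, unsigned int len)
{
	dst->utf8string_val = malloc(len + 1);	/* +1 for the NUL */
	if (dst->utf8string_val == NULL)
		return -1;

	memcpy(dst->utf8string_val, buf, len);
	dst->utf8string_val[len] = '\0';
	dst->utf8string_len = len;	/* length excludes the NUL */
	return 0;
}
```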
Change-Id: I60b7481afc4948a23a3c286e63291c35719807c7
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
---
M src/FSAL/FSAL_GPFS/handle.c
M src/FSAL/FSAL_PROXY/handle.c
M src/Protocols/NFS/nfs4_Compound.c
M src/Protocols/NFS/nfs4_op_create.c
M src/Protocols/NFS/nfs4_op_link.c
M src/Protocols/NFS/nfs4_op_lookup.c
M src/Protocols/NFS/nfs4_op_open.c
M src/Protocols/NFS/nfs4_op_remove.c
M src/Protocols/NFS/nfs4_op_rename.c
M src/Protocols/NFS/nfs4_op_secinfo.c
M src/Protocols/NFS/nfs4_op_xattr.c
M src/Protocols/NFS/nfs_proto_tools.c
M src/include/nfs_proto_tools.h
M src/include/nfsv41.h
14 files changed, 206 insertions(+), 165 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/64/461064/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/461064
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I60b7481afc4948a23a3c286e63291c35719807c7
Gerrit-Change-Number: 461064
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz <ffilzlnx(a)mindspring.com>
Gerrit-MessageType: newchange
5 years, 5 months
2.7.4 + MDCACHE patches still cored
by Rungta, Vandana
Daniel,
I saw the following core again with V2.7.4 and the patches listed below applied, including the last MDCACHE one.
It took about 5 days of hard testing to hit the core. There still seems to be an edge case that does an extra chunk unref.
You mentioned last time that the chunk cannot be freed by another thread (between the content_lock release and re-acquire) because we hold a ref count. But could the code path that removes the "sentinel" ref result in this chunk being reused by the LRU, even though it still has the ref count from this path? (The code that removes the sentinel reference is in mdcache_populate_dir_chunk(), the section with the comment "Put sentinal ref".) Or maybe there is a completely different edge case that does an extra unref.
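A toy model of the suspected failure mode (not Ganesha code): if some path does one unref too many, the count hits zero while this thread still believes it holds a ref, and the LRU is then free to recycle the chunk under it. An assert on the count going negative is one way such bugs get caught:

```c
#include <assert.h>

/* Toy refcounted chunk; "freed" stands in for "reclaimed by the LRU". */
struct toy_chunk {
	int refcnt;
	int freed;
};

static void chunk_ref(struct toy_chunk *c)
{
	c->refcnt++;
}

static void chunk_unref(struct toy_chunk *c)
{
	assert(c->refcnt > 0);	/* an extra unref would trip this */
	if (--c->refcnt == 0)
		c->freed = 1;	/* now eligible for reuse by the LRU */
}
```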
Minor: the commit heading for 2.7.5 says 2.7.4:
https://github.com/nfs-ganesha/nfs-ganesha/commit/6407d3d133c9346ae586cec...
Thanks,
Vandana
Patches applied:
https://github.com/nfs-ganesha/nfs-ganesha/commit/c7e7d24877085dbb0424542...
https://github.com/nfs-ganesha/nfs-ganesha/commit/136df4f262c3f9bc29df456...
https://github.com/nfs-ganesha/nfs-ganesha/commit/c98ad1e238f5db9db5ab8db...
https://github.com/nfs-ganesha/nfs-ganesha/commit/f0d5b8d4f6dcce4597459c3...
https://github.com/nfs-ganesha/nfs-ganesha/commit/e11cd5b9f8d6ffbb46061e7...
https://github.com/nfs-ganesha/nfs-ganesha/commit/5439f63b8a1e00f6ff56a4d...
https://github.com/nfs-ganesha/nfs-ganesha/commit/e306c01685b74970ada2259...
Here is the code snippet around the line in the core, after mdcache_find_keyed_reason returns an error.
if (!has_write) {
	/* We will have to re-find this dirent after we
	 * re-acquire the lock.
	 */
	look_ck = dirent->ck;
	PTHREAD_RWLOCK_unlock(&directory->content_lock);
	PTHREAD_RWLOCK_wrlock(&directory->content_lock);
	has_write = true;
	/* Dropping the content_lock may have
	 * invalidated some or all of the dirents and/or
	 * chunks in this directory. We need to start
	 * over from this point. look_ck is now correct
	 * if the dirent is still cached, and we haven't
	 * changed next_ck, so it's still correct for
	 * reloading the chunk.
	 */
	first_pass = true;
	mdcache_lru_unref_chunk(chunk);		/* <<<< line 3133 */
	chunk = NULL;
	/* Now we need to look for this dirent again.
	 * We haven't updated next_ck for this dirent
	 * yet, so it is the right whence to use for a
	 * repopulation readdir if the chunk is
	 * discarded.
	 */
	goto again;
(gdb) print lru_state
$1 = {entries_hiwat = 500000, entries_used = 518407, chunks_hiwat = 100000, chunks_used = 99952,
fds_system_imposed = 400000, fds_hard_limit = 396000, fds_hiwat = 360000, fds_lowat = 200000,
futility = 0, per_lane_work = 50, biggest_window = 160000, prev_fd_count = 244632,
prev_time = 1562643400, fd_state = 2}
(gdb) bt
#0 0x00007fdf573e5c40 in pthread_mutex_lock () from /lib64/libpthread.so.0
#1 0x000000000052af4a in _mdcache_lru_unref_chunk (chunk=0x56035bc0,
func=0x598a00 <__func__.23678> "mdcache_readdir_chunked", line=3133)
at /src/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c:2058
#2 0x0000000000542d84 in mdcache_readdir_chunked (directory=0x1a244a20, whence=35907657,
dir_state=0x7fdf4ba7baf0, cb=0x4323ed <populate_dirent>, attrmask=0, eod_met=0x7fdf4ba7bfeb)
at /src/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:3133
#3 0x000000000053054c in mdcache_readdir (dir_hdl=0x1a244a58, whence=0x7fdf4ba7bad0,
dir_state=0x7fdf4ba7baf0, cb=0x4323ed <populate_dirent>, attrmask=0, eod_met=0x7fdf4ba7bfeb)
at /src/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c:559
#4 0x0000000000432d14 in fsal_readdir (directory=0x1a244a58, cookie=35907657,
nbfound=0x7fdf4ba7bfec, eod_met=0x7fdf4ba7bfeb, attrmask=0, cb=0x491d35 <nfs3_readdir_callback>,
opaque=0x7fdf4ba7bfa0) at /src/src/FSAL/fsal_helper.c:1164
#5 0x0000000000491b1d in nfs3_readdir (arg=0x3a08e098, req=0x3a08d990, res=0x20466d70)
at /src/src/Protocols/NFS/nfs3_readdir.c:289
#6 0x0000000000457c16 in nfs_rpc_process_request (reqdata=0x3a08d990)
at /src/src/MainNFSD/nfs_worker_thread.c:1328
#7 0x00000000004583d5 in nfs_rpc_valid_NFS (req=0x3a08d990)
at /src/src/MainNFSD/nfs_worker_thread.c:1548
#8 0x00007fdf583c0034 in svc_vc_decode (req=0x3a08d990) at /src/src/libntirpc/src/svc_vc.c:829
#9 0x000000000044adc5 in nfs_rpc_decode_request (xprt=0x3dd4adf0, xdrs=0x3cfd0940)
at /src/src/MainNFSD/nfs_rpc_dispatcher_thread.c:1345
#10 0x00007fdf583bff45 in svc_vc_recv (xprt=0x3dd4adf0) at /src/src/libntirpc/src/svc_vc.c:802
(gdb) select-frame 2
(gdb) info locals
status = {major = ERR_FSAL_NOENT, minor = 0}
cb_result = DIR_CONTINUE
entry = 0x0
attrs = {request_mask = 0, valid_mask = 1433550, supported = 1433582, type = REGULAR_FILE,
filesize = 1059147, fsid = {major = 0, minor = 0}, acl = 0x0, fileid = 35907648, mode = 493,
numlinks = 1, owner = 4294967294, group = 4294967294, rawdev = {major = 0, minor = 0}, atime = {
tv_sec = 1562643413407000, tv_nsec = 0}, creation = {tv_sec = 0, tv_nsec = 0}, ctime = {
tv_sec = 1562643413407000, tv_nsec = 0}, mtime = {tv_sec = 1562624796, tv_nsec = 478000000},
chgtime = {tv_sec = 1562643413407000, tv_nsec = 0}, spaceused = 1059147,
change = 1562643413407000000, generation = 0, expire_time_attr = 60, fs_locations = 0x0,
sec_label = {slai_lfs = {lfs_lfs = 0, lfs_pi = 0}, slai_data = {slai_data_len = 0,
slai_data_val = 0x0}}}
dirent = 0x3a584590
has_write = true
set_first_ck = false
next_ck = 35907651
look_ck = 35907664
chunk = 0x56035bc0
first_pass = true
eod = false
reload_chunk = false
__func__ = "mdcache_readdir_chunked"
__PRETTY_FUNCTION__ = "mdcache_readdir_chunked"
(gdb) print *dirent
$2 = {chunk_list = {next = 0x7fdf585dd380 <xdr_ioq_ops>, prev = 0x34247680}, chunk = 0x0,
node_name = {left = 0x0, right = 0x0, parent = 457566472}, node_ck = {left = 0x3b255758,
right = 0x1b45e8a0, parent = 457566368}, node_sorted = {left = 0x1b45e908, right = 0x1b45e908,
parent = 25769803776}, ck = 1, eod = false, namehash = 0, ckey = {hk = 0, fsal = 0x0, kv = {
addr = 0x0, len = 0}}, flags = 0, entry = 0x3dd4b088,
name = 0x10000000000 <Address 0x10000000000 out of bounds>, name_buffer = 0x3a584640 ""}
(gdb) print *chunk
$3 = {chunks = {next = 0x24b1c303, prev = 0xa386010002000000}, dirents = {next = 0x300000003000000,
prev = 0x1c00000001000000}, parent = 0x7000000aaaaaaaa, chunk_lru = {q = {
next = 0x6e776f6e6b6e75, prev = 0xfefffffffeffffff}, qid = LRU_ENTRY_NONE, refcnt = 0,
flags = 0, lane = 268435456, cf = 16777283}, reload_ck = 1099511627778,
next_ck = 7236828793636126720, num_entries = 858680687}
5 years, 5 months
MDCACHE Crash with V2.7.4
by Rungta, Vandana
Nfs-ganesha V2.7.4 with the following MDCACHE patches applied:
https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/457970
https://github.com/nfs-ganesha/nfs-ganesha/commit/c98ad1e238f5db9db5ab8db...
https://github.com/nfs-ganesha/nfs-ganesha/commit/136df4f262c3f9bc29df456...
Crash when freeing a dirent.
(gdb) bt
#0 0x00007fe0a2084277 in raise () from /lib64/libc.so.6
#1 0x00007fe0a2085968 in abort () from /lib64/libc.so.6
#2 0x00007fe0a20c6d97 in __libc_message () from /lib64/libc.so.6
#3 0x00007fe0a20cf4f9 in _int_free () from /lib64/libc.so.6
#4 0x0000000000543e9c in gsh_free (p=0x1b72140) at /src/src/include/abstract_mem.h:246
#5 0x0000000000544b0c in mdcache_avl_remove (parent=0x1b17000, dirent=0x1b72140)
at /src/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_avl.c:240
#6 0x0000000000545e4d in mdcache_avl_clean_trees (parent=0x1b17000)
at /src/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_avl.c:614
#7 0x000000000053a11b in mdcache_dirent_invalidate_all (entry=0x1b17000)
at /src/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:606
#8 0x0000000000541b15 in mdcache_readdir_chunked (directory=0x1b17000, whence=5012127, dir_state=0x7fe097debaf0,
cb=0x4323ed <populate_dirent>, attrmask=0, eod_met=0x7fe097debfeb)
at /src/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:2864
#9 0x000000000053054c in mdcache_readdir (dir_hdl=0x1b17038, whence=0x7fe097debad0, dir_state=0x7fe097debaf0,
cb=0x4323ed <populate_dirent>, attrmask=0, eod_met=0x7fe097debfeb)
at /src/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c:559
#10 0x0000000000432d14 in fsal_readdir (directory=0x1b17038, cookie=5012127, nbfound=0x7fe097debfec,
eod_met=0x7fe097debfeb, attrmask=0, cb=0x491d35 <nfs3_readdir_callback>, opaque=0x7fe097debfa0)
at /src/src/FSAL/fsal_helper.c:1164
#11 0x0000000000491b1d in nfs3_readdir (arg=0x1e12508, req=0x1e11e00, res=0x1be41e0)
at /src/src/Protocols/NFS/nfs3_readdir.c:289
#12 0x0000000000457c16 in nfs_rpc_process_request (reqdata=0x1e11e00) at /src/src/MainNFSD/nfs_worker_thread.c:1328
#13 0x00000000004583d5 in nfs_rpc_valid_NFS (req=0x1e11e00) at /src/src/MainNFSD/nfs_worker_thread.c:1548
#14 0x00007fe0a3821034 in svc_vc_decode (req=0x1e11e00) at /src/src/libntirpc/src/svc_vc.c:829
#15 0x000000000044adc5 in nfs_rpc_decode_request (xprt=0x1b040e0, xdrs=0x1be4000)
at /src/src/MainNFSD/nfs_rpc_dispatcher_thread.c:1345
#16 0x00007fe0a3820f45 in svc_vc_recv (xprt=0x1b040e0) at /src/src/libntirpc/src/svc_vc.c:802
#17 0x00007fe0a381d689 in svc_rqst_xprt_task (wpe=0x1b042f8) at /src/src/libntirpc/src/svc_rqst.c:769
#18 0x00007fe0a381dae6 in svc_rqst_epoll_events (sr_rec=0x1ae45f0, n_events=1)
at /src/src/libntirpc/src/svc_rqst.c:941
#19 0x00007fe0a381dd7b in svc_rqst_epoll_loop (sr_rec=0x1ae45f0) at /src/src/libntirpc/src/svc_rqst.c:1014
(gdb) select-frame 5
(gdb) info args
parent = 0x1b17000
dirent = 0x1b72140
(gdb) info locals
chunk = 0x1b4b510
__func__ = "mdcache_avl_remove"
(gdb) print *chunk
$1 = {chunks = {next = 0x1b172a0, prev = 0x1b172a0}, dirents = {next = 0x1b29930, prev = 0x1bc2840},
parent = 0x1b17000, chunk_lru = {q = {next = 0x0, prev = 0x0}, qid = LRU_ENTRY_L1, refcnt = 0, flags = 0,
lane = 16, cf = 0}, reload_ck = 5012127, next_ck = 0, num_entries = 297}
(gdb) print *dirent
$2 = {chunk_list = {next = 0x0, prev = 0x0}, chunk = 0x0, node_name = {left = 0x0, right = 0x0, parent = 28852730},
node_ck = {left = 0x1b78ff0, right = 0x1b68970, parent = 28718322}, node_sorted = {left = 0x0, right = 0x0,
parent = 0}, ck = 5019809, eod = false, namehash = 6207075547366218433, ckey = {hk = 15362064496009940491,
fsal = 0x7fe09f949080 <FOO>, kv = {addr = 0x0, len = 0}}, flags = 0, entry = 0x0, name = 0x1b721f0 "random.37",
name_buffer = 0x1b721f0 "random.37"}
(gdb)
I am happy to provide any additional debug info you want from the core.
Thanks,
Vandana
5 years, 5 months