Change in ...nfs-ganesha[next]: Remove chgtime attribute in favor of change attribute
by Frank Filz (GerritHub)
Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/444912
Change subject: Remove chgtime attribute in favor of change attribute
......................................................................
Remove chgtime attribute in favor of change attribute
The chgtime attribute really isn't used (other than in one place
which should just use the change attribute).
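For illustration, a minimal C sketch of deriving the change attribute
directly from the newer of ctime and mtime, with no separate chgtime
field (the struct and helper names here are illustrative, not the actual
fsal_attrlist code):

#include <stdint.h>
#include <time.h>

/* Illustrative stand-in for the attribute fields involved. */
struct demo_attrs {
    struct timespec ctime;
    struct timespec mtime;
    uint64_t change;
};

/* Fold a timespec into a 64-bit nanosecond-resolution scalar. */
static uint64_t demo_timespec_to_nsecs(const struct timespec *ts)
{
    return (uint64_t)ts->tv_sec * 1000000000ULL + (uint64_t)ts->tv_nsec;
}

/* Derive the change attribute from the newer of ctime and mtime, so
 * callers that used to consult chgtime can use change instead. */
static void demo_update_change(struct demo_attrs *a)
{
    const struct timespec *newer = &a->ctime;

    if (a->mtime.tv_sec > a->ctime.tv_sec ||
        (a->mtime.tv_sec == a->ctime.tv_sec &&
         a->mtime.tv_nsec > a->ctime.tv_nsec))
        newer = &a->mtime;

    a->change = demo_timespec_to_nsecs(newer);
}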
Change-Id: I6ee9f2ca335f87796ecce26a24ae68fa16c5a67c
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
---
M src/FSAL/FSAL_CEPH/internal.c
M src/FSAL/FSAL_GLUSTER/gluster_internal.c
M src/FSAL/FSAL_GPFS/fsal_convert.c
M src/FSAL/FSAL_GPFS/fsal_up.c
M src/FSAL/FSAL_MEM/mem_handle.c
M src/FSAL/FSAL_MEM/mem_up.c
M src/FSAL/FSAL_PSEUDO/handle.c
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_int.h
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_up.c
M src/FSAL/fsal_convert.c
M src/Protocols/NFS/nfs3_readdir.c
M src/Protocols/NFS/nfs3_readdirplus.c
M src/Protocols/NFS/nfs4_op_readdir.c
M src/Protocols/NFS/nfs_proto_tools.c
M src/include/fsal.h
M src/include/fsal_types.h
M src/include/fsal_up.h
17 files changed, 101 insertions(+), 110 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/12/444912/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/444912
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I6ee9f2ca335f87796ecce26a24ae68fa16c5a67c
Gerrit-Change-Number: 444912
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz <ffilzlnx(a)mindspring.com>
Gerrit-MessageType: newchange
5 years, 10 months
Change in ...nfs-ganesha[next]: Update the cache slot when a new client is inserted
by Name of user not set (GerritHub)
ntrishal(a)in.ibm.com has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/444828
Change subject: Update the cache slot when a new client is inserted
......................................................................
Update the cache slot when a new client is inserted
When a client is inserted into the AVL tree, the cache slot
for it should also be updated.
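A minimal sketch of the idea (illustrative names, not the exact
client_mgr.c symbols): the tree insert is paired with an atomic publish
of the new record into the lossy cache slot for its key:

#include <stdatomic.h>
#include <stddef.h>
#include <stdint.h>

#define CACHE_SIZE 32768

struct client_rec {
    /* key, AVL node, refcount, ... */
    int placeholder;
};

/* Lossy pointer cache indexed by a hash of the client key. */
static _Atomic(struct client_rec *) client_cache[CACHE_SIZE];

static size_t cache_slot_of(uint64_t key_hash)
{
    return (size_t)(key_hash % CACHE_SIZE);
}

static void insert_client(struct client_rec *cl, uint64_t key_hash)
{
    /* 1. Insert into the AVL tree (under the table lock), e.g.:
     *    avltree_insert(&cl->node, &client_tree); */

    /* 2. The fix: also update the cache slot, so the next lookup of
     * this key hits the fresh record instead of missing (or finding a
     * stale pointer) and falling back to a tree walk. */
    atomic_store(&client_cache[cache_slot_of(key_hash)], cl);
}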
Change-Id: I39f94fcfb86504bb68c29f91ddafb074763edc48
Signed-off-by: Trishali Nayar <ntrishal(a)in.ibm.com>
---
M src/support/client_mgr.c
1 file changed, 2 insertions(+), 0 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/28/444828/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/444828
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I39f94fcfb86504bb68c29f91ddafb074763edc48
Gerrit-Change-Number: 444828
Gerrit-PatchSet: 1
Gerrit-Owner: ntrishal(a)in.ibm.com
Gerrit-MessageType: newchange
5 years, 10 months
Change in ...nfs-ganesha[next]: MDCACHE - Fix dirent chunk linking
by Daniel Gryniewicz (GerritHub)
Daniel Gryniewicz has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/444688
Change subject: MDCACHE - Fix dirent chunk linking
......................................................................
MDCACHE - Fix dirent chunk linking
We can't depend on the order of the chunk list for finding a previous
chunk, for two reasons: 1. Chunks are reaped out of the middle of the
list. 2. A readdir can start on a missing chunk, meaning there's no way
to find the previous chunk, even if it exists.
Remove the ordered aspect of the list, since we can't depend on it, and
keep a prev_chunk pointer in the state. This allows us to set next_ck
going forward from whatever chunk we start with. There will be fewer
cases when next_ck is set correctly, but we won't end up with cycles
where next_ck points further back into the directory.
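A condensed sketch of the forward-only linking (types and names are
illustrative, not the mdcache source):

#include <stddef.h>
#include <stdint.h>

struct chunk_demo {
    uint64_t next_ck;   /* cookie of the next chunk; 0 if unknown */
    /* ... dirents ... */
};

struct readdir_state_demo {
    struct chunk_demo *prev_chunk;  /* last chunk walked in this pass */
};

/* When a chunk is populated during a readdir pass, link it only from
 * the chunk we actually came from, never by position in the list. */
static void link_chunk(struct readdir_state_demo *st,
                       struct chunk_demo *new_chunk,
                       uint64_t first_cookie_of_new_chunk)
{
    if (st->prev_chunk != NULL)
        st->prev_chunk->next_ck = first_cookie_of_new_chunk;

    /* next_ck is only ever set going forward from prev_chunk, so it
     * can never point back into an earlier part of the directory and
     * form a cycle. */
    st->prev_chunk = new_chunk;
}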
Change-Id: I9d48c2937f2deaf3dd3614da6b9e1df1fd5e67e3
Signed-off-by: Daniel Gryniewicz <dang(a)redhat.com>
---
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_int.h
2 files changed, 65 insertions(+), 121 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/88/444688/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/444688
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I9d48c2937f2deaf3dd3614da6b9e1df1fd5e67e3
Gerrit-Change-Number: 444688
Gerrit-PatchSet: 1
Gerrit-Owner: Daniel Gryniewicz <dang(a)redhat.com>
Gerrit-MessageType: newchange
5 years, 10 months
Change in ...nfs-ganesha[next]: MDCACHE - Add refcounting for dirent chunks
by Daniel Gryniewicz (GerritHub)
Daniel Gryniewicz has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/444687
Change subject: MDCACHE - Add refcounting for dirent chunks
......................................................................
MDCACHE - Add refcounting for dirent chunks
Rather than depending on the content lock to protect chunks, add a
refcount that we can hold across dropping the lock or across reaping, and
use it to ensure that a chunk stays around while we need it.
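The shape of the scheme, as a minimal sketch (illustrative names; the
real code returns chunks to the LRU rather than freeing them directly):

#include <stdatomic.h>
#include <stdlib.h>

struct chunk_demo {
    atomic_int refcnt;  /* holders that must keep this chunk alive */
    /* ... dirents, LRU linkage ... */
};

static void chunk_ref(struct chunk_demo *c)
{
    atomic_fetch_add(&c->refcnt, 1);
}

static void chunk_unref(struct chunk_demo *c)
{
    /* fetch_sub returns the old value; 1 means we dropped the last ref. */
    if (atomic_fetch_sub(&c->refcnt, 1) == 1)
        free(c);
}

/* Typical use across a lock drop:
 *
 *     chunk_ref(chunk);
 *     unlock(content_lock);
 *     ... blocking FSAL readdir call ...
 *     lock(content_lock);
 *     chunk_unref(chunk);   -- chunk was pinned the whole time, so
 *                              reaping could not free it under us
 */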
Change-Id: Ia93ac0becb29648b8f9fc64eb486146198bfb8e0
Signed-off-by: Daniel Gryniewicz <dang(a)redhat.com>
---
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.h
3 files changed, 70 insertions(+), 61 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/87/444687/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/444687
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: Ia93ac0becb29648b8f9fc64eb486146198bfb8e0
Gerrit-Change-Number: 444687
Gerrit-PatchSet: 1
Gerrit-Owner: Daniel Gryniewicz <dang(a)redhat.com>
Gerrit-MessageType: newchange
5 years, 10 months
Crash in mdcache_readdir_chunked - invalid entry
by Rungta, Vandana
In this code path has_write is false, so the entry was found in cache:
mdcache_avl_lookup_ck successfully found the dirent, and
mdcache_find_keyed_reason successfully returned the entry and should have
increased its refcount. Yet the current refcount is 0. We crashed because
obj_ops is NULL when trying to call getattrs:
"status = entry->obj_handle.obj_ops->getattrs()"
The crash is reproducible.
Unfortunately, I can't reproduce it with the debug flags for
COMPONENT_CACHE_INODE and COMPONENT_NFS_READDIR enabled.
Test conditions:
A Windows client using robocopy. The test creates a set of local files,
uses robocopy to sync the local directory to the NFS file share, deletes
the folder from the file share, and then uses robocopy to sync to a
different folder on the NFS file share.
Ganesha Version 2.7.1 + commits:
https://github.com/nfs-ganesha/nfs-ganesha/commit/25320e6544f6c5a045f20c5...
https://github.com/nfs-ganesha/nfs-ganesha/commit/03ee21eae53f33e49a993f1...
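For orientation, a condensed sketch of the invariant the dump below
violates (illustrative names, not the mdcache source): a keyed lookup is
supposed to return a referenced entry, so observing refcnt == 0 while we
still dereference obj_ops means the entry was recycled under us:

#include <stddef.h>

struct entry_demo;

struct ops_demo {
    int (*getattrs)(struct entry_demo *e);
};

struct entry_demo {
    int refcnt;                     /* expected > 0 after a lookup */
    const struct ops_demo *obj_ops; /* zeroed once the entry is recycled */
};

static int readdir_step(struct entry_demo *e)
{
    /* The lookup should have bumped e->refcnt before returning e;
     * refcnt == 0 here means the entry was already reaped. */
    if (e->refcnt == 0 || e->obj_ops == NULL)
        return -1;  /* in the crash, obj_ops was NULL -> SIGSEGV */

    return e->obj_ops->getattrs(e);
}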
(gdb) bt
#0 0x00000000005418a1 in mdcache_readdir_chunked (directory=0x32ce1490, whence=121480190, dir_state=0x7f237b2a2af0,
cb=0x43217c <populate_dirent>, attrmask=0, eod_met=0x7f237b2a2feb)
at /src/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:3136
#1 0x000000000052e8c3 in mdcache_readdir (dir_hdl=0x32ce14c8, whence=0x7f237b2a2ad0, dir_state=0x7f237b2a2af0,
cb=0x43217c <populate_dirent>, attrmask=0, eod_met=0x7f237b2a2feb)
at /src/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c:559
#2 0x0000000000432a76 in fsal_readdir (directory=0x32ce14c8, cookie=121480190, nbfound=0x7f237b2a2fec,
eod_met=0x7f237b2a2feb, attrmask=0, cb=0x4912a2 <nfs3_readdir_callback>, opaque=0x7f237b2a2fa0)
at /src/src/FSAL/fsal_helper.c:1158
#3 0x000000000049108a in nfs3_readdir (arg=0x6cf58738, req=0x6cf58030, res=0x6cc27720)
at /src/src/Protocols/NFS/nfs3_readdir.c:289
#4 0x00000000004574d1 in nfs_rpc_process_request (reqdata=0x6cf58030) at /src/src/MainNFSD/nfs_worker_thread.c:1329
#5 0x0000000000457c90 in nfs_rpc_valid_NFS (req=0x6cf58030) at /src/src/MainNFSD/nfs_worker_thread.c:1549
#6 0x00007f238335ae75 in svc_vc_decode (req=0x6cf58030) at /src/src/libntirpc/src/svc_vc.c:825
#7 0x000000000044a688 in nfs_rpc_decode_request (xprt=0x1c28880, xdrs=0x6cf92980)
at /src/src/MainNFSD/nfs_rpc_dispatcher_thread.c:1341
#8 0x00007f238335ad86 in svc_vc_recv (xprt=0x1c28880) at /src/src/libntirpc/src/svc_vc.c:798
#9 0x00007f23833574d3 in svc_rqst_xprt_task (wpe=0x1c28a98) at /src/src/libntirpc/src/svc_rqst.c:767
#10 0x00007f238335794d in svc_rqst_epoll_events (sr_rec=0x1bfb260, n_events=1) at /src/src/libntirpc/src/svc_rqst.c:939
#11 0x00007f2383357be2 in svc_rqst_epoll_loop (sr_rec=0x1bfb260) at /src/src/libntirpc/src/svc_rqst.c:1012
#12 0x00007f2383357c95 in svc_rqst_run_task (wpe=0x1bfb260) at /src/src/libntirpc/src/svc_rqst.c:1048
#13 0x00007f23833605f6 in work_pool_thread (arg=0x6cc0580) at /src/src/libntirpc/src/work_pool.c:181
#14 0x00007f2382367de5 in start_thread () from /lib64/libpthread.so.0
#15 0x00007f2381c6fbad in clone () from /lib64/libc.so.6
(gdb) print *entry
$1 = {attr_lock = {__data = {__lock = 0, __nr_readers = 0, __readers_wakeup = 848659816, __writer_wakeup = 0,
__nr_readers_queued = 8205728, __nr_writers_queued = 0, __writer = 0, __shared = 0, __pad1 = 8205696,
__pad2 = 8206032, __flags = 0},
__size = "\000\000\000\000\000\000\000\000h\205\225\062\000\000\000\000\240\065}", '\000' <repeats 13 times>, "\200\065}\000\000\000\000\000\320\066}", '\000' <repeats 12 times>, __align = 0}, obj_handle = {handles = {next = 0x0,
prev = 0x0}, fs = 0x0, fsal = 0x0, obj_ops = 0x0, obj_lock = {__data = {__lock = 0, __nr_readers = 0,
__readers_wakeup = 1, __writer_wakeup = 0, __nr_readers_queued = 0, __nr_writers_queued = 0, __writer = 0,
__shared = 0, __pad1 = 4542671, __pad2 = 1812466792, __flags = 1753052544},
__size = "\000\000\000\000\000\000\000\000\001", '\000' <repeats 23 times>, "\317PE\000\000\000\000\000h\f\bl\000\000\000\000\200u}h\000\000\000", __align = 0}, type = 1433550, fsid = {major = 1433550, minor = 1433582}, fileid = 1,
state_hdl = 0x400}, sub_handle = 0x0, attrs = {request_mask = 0, valid_mask = 0, supported = 4542671, type = 438,
filesize = 65534, fsid = {major = 65534, minor = 0}, acl = 0x0, fileid = 1549686770, mode = 225000000, numlinks = 0,
owner = 0, group = 0, rawdev = {major = 1549686770, minor = 225000000}, atime = {tv_sec = 1549686770,
tv_nsec = 225000000}, creation = {tv_sec = 1549686770, tv_nsec = 225000000}, ctime = {tv_sec = 1024,
tv_nsec = 1549686770225}, mtime = {tv_sec = 0, tv_nsec = 60}, chgtime = {tv_sec = 0, tv_nsec = 0}, spaceused = 0,
change = 697563970, generation = 10661591424062854996, expire_time_attr = 2142117152, fs_locations = 0x6cf4a550},
fh_hk = {node_k = {left = 0xa, right = 0x1, parent = 1}, key = {hk = 1550089231, fsal = 0x0, kv = {addr = 0x0,
len = 933111888}}, inavl = 96}, mde_flags = 1, attr_time = 8589934592, acl_time = 0,
fs_locations_time = 1828650080, lru = {q = {next = 0x6cfefc60, prev = 0x1}, qid = LRU_ENTRY_NONE, refcnt = 0,
flags = 0, lane = 0, cf = 0}, export_list = {next = 0x0, prev = 0x0}, first_export_id = 0, content_lock = {__data = {
__lock = 0, __nr_readers = 0, __readers_wakeup = 0, __writer_wakeup = 0, __nr_readers_queued = 0,
__nr_writers_queued = 0, __writer = 0, __shared = 0, __pad1 = 0, __pad2 = 0, __flags = 0},
__size = '\000' <repeats 55 times>, __align = 0}, fsobj = {hdl = {state_lock = {__data = {__lock = 0,
__nr_readers = 0, __readers_wakeup = 0, __writer_wakeup = 0, __nr_readers_queued = 1812466200,
__nr_writers_queued = 0, __writer = 1812466864, __shared = 0, __pad1 = 1812466864, __pad2 = 1812466880,
__flags = 1812466880},
__size = '\000' <repeats 16 times>, "\030\n\bl\000\000\000\000\260\f\bl\000\000\000\000\260\f\bl\000\000\000\000\300\f\bl\000\000\000\000\300\f\bl\000\000\000", __align = 0}, no_cleanup = 208, {file = {obj = 0x6c080cd0,
list_of_states = {next = 0x6c080ce0, prev = 0x6c080ce0}, layoutrecall_list = {next = 0x0, prev = 0x0},
lock_list = {next = 0x0, prev = 0x0}, nlm_share_list = {next = 0x0, prev = 0x0}, write_delegated = false,
fdeleg_stats = {fds_curr_delegations = 0, fds_deleg_type = OPEN_DELEGATE_NONE, fds_delegation_count = 0,
fds_recall_count = 0, fds_avg_hold = 0, fds_last_delegation = 0, fds_last_recall = 0, fds_num_opens = 0,
fds_first_open = 0}, anon_ops = 0}, dir = {junction_export = 0x6c080cd0, export_roots = {next = 0x6c080ce0,
prev = 0x6c080ce0}, exp_root_refcount = 0}}}, fsdir = {chunks = {next = 0x0, prev = 0x0}, detached = {
next = 0x6c080a18, prev = 0x6c080cb0}, spin = 1812466864, detached_count = 0, dhdl = {state_lock = {__data = {
__lock = 1812466880, __nr_readers = 0, __readers_wakeup = 1812466880, __writer_wakeup = 0,
__nr_readers_queued = 1812466896, __nr_writers_queued = 0, __writer = 1812466896, __shared = 0,
__pad1 = 1812466912, __pad2 = 1812466912, __flags = 0},
__size = "\300\f\bl\000\000\000\000\300\f\bl\000\000\000\000\320\f\bl\000\000\000\000\320\f\bl\000\000\000\000\340\f\bl\000\000\000\000\340\f\bl", '\000' <repeats 11 times>, __align = 1812466880}, no_cleanup = false, {file = {
obj = 0x0, list_of_states = {next = 0x0, prev = 0x0}, layoutrecall_list = {next = 0x0, prev = 0x0},
lock_list = {next = 0x0, prev = 0x0}, nlm_share_list = {next = 0x0, prev = 0x0}, write_delegated = false,
fdeleg_stats = {fds_curr_delegations = 0, fds_deleg_type = OPEN_DELEGATE_NONE, fds_delegation_count = 0,
fds_recall_count = 0, fds_avg_hold = 0, fds_last_delegation = 0, fds_last_recall = 0, fds_num_opens = 0,
fds_first_open = 0}, anon_ops = 0}, dir = {junction_export = 0x0, export_roots = {next = 0x0, prev = 0x0},
exp_root_refcount = 0}}}, parent = {addr = 0x0, len = 0}, first_ck = 0, avl = {t = {root = 0x0,
cmp_fn = 0x0, height = 0, first = 0x0, last = 0x0, size = 0}, ck = {root = 0x0, cmp_fn = 0x0, height = 0,
first = 0x0, last = 0x0, size = 0}, sorted = {root = 0x0, cmp_fn = 0x0, height = 49, first = 0x6bf94870,
last = 0x7f2381f377d8 <main_arena+120>, size = 0}, collisions = 0}}}}
(gdb) info locals
status = {major = ERR_FSAL_NO_ERROR, minor = 0}
cb_result = DIR_CONTINUE
entry = 0x6c080a10
attrs = {request_mask = 0, valid_mask = 0, supported = 0, type = NO_FILE_TYPE, filesize = 0, fsid = {major = 0,
minor = 0}, acl = 0x0, fileid = 0, mode = 0, numlinks = 0, owner = 0, group = 0, rawdev = {major = 0, minor = 0},
atime = {tv_sec = 0, tv_nsec = 0}, creation = {tv_sec = 0, tv_nsec = 0}, ctime = {tv_sec = 0, tv_nsec = 0}, mtime = {
tv_sec = 0, tv_nsec = 0}, chgtime = {tv_sec = 0, tv_nsec = 0}, spaceused = 0, change = 0, generation = 0,
expire_time_attr = 0, fs_locations = 0x0}
dirent = 0x6aef2150
has_write = false
set_first_ck = false
next_ck = 121480231
look_ck = 121480190
chunk = 0x6c96c8b0
first_pass = true
eod = false
reload_chunk = false
__func__ = "mdcache_readdir_chunked"
__PRETTY_FUNCTION__ = "mdcache_readdir_chunked"
(gdb) print *dirent
$2 = {chunk_list = {next = 0x6be81080, prev = 0x6b1af310}, chunk = 0x6c96c8b0, node_name = {left = 0x68d85358,
right = 0x685443e8, parent = 1817635595}, node_ck = {left = 0x0, right = 0x0, parent = 1810370738}, node_sorted = {
left = 0x0, right = 0x0, parent = 0}, ck = 121480231, eod = false, namehash = 13944437367817932926, ckey = {
hk = 13666917134750151872, fsal = 0x7f237fae1d20 <FOO>, kv = {addr = 0x6ce752b0, len = 10}}, flags = 0,
name = 0x6aef21f8 "random.348", name_buffer = 0x6aef21f8 "random.348"}
(gdb) print *chunk
$3 = {chunks = {next = 0x32ce1718, prev = 0x32ce1718}, dirents = {next = 0x5ee461c0, prev = 0x6ce55210},
parent = 0x32ce1490, chunk_lru = {q = {next = 0x7e1920 <CHUNK_LRU+672>, prev = 0x6bae17c8}, qid = LRU_ENTRY_L1,
refcnt = 0, flags = 0, lane = 3, cf = 0}, reload_ck = 121480068, next_ck = 0, num_entries = 480}
(gdb)
5 years, 10 months
ABBA deadlock during server initialization
by Sachin Punadikar
Hello All,
Observed a deadlock situation while starting Ganesha. Below are the
findings.
The deadlock is between thread 1 and thread 3, as shown below:
(gdb) where
#0 0x00003fff80c95408 in raise () from /lib64/libpthread.so.0
#1 0x0000000010071b70 in crash_handler (signo=11, info=0x3fffc499d7d8,
ctx=0x3fffc499ca60) at
/usr/src/debug/nfs-ganesha-2.5.3-ibm028.00-0.1.1-Source/MainNFSD/nfs_init.c:225
#2 <signal handler called>
#3 0x00003fff80c92c18 in __lll_lock_wait () from /lib64/libpthread.so.0
#4 0x00003fff80c8b69c in pthread_mutex_lock () from /lib64/libpthread.so.0
#5 0x00000000101a6140 in lru_insert_entry (entry=0x1000886a460,
q=0x10266c20 <LRU>, edge=LRU_LRU)
at
/usr/src/debug/nfs-ganesha-2.5.3-ibm028.00-0.1.1-Source/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c:420
#6 0x00000000101ac024 in mdcache_lru_insert (entry=0x1000886a460) at
/usr/src/debug/nfs-ganesha-2.5.3-ibm028.00-0.1.1-Source/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c:1779
#7 0x00000000101c23f8 in mdcache_new_entry (export=0x10007c6e960,
sub_handle=0x1000886a130, attrs_in=0x3fffc499e590, attrs_out=0x0,
new_directory=false, entry=0x3fffc499e670, state=0x0)
at
/usr/src/debug/nfs-ganesha-2.5.3-ibm028.00-0.1.1-Source/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:730
#8 0x00000000101b8d9c in mdcache_lookup_path (exp_hdl=0x10007c6e960,
path=0x10007c6e840 "/gpfs/gpfs0/nfs/nfs720", handle=0x3fffc499e750,
attrs_out=0x0)
at
/usr/src/debug/nfs-ganesha-2.5.3-ibm028.00-0.1.1-Source/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c:1899
#9 0x0000000010165c00 in init_export_root (export=0x10007c6e698) at
/usr/src/debug/nfs-ganesha-2.5.3-ibm028.00-0.1.1-Source/support/exports.c:2263
#10 0x00000000101653bc in init_export_cb (exp=0x10007c6e698,
state=0x3fffc499ea30) at
/usr/src/debug/nfs-ganesha-2.5.3-ibm028.00-0.1.1-Source/support/exports.c:2127
#11 0x0000000010181510 in foreach_gsh_export (cb=0x10165388
<init_export_cb>, wrlock=true, state=0x3fffc499ea30)
at
/usr/src/debug/nfs-ganesha-2.5.3-ibm028.00-0.1.1-Source/support/export_mgr.c:750
#12 0x000000001016544c in exports_pkginit () at
/usr/src/debug/nfs-ganesha-2.5.3-ibm028.00-0.1.1-Source/support/exports.c:2146
#13 0x0000000010073460 in nfs_Init (p_start_info=0x1025a930
<my_nfs_start_info>) at
/usr/src/debug/nfs-ganesha-2.5.3-ibm028.00-0.1.1-Source/MainNFSD/nfs_init.c:629
#14 0x000000001007463c in nfs_start (p_start_info=0x1025a930
<my_nfs_start_info>) at
/usr/src/debug/nfs-ganesha-2.5.3-ibm028.00-0.1.1-Source/MainNFSD/nfs_init.c:922
#15 0x000000001001e0fc in main (argc=10, argv=0x3fffc499f658) at
/usr/src/debug/nfs-ganesha-2.5.3-ibm028.00-0.1.1-Source/MainNFSD/nfs_main.c:495
(gdb) frame 5
#5 0x00000000101a6140 in lru_insert_entry (entry=0x1000886a460,
q=0x10266c20 <LRU>, edge=LRU_LRU)
at
/usr/src/debug/nfs-ganesha-2.5.3-ibm028.00-0.1.1-Source/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c:420
420 QLOCK(qlane);
(gdb) p *qlane
$1 = {L1 = {q = {next = 0x100085e6258, prev = 0x100085e6258}, id =
LRU_ENTRY_L1, size = 1}, L2 = {q = {next = 0x10266c40 <LRU+32>, prev =
0x10266c40 <LRU+32>}, id = LRU_ENTRY_L2,
size = 0}, cleanup = {q = {next = 0x10266c60 <LRU+64>, prev =
0x10266c60 <LRU+64>}, id = LRU_ENTRY_CLEANUP, size = 0}, mtx = {__data =
{__lock = 2, __count = 0, __owner = 32651,
__nusers = 1, __kind = 0, __spins = 0, __list = {__prev = 0x0, __next
= 0x0}}, __size = "\002\000\000\000\000\000\000\000\213\177\000\000\001",
'\000' <repeats 26 times>,
__align = 2}, iter = {active = true, glist = 0x100085e6258, glistn =
0x10266c20 <LRU>}, __pad0 = '\000' <repeats 127 times>}
(gdb) p (qlane)->mtx
$3 = {__data = {__lock = 2, __count = 0, __owner = 32651, __nusers = 1,
__kind = 0, __spins = 0, __list = {__prev = 0x0, __next = 0x0}},
__size = "\002\000\000\000\000\000\000\000\213\177\000\000\001", '\000'
<repeats 26 times>, __align = 2}
The lock is owned by LWP 32651 (i.e. thread 3) in function lru_run_lane
(frame 2):
(gdb) t 3
[Switching to thread 3 (Thread 0x3fff8075e830 (LWP 32651))]
#0 0x00003fff80c8d004 in pthread_rwlock_rdlock () from
/lib64/libpthread.so.0
(gdb) bt
#0 0x00003fff80c8d004 in pthread_rwlock_rdlock () from
/lib64/libpthread.so.0
#1 0x000000001017f7e0 in get_gsh_export (export_id=2744) at
/usr/src/debug/nfs-ganesha-2.5.3-ibm028.00-0.1.1-Source/support/export_mgr.c:351
#2 0x00000000101a90dc in lru_run_lane (lane=0, totalclosed=0x3fff8075dd58)
at
/usr/src/debug/nfs-ganesha-2.5.3-ibm028.00-0.1.1-Source/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c:1088
#3 0x00000000101aa1c8 in lru_run (ctx=0x10006facc20) at
/usr/src/debug/nfs-ganesha-2.5.3-ibm028.00-0.1.1-Source/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c:1330
#4 0x000000001016c16c in fridgethr_start_routine (arg=0x10006facc20) at
/usr/src/debug/nfs-ganesha-2.5.3-ibm028.00-0.1.1-Source/support/fridgethr.c:550
#5 0x00003fff80c88af4 in start_thread () from /lib64/libpthread.so.0
#6 0x00003fff80ac4ef4 in clone () from /lib64/libc.so.6
(gdb) frame 1
#1 0x000000001017f7e0 in get_gsh_export (export_id=2744) at
/usr/src/debug/nfs-ganesha-2.5.3-ibm028.00-0.1.1-Source/support/export_mgr.c:351
351 PTHREAD_RWLOCK_rdlock(&export_by_id.lock);
(gdb) p export_by_id.lock
$4 = {__data = {__lock = 0, __nr_readers = 0, __readers_wakeup = 0,
__writer_wakeup = 0, __nr_readers_queued = 1, __nr_writers_queued = 0,
__writer = 30981, __shared = 0, __pad1 = 0,
__pad2 = 0, __flags = 0}, __size = '\000' <repeats 16 times>,
"\001\000\000\000\000\000\000\000\005y", '\000' <repeats 29 times>, __align
= 0}
(gdb) p &export_by_id.lock
$9 = (pthread_rwlock_t *) 0x10260268 <export_by_id>
Note that thread 3 is waiting for a lock which is held by LWP 30981 (i.e.
thread 1) in function foreach_gsh_export (frame 11):
(gdb) t 1
[Switching to thread 1 (Thread 0x3fff8115bb50 (LWP 30981))]
#0 0x00003fff80c95408 in raise () from /lib64/libpthread.so.0
(gdb) frame 11
#11 0x0000000010181510 in foreach_gsh_export (cb=0x10165388
<init_export_cb>, wrlock=true, state=0x3fffc499ea30)
at
/usr/src/debug/nfs-ganesha-2.5.3-ibm028.00-0.1.1-Source/support/export_mgr.c:750
750 rc = cb(export, state);
(gdb) p export_by_id.lock
$5 = {__data = {__lock = 0, __nr_readers = 0, __readers_wakeup = 0,
__writer_wakeup = 0, __nr_readers_queued = 1, __nr_writers_queued = 0,
__writer = 30981, __shared = 0, __pad1 = 0,
__pad2 = 0, __flags = 0}, __size = '\000' <repeats 16 times>,
"\001\000\000\000\000\000\000\000\005y", '\000' <repeats 29 times>, __align
= 0}
(gdb) p &export_by_id.lock
$7 = (pthread_rwlock_t *) 0x10260268 <export_by_id>
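In compressed form, the two threads take the same pair of locks in
opposite order, the classic ABBA pattern (illustrative sketch, not the
actual Ganesha code):

#include <pthread.h>
#include <stddef.h>

/* Lock A: the export table rwlock; lock B: an LRU qlane mutex. */
static pthread_rwlock_t export_lock = PTHREAD_RWLOCK_INITIALIZER;
static pthread_mutex_t  qlane_lock  = PTHREAD_MUTEX_INITIALIZER;

/* Thread 1: exports_pkginit -> foreach_gsh_export -> ... ->
 * mdcache_lru_insert */
static void *init_thread(void *arg)
{
    (void)arg;
    pthread_rwlock_wrlock(&export_lock); /* takes A (write) */
    pthread_mutex_lock(&qlane_lock);     /* then blocks on B */
    /* ... insert new entry into the LRU lane ... */
    pthread_mutex_unlock(&qlane_lock);
    pthread_rwlock_unlock(&export_lock);
    return NULL;
}

/* Thread 3: lru_run -> lru_run_lane -> get_gsh_export */
static void *lru_thread(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&qlane_lock);     /* takes B */
    pthread_rwlock_rdlock(&export_lock); /* then blocks on A: deadlock */
    /* ... look up the export owning the entry being reaped ... */
    pthread_rwlock_unlock(&export_lock);
    pthread_mutex_unlock(&qlane_lock);
    return NULL;
}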
To address the above issue, I posted a patch:
https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/444414
--
with regards,
Sachin Punadikar
5 years, 10 months
Change in ...nfs-ganesha[next]: MDCACHE: lru_run should wait for server initialization
by Sachin Punadikar (GerritHub)
Sachin Punadikar has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/444414
Change subject: MDCACHE: lru_run should wait for server initialization
......................................................................
MDCACHE: lru_run should wait for server initialization
An ABBA deadlock can happen during the startup phase of Ganesha.
With a huge list of exports (more than a thousand), exports_pkginit takes
a long time to process them all. Meanwhile, the "lru_run" thread starts
executing.
At a certain stage, init_export_root waits for the LRU lane lock while
lru_run_lane waits for the export lock, leading to a deadlock.
Ideally, lru_run should wait until all exports have been processed.
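A minimal sketch of such a gate, using a plain condition-variable flag
(Ganesha's actual init machinery lives in nfs_init.c; the names below are
illustrative):

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t init_mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  init_cv  = PTHREAD_COND_INITIALIZER;
static bool init_complete;

/* Called once exports_pkginit has processed every export. */
void demo_init_done(void)
{
    pthread_mutex_lock(&init_mtx);
    init_complete = true;
    pthread_cond_broadcast(&init_cv);
    pthread_mutex_unlock(&init_mtx);
}

/* Called at the top of lru_run, before any lane is scanned, so the LRU
 * thread can never race export initialization for the two locks. */
void demo_init_wait(void)
{
    pthread_mutex_lock(&init_mtx);
    while (!init_complete)
        pthread_cond_wait(&init_cv, &init_mtx);
    pthread_mutex_unlock(&init_mtx);
}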
Change-Id: I94757062c9d54ec1fccc7caf26f5f0e0628ddf47
Signed-off-by: Sachin Punadikar <psachin(a)in.ibm.com>
---
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c
1 file changed, 8 insertions(+), 0 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/14/444414/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/444414
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I94757062c9d54ec1fccc7caf26f5f0e0628ddf47
Gerrit-Change-Number: 444414
Gerrit-PatchSet: 1
Gerrit-Owner: Sachin Punadikar <psachin(a)in.ibm.com>
Gerrit-MessageType: newchange
5 years, 10 months
Change in ...nfs-ganesha[next]: Add a new chunk at the tail of the list of chunks
by Name of user not set (GerritHub)
madhu.punjabi(a)in.ibm.com has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/444240
Change subject: Add a new chunk at the tail of the list of chunks
......................................................................
Add a new chunk at the tail of the list of chunks
With commits 5dc6a70ed42275a4f6772b9802e79f23dc25fa73
and 654dd706d22663c6ae6029e0c8c5814fe0d6ff6a, in
mdcache_readdir_chunked(), after releasing the content_lock and
reacquiring the same content_lock for writing, we don't trust
the previous chunk and set 'chunk = NULL'. As a result, the
'prev_chunk' pointer received in mdcache_get_chunk()
may be NULL, in which case we add the newly created chunk to the
head of the chunk list for the directory.
But to set 'eod = true' for a dirent,
mdcache_populate_dir_chunk() checks whether its associated chunk
is the last chunk. This check fails when the chunk is added
to the head of the chunk list, and leads to 'ls' hanging
at the NFS client.
To fix this, we now add a newly created chunk at the tail
of the chunk list for a directory, as sketched below.
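A condensed sketch of the fix, using glist-style list helpers
(illustrative; it assumes the head was initialized circularly with
next == prev == head):

struct glist_head {
    struct glist_head *next;
    struct glist_head *prev;
};

/* Append a node at the tail of a circular doubly-linked list. */
static void demo_glist_add_tail(struct glist_head *head,
                                struct glist_head *n)
{
    n->prev = head->prev;
    n->next = head;
    head->prev->next = n;
    head->prev = n;
}

/* In mdcache_get_chunk(), unconditionally append:
 *
 *     demo_glist_add_tail(&parent_chunks, &chunk_list_node);
 *
 * so the most recently populated chunk is always last, and the
 * "is this the last chunk?" test used to set eod stays valid. */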
Change-Id: If2cc55f9f8771a762adfa0b68807615d0a440e48
Signed-off-by: Madhu Thorat <madhu.punjabi(a)in.ibm.com>
---
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c
1 file changed, 2 insertions(+), 1 deletion(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/40/444240/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/444240
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: If2cc55f9f8771a762adfa0b68807615d0a440e48
Gerrit-Change-Number: 444240
Gerrit-PatchSet: 1
Gerrit-Owner: madhu.punjabi(a)in.ibm.com
Gerrit-MessageType: newchange
5 years, 10 months
Change in ...nfs-ganesha[next]: Print correct error message in nlm_send_async()
by Daniel Gryniewicz (GerritHub)
Daniel Gryniewicz has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/444113
Change subject: Print correct error message in nlm_send_async()
......................................................................
Print correct error message in nlm_send_async()
Save the retval in the client error, so that the standard print
functions can print it correctly. Before, the retval was not printed at
all, and the error message would always say "RPC: Success".
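The gist, as an illustrative sketch (names are not the actual
nlm_async.c symbols):

#include <stdio.h>

struct nlm_client_demo {
    int rpc_stat;  /* last RPC status, read by the shared print path */
};

static const char *demo_rpc_errmsg(int stat)
{
    return stat == 0 ? "RPC: Success" : "RPC: Unable to send";
}

static void demo_send_async_fail(struct nlm_client_demo *cl, int retval)
{
    /* The fix: save the retval on the client before logging, so the
     * standard print function reports the real failure instead of a
     * stale "RPC: Success". */
    cl->rpc_stat = retval;

    fprintf(stderr, "nlm_send_async failed: %s\n",
            demo_rpc_errmsg(cl->rpc_stat));
}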
Change-Id: I5fcbd275a162ed04c0dd3590adb10634f9344627
Signed-off-by: Daniel Gryniewicz <dang(a)redhat.com>
---
M src/Protocols/NLM/nlm_async.c
1 file changed, 6 insertions(+), 5 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/13/444113/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/444113
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I5fcbd275a162ed04c0dd3590adb10634f9344627
Gerrit-Change-Number: 444113
Gerrit-PatchSet: 1
Gerrit-Owner: Daniel Gryniewicz <dang(a)redhat.com>
Gerrit-MessageType: newchange
5 years, 10 months
Change in ...nfs-ganesha[next]: Update change time when modifying pseudo fs directory
by Sriram Patil (GerritHub)
Sriram Patil has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/444089
Change subject: Update change time when modifying pseudo fs directory
......................................................................
Update change time when modifying pseudo fs directory
When a directory entry is deleted from a pseudo fs directory, we need to
update the change time of the parent pseudo fs directory so that any
client-side cached dentry and attributes are invalidated.
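A minimal sketch of the idea (illustrative struct; the real handler
updates the stored attributes of the PSEUDO parent handle):

#include <stdint.h>
#include <time.h>

struct pseudo_dir_demo {
    struct timespec ctime;
    struct timespec mtime;
    uint64_t change;
};

/* After removing an entry, stamp the parent so clients see a new
 * change attribute and drop their cached dentries/attributes. */
static void touch_parent_on_unlink(struct pseudo_dir_demo *parent)
{
    struct timespec now;

    clock_gettime(CLOCK_REALTIME, &now);
    parent->ctime = now;
    parent->mtime = now;
    parent->change = (uint64_t)now.tv_sec * 1000000000ULL +
                     (uint64_t)now.tv_nsec;
}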
Change-Id: I03ff1ff9e7bb0392183f32a01f5169e9f6c76d3a
Signed-off-by: Sriram Patil <sriramp(a)vmware.com>
---
M src/FSAL/FSAL_PSEUDO/handle.c
1 file changed, 7 insertions(+), 0 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/89/444089/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/444089
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I03ff1ff9e7bb0392183f32a01f5169e9f6c76d3a
Gerrit-Change-Number: 444089
Gerrit-PatchSet: 1
Gerrit-Owner: Sriram Patil <sriramp(a)vmware.com>
Gerrit-MessageType: newchange
5 years, 10 months