Change in ...nfs-ganesha[next]: Allow EXPORT pseudo path to be changed during export update
by Frank Filz (GerritHub)
Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/490334 )
Change subject: Allow EXPORT pseudo path to be changed during export update
......................................................................
Allow EXPORT pseudo path to be changed during export update
This also fully allows adding or removing NFSv4 support from an export,
since we can now handle the PseudoFS swizzling that occurs.
Note that an explicit PseudoFS export may be removed or added, though
you cannot change it away from export_id 0 because we currently don't
allow changing the export_id.
Note that this patch doesn't handle DBUS export add or remove, though
that is an option for improvement. I may add those to this patch (it
wouldn't be that hard), but I want to get this reviewed as is right now.
There are implications for a client when the PseudoFS changes. I have
tested moving an export in the PseudoFS with a client mounted. The
client will be able to continue accessing the export, though it may
see an ESTALE error if it navigates out of the export. The current
working directory will go bad and the pwd command will fail, indicating
a disconnected mount. I have also seen a reference to .. from the root
of the export wrap around back to that root (I believe this is how
disconnected mounts are set up).
FSAL_PSEUDO lookups and create handle calls (PUTFH, or any use of an
NFSv3 handle where the inode isn't cached) which fail during an export
update are instead turned into ERR_FSAL_DELAY, which becomes
NFS4ERR_DELAY or NFS3ERR_JUKEBOX, forcing the client to retry once the
update has completed.
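As an illustration of that mapping, here is a minimal self-contained
sketch; do_pseudo_lookup(), export_update_in_progress(), and the enum
values are hypothetical stand-ins for this example, not the actual patch:

#include <stdbool.h>

/* Hypothetical stand-ins; the real error codes live in fsal_types.h
 * and the NFS protocol headers. */
enum fsal_err { ERR_FSAL_NO_ERROR = 0, ERR_FSAL_NOENT, ERR_FSAL_DELAY };

extern bool export_update_in_progress(void);             /* hypothetical */
extern enum fsal_err do_pseudo_lookup(const char *name); /* hypothetical */

/* A PseudoFS lookup that fails while the PseudoFS is being swapped out
 * under an export update is reported as a retryable delay
 * (ERR_FSAL_DELAY -> NFS4ERR_DELAY / NFS3ERR_JUKEBOX), so the client
 * backs off and retries against the completed update. */
enum fsal_err pseudo_lookup(const char *name)
{
        enum fsal_err err = do_pseudo_lookup(name);

        if (err != ERR_FSAL_NO_ERROR && export_update_in_progress())
                return ERR_FSAL_DELAY;

        return err;
}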
Change-Id: I507dc17a651936936de82303ff1291677ce136be
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
---
M src/FSAL/FSAL_PSEUDO/handle.c
M src/MainNFSD/libganesha_nfsd.ver
M src/Protocols/NFS/nfs4_pseudo.c
M src/include/export_mgr.h
M src/include/nfs_proto_functions.h
M src/support/export_mgr.c
M src/support/exports.c
7 files changed, 560 insertions(+), 203 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/34/490334/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/490334
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I507dc17a651936936de82303ff1291677ce136be
Gerrit-Change-Number: 490334
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz <ffilzlnx(a)mindspring.com>
Gerrit-MessageType: newchange
8 months, 2 weeks
Help needed on FD
by Alok Sinha
The FD created in the code path below does not have a correct link in
/proc/self/fd.
With vfs_readlink_by_handle, I can get a relative path but cannot get the
absolute path of the FD.
I need a suggestion on one of the following to work around a production bug:
- How do I get the full path of the FD? (A sketch of the usual procfs
  approach follows this list.)
- How do I get the parent vfs_fsal_obj_handle?
- Is there any way to bypass this flow by changing a config file?
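For reference, a minimal sketch of the usual procfs approach; as noted
above, it only yields a usable absolute path when the kernel still has a
connected dentry for the FD, which is exactly what can break for handles
materialized via open_by_handle_at():

#include <stdio.h>
#include <unistd.h>

/* Resolve an fd to a path via /proc.  For fds created from a file
 * handle the dentry may be disconnected, and this link can come back
 * relative or wrong, as described above. */
static ssize_t fd_to_path(int fd, char *buf, size_t buflen)
{
        char link[64];
        ssize_t n;

        snprintf(link, sizeof(link), "/proc/self/fd/%d", fd);
        n = readlink(link, buf, buflen - 1);
        if (n >= 0)
                buf[n] = '\0'; /* readlink() does not NUL-terminate */
        return n;
}

For a directory FD another option is to walk upward with openat(fd, "..")
and match st_dev/st_ino against the parent's entries (the classic getcwd
technique), but for a truly disconnected NFS handle the kernel simply may
not know any full path.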
#0  vfs_create_handle (exp_hdl=0x55801aca22c0, hdl_desc=0x7f93889b1600, handle=0x7f93889b13f8, attrs_out=0x7f93889b1430) at /home/alok/pub/splfs-cache-2.8.3/src/FSAL/FSAL_VFS/handle.c:2020
#1  0x00007f94a1f0e13b in mdcache_locate_host (fh_desc=0x7f93889b1600, export=0x55801aca1f20, entry=0x7f93889b1578, attrs_out=0x0) at /home/alok/pub/splfs-cache-2.8.3/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:1109
#2  0x00007f94a1f02455 in mdcache_create_handle (exp_hdl=0x55801aca1f20, fh_desc=0x7f93889b1600, handle=0x7f93889b15d8, attrs_out=0x0) at /home/alok/pub/splfs-cache-2.8.3/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c:1613
#3  0x00007f94a1ec5ead in nfs4_mds_putfh (data=0x7f93201119e0) at /home/alok/pub/splfs-cache-2.8.3/src/Protocols/NFS/nfs4_op_putfh.c:211
#4  0x00007f94a1ec60d3 in nfs4_op_putfh (op=0x7f93201186d0, data=0x7f93201119e0, resp=0x7f932010ab20) at /home/alok/pub/splfs-cache-2.8.3/src/Protocols/NFS/nfs4_op_putfh.c:281
#5  0x00007f94a1ea96ae in process_one_op (data=0x7f93201119e0, status=0x7f93889b1b48) at /home/alok/pub/splfs-cache-2.8.3/src/Protocols/NFS/nfs4_Compound.c:920
#6  0x00007f94a1eaa8c5 in nfs4_Compound (arg=0x7f9320138d78, req=0x7f93201385a0, res=0x7f9320106840) at /home/alok/pub/splfs-cache-2.8.3/src/Protocols/NFS/nfs4_Compound.c:1329
#7  0x00007f94a1de8567 in nfs_rpc_process_request (reqdata=0x7f93201385a0) at /home/alok/pub/splfs-cache-2.8.3/src/MainNFSD/nfs_worker_thread.c:1484
#8  0x00007f94a1de8854 in nfs_rpc_valid_NFS (req=0x7f93201385a0) at /home/alok/pub/splfs-cache-2.8.3/src/MainNFSD/nfs_worker_thread.c:1591
#9  0x00007f94a13b6faf in svc_vc_decode (req=0x7f93201385a0) at /home/alok/pub/splfs-cache-2.8.3/src/libntirpc/src/svc_vc.c:829
#10 0x00007f94a13b3225 in svc_request (xprt=0x7f9320000ce0, xdrs=0x7f9320066260) at /home/alok/pub/splfs-cache-2.8.3/src/libntirpc/src/svc_rqst.c:793
#11 0x00007f94a13b6ec0 in svc_vc_recv (xprt=0x7f9320000ce0) at /home/alok/pub/splfs-cache-2.8.3/src/libntirpc/src/svc_vc.c:802
#12 0x00007f94a13b31a5 in svc_rqst_xprt_task (wpe=0x7f9320000f00) at /home/alok/pub/splfs-cache-2.8.3/src/libntirpc/src/svc_rqst.c:774
#13 0x00007f94a13bcab0 in work_pool_thread (arg=0x7f9218003340) at /home/alok/pub/splfs-cache-2.8.3/src/libntirpc/src/work_pool.c:184
#14 0x00007f94a1badea5 in start_thread () from /lib64/libpthread.so.0
#15 0x00007f94a16ce9fd in clone () from /lib64/libc.so.6
--
Alok Sinha
www.spillbox.io
https://youtu.be/U-YupjLQ9bU
9 months, 1 week
Announce Push of V4.2
by Frank Filz
Branch next
Tag: V4.2
NOTE: This merge includes an ntirpc pullup. Please update your submodule.
Release Notes can be found on the github project wiki:
https://github.com/nfs-ganesha/nfs-ganesha/wiki/ReleaseNotes_4
Merge Highlights
* ntirpc: back out a905134fbfd16dc5722a613156877bb857fa35e6 - CID 275286
* check FSAL library path before load it
* ganesha_mgr:Fix show exports/clients "IndexError:"
* Make some XDR non-recursive
* Never reset cid_client_record, keep it valid
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
Contents:
a759bd9ca Frank S. Filz V4.2
65931838f Daniel Gryniewicz Pullup ntirpc to 4.2
43c28da6c Prabhu Murugesan check FSAL library path before load it
e1256f874 Prabhu Murugesan ganesha_mgr:Fix show exports/clients "IndexError:"
9dfa7e1bf Frank S. Filz PROXY_V3: Don't use recursion in READDIR3 and READDIRPLUS3
9776ea75b Frank S. Filz MOUNT: Don't use recursion in XDR for MOUNT protocol
f6a45cbbf Frank S. Filz Never reset cid_client_record, keep it valid
1 year, 11 months
[M] Change in ...nfs-ganesha[next]: PROXY_V3: Don't use recursion in READDIR3 and READDIRPLUS3
by Frank Filz (GerritHub)
Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/546804 )
Change subject: PROXY_V3: Don't use recursion in READDIR3 and READDIRPLUS3
......................................................................
PROXY_V3: Don't use recursion in READDIR3 and READDIRPLUS3
While this code is in xdr_nfs23.c and looks universal, it's not used
for serving READDIR3 and READDIRPLUS3 because there are special encode
functions that encode each entry into a buffer individually rather than
XDR encoding the entire response. The buffer is also freed as a whole.
So the changed functions would only be used for decode, which would
only apply to FSAL_PROXY_V3.
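For readers curious what the non-recursive decode looks like, here is a
generic sketch of the technique (not the actual patch): it assumes the
entry3 type from the NFSv3 headers, and xdr_entry3_body() is a
hypothetical per-entry decoder standing in for the real field-by-field
code in xdr_nfs23.c.

#include <stdlib.h>
#include <rpc/xdr.h>

/* Hypothetical per-entry decoder (fileid, name, cookie). */
extern bool_t xdr_entry3_body(XDR *xdrs, struct entry3 *ep);

/* Decode the entry list with a loop instead of letting xdr_pointer()
 * recurse once per entry, so a huge READDIR3 reply from the backend
 * server cannot overflow the stack. */
bool_t xdr_entry3_list(XDR *xdrs, struct entry3 **head)
{
        struct entry3 **next = head;
        bool_t more;

        for (;;) {
                /* Optional-pointer discriminant: TRUE means another
                 * entry follows. */
                if (!xdr_bool(xdrs, &more))
                        return FALSE;
                if (!more) {
                        *next = NULL;
                        return TRUE;
                }
                *next = calloc(1, sizeof(**next));
                if (*next == NULL || !xdr_entry3_body(xdrs, *next))
                        return FALSE;
                next = &(*next)->nextentry;
        }
}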
Change-Id: Ic3d29810f2a5dd4581eca8311057b2f204ba291f
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
---
M src/Protocols/XDR/xdr_nfs23.c
1 file changed, 95 insertions(+), 8 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/04/546804/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/546804
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: Ic3d29810f2a5dd4581eca8311057b2f204ba291f
Gerrit-Change-Number: 546804
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz <ffilzlnx(a)mindspring.com>
Gerrit-MessageType: newchange
1 year, 11 months
nfs-ganesha coredump
by Sagar Singh
We have seen this core dump twice in our lab environment and are not
sure about the root cause.
(gdb) bt
#0  0x00007f2571e9f4fb in raise () from /lib64/libpthread.so.0
#1  0x00007f25737ab337 in crash_handler (signo=6, info=0x7f254e2c6970, ctx=0x7f254e2c6840) at /usr/src/debug/nfs-ganesha-4.0.2/MainNFSD/nfs_init.c:256
#2  <signal handler called>
#3  0x00007f25716d6387 in raise () from /lib64/libc.so.6
#4  0x00007f25716d7a78 in abort () from /lib64/libc.so.6
#5  0x00007f2573814227 in _dec_session_ref (session=0x7f254a412e00, func=0x7f2573922c70 <__func__.22740> "nfs4_op_destroy_session", line=97) at /usr/src/debug/nfs-ganesha-4.0.2/SAL/nfs41_session_id.c:263
#6  0x00007f257386714b in nfs4_op_destroy_session (op=0x7f2556c380a0, data=0x7f255302c800, resp=0x7f2556c38000) at /usr/src/debug/nfs-ganesha-4.0.2/Protocols/NFS/nfs4_op_destroy_session.c:97
#7  0x00007f257385fa43 in process_one_op (data=0x7f255302c800, status=0x7f254e2c787c) at /usr/src/debug/nfs-ganesha-4.0.2/Protocols/NFS/nfs4_Compound.c:924
#8  0x00007f2573860acb in nfs4_Compound (arg=0x7f219c59d880, req=0x7f219c59d000, res=0x7f254a591100) at /usr/src/debug/nfs-ganesha-4.0.2/Protocols/NFS/nfs4_Compound.c:1339
#9  0x00007f25737a5f80 in nfs_rpc_process_request (reqdata=0x7f219c59d000, retry=false) at /usr/src/debug/nfs-ganesha-4.0.2/MainNFSD/nfs_worker_thread.c:2055
#10 0x00007f25737a65f9 in nfs_rpc_valid_NFS (req=0x7f219c59d000) at /usr/src/debug/nfs-ganesha-4.0.2/MainNFSD/nfs_worker_thread.c:2293
#11 0x00007f2573b7ead6 in svc_vc_decode (req=0x7f219c59d000) at /usr/src/debug/nfs-ganesha-4.0.2/libntirpc/src/svc_vc.c:1126
#12 0x00007f2573b79c59 in svc_request (xprt=0x7f2558813000, xdrs=0x7f24e63bbe40) at /usr/src/debug/nfs-ganesha-4.0.2/libntirpc/src/svc_rqst.c:1229
#13 0x00007f2573b7e9db in svc_vc_recv (xprt=0x7f2558813000) at /usr/src/debug/nfs-ganesha-4.0.2/libntirpc/src/svc_vc.c:1099
#14 0x00007f2573b79bbd in svc_rqst_xprt_task_recv (wpe=0x7f25588132f0) at /usr/src/debug/nfs-ganesha-4.0.2/libntirpc/src/svc_rqst.c:1209
#15 0x00007f2573b7a840 in svc_rqst_epoll_loop (wpe=0x7f256e91df18) at /usr/src/debug/nfs-ganesha-4.0.2/libntirpc/src/svc_rqst.c:1608
#16 0x00007f2573b87cda in work_pool_thread (arg=0x7f255ec32da0) at /usr/src/debug/nfs-ganesha-4.0.2/libntirpc/src/work_pool.c:190
#17 0x00007f2571e97ea5 in start_thread () from /lib64/libpthread.so.0
#18 0x00007f257179eb0d in clone () from /lib64/libc.so.6
*WE HAVE THESE FINDINGS:*
We destroy the session when its refcount drops to 0.
The fault occurred while destroying the slot mutex on line 263 of the
code below.
In gdb we found that some slots in the session had already been
destroyed while others were still valid.
239 int32_t _dec_session_ref(nfs41_session_t *session, const char *func, int line)
240 {
241         int i;
242         int32_t refcnt = atomic_dec_int32_t(&session->refcount);
243 #ifdef USE_LTTNG
244         tracepoint(nfs4, session_unref, func, line, session, refcnt);
245 #endif
246
247         if (refcnt == 0) {
248
249                 /* Unlink the session from the client's list of
250                    sessions */
251                 PTHREAD_MUTEX_lock(&session->clientid_record->cid_mutex);
252                 glist_del(&session->session_link);
253                 PTHREAD_MUTEX_unlock(&session->clientid_record->cid_mutex);
254
255                 /* Decrement our reference to the clientid record */
256                 dec_client_id_ref(session->clientid_record);
257                 /* Destroy this session's mutexes and condition variable */
258
259                 for (i = 0; i < session->nb_slots; i++) {
260                         nfs41_session_slot_t *slot;
261
262                         slot = &session->fc_slots[i];
263                         PTHREAD_MUTEX_destroy(&slot->lock);
264                         release_slot(slot);
265                 }
266         }
Valid and freed slots:
(gdb) p *(session->fc_slots + 53)
$84 = {sequence = 3, lock = {__data = {__lock = 0, __count = 0, __owner = 0,
      __nusers = 0, __kind = 0, __spins = 0, __elision = 0,
      __list = {__prev = 0x0, __next = 0x0}},
      __size = '\000' <repeats 39 times>, __align = 0},
      cached_result = 0x7f24f5c89dc0}
(gdb) p *(session->fc_slots + 54)
$85 = {sequence = 0, lock = {__data = {__lock = 0, __count = 0, __owner = 0,
      __nusers = 0, __kind = 0, __spins = 0, __elision = 0,
      __list = {__prev = 0x0, __next = 0x0}},
      __size = '\000' <repeats 39 times>, __align = 0},
      cached_result = 0x0}
This may happen if one thread is at line 372: it has fetched the session
pointer but has not yet incremented the refcount. Another thread then
decrements the session refcount to 0, so the session gets freed. The
first thread now increments the refcount at line 373 on the already-freed
session, and is left holding an invalid session.
343 int nfs41_Session_Get_Pointer(char sessionid[NFS4_SESSIONID_SIZE],
344                               nfs41_session_t **session_data)
345 {
346         struct gsh_buffdesc key;
347         struct gsh_buffdesc val;
348         struct hash_latch latch;
349         char str[LOG_BUFF_LEN] = "\0";
350         struct display_buffer dspbuf = {sizeof(str), str, str};
351         bool str_valid = false;
352         hash_error_t code;
353
354         if (isFullDebug(COMPONENT_SESSIONS)) {
355                 display_session_id(&dspbuf, sessionid);
356                 LogFullDebug(COMPONENT_SESSIONS, "Get Session %s", str);
357                 str_valid = true;
358         }
359
360         key.addr = sessionid;
361         key.len = NFS4_SESSIONID_SIZE;
362
363         code = hashtable_getlatch(ht_session_id, &key, &val, false, &latch);
364         if (code != HASHTABLE_SUCCESS) {
365                 hashtable_releaselatched(ht_session_id, &latch);
366                 if (str_valid)
367                         LogFullDebug(COMPONENT_SESSIONS,
368                                      "Session %s Not Found", str);
369                 return 0;
370         }
371
372         *session_data = val.addr;
373         inc_session_ref(*session_data); /* XXX more locks? */
374
375         hashtable_releaselatched(ht_session_id, &latch);
376
377         if (str_valid)
378                 LogFullDebug(COMPONENT_SESSIONS, "Session %s Found", str);
379
380         return 1;
381 }
We could avoid this by holding a lock from line 363 through line 373,
since the session should not be freed in between.
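If extending the locking is awkward, another common pattern for this
class of use-after-free is to take a reference only while the count is
still nonzero. A minimal self-contained C11 sketch (illustrative, not
the nfs-ganesha API):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Take a reference only if the refcount is still positive.  If a racing
 * _dec_session_ref() already dropped the count to 0, fail instead of
 * resurrecting a session that is being freed. */
static bool session_try_ref(atomic_int_fast32_t *refcount)
{
        int_fast32_t cur = atomic_load(refcount);

        while (cur > 0) {
                /* On failure the CAS reloads cur, and the loop re-checks
                 * it against zero before retrying. */
                if (atomic_compare_exchange_weak(refcount, &cur, cur + 1))
                        return true;
        }
        return false; /* treat like a hash miss: session not found */
}

With this, nfs41_Session_Get_Pointer() would return 0 when
session_try_ref() fails, exactly as if the hash lookup had missed.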
1 year, 11 months
[M] Change in ...nfs-ganesha[next]: Never reset cid_client_record, keep it valid
by Frank Filz (GerritHub)
Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/546512 )
Change subject: Never reset cid_client_record, keep it valid
......................................................................
Never reset cid_client_record, keep it valid
Maintain the validity and refcount for cid_client_record through the
entire life of any clientid that references the client record.
Change-Id: I16b298e5386c777fa66a7403fed79cf472a49e07
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
---
M src/MainNFSD/nfs_reaper_thread.c
M src/Protocols/NFS/nfs4_op_destroy_clientid.c
M src/SAL/nfs4_clientid.c
M src/SAL/nfs4_recovery.c
4 files changed, 35 insertions(+), 62 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/12/546512/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/546512
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I16b298e5386c777fa66a7403fed79cf472a49e07
Gerrit-Change-Number: 546512
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz <ffilzlnx(a)mindspring.com>
Gerrit-MessageType: newchange
1 year, 11 months
[S] Change in ...nfs-ganesha[next]: nfs4_op_open: Allow open with CLAIM_PREVIOUS when FSAL supports grace...
by Name of user not set (GerritHub)
shaharhoch(a)gmail.com has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/546497 )
Change subject: nfs4_op_open: Allow open with CLAIM_PREVIOUS when FSAL supports grace period
......................................................................
nfs4_op_open: Allow open with CLAIM_PREVIOUS when FSAL supports grace period
Currently, Ganesha only allows open with CLAIM_PREVIOUS when
`cid_allow_reclaim == true`, which requires using the `nfs4_recovery_backend`.
If we want the FSAL to handle reclaim requests (which is signaled
by fso_grace_method) and we don't use the recovery backend, we are
unable to reclaim locks. This patch allows open with CLAIM_PREVIOUS
when `fso_grace_method` is enabled, even if `cid_allow_reclaim` is not set.
Note that in nfs4_op_lock, when `fso_grace_method` is enabled, we
already ignore `cid_allow_reclaim` and pass the reclaim to the FSAL,
as you would expect. But before the client sends the LOCK operation
with reclaim, it opens with CLAIM_PREVIOUS, which failed with
`NFS4ERR_NO_GRACE`, so the client never even sent the reclaim
lock request.
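For context, a sketch of the relaxed check (illustrative only, not the
exact patch): reclaim_allowed() is a hypothetical helper, while
cid_allow_reclaim, fs_supports(), and fso_grace_method are the existing
Ganesha names, assuming the usual headers.

/* Permit CLAIM_PREVIOUS when either the recovery backend recorded the
 * client, or the FSAL owns grace handling via fso_grace_method. */
static bool reclaim_allowed(nfs_client_id_t *clientid)
{
        if (clientid->cid_allow_reclaim)
                return true;

        /* fso_grace_method: the FSAL decides reclaim validity itself. */
        return op_ctx->fsal_export->exp_ops.fs_supports(op_ctx->fsal_export,
                                                        fso_grace_method);
}

The CLAIM_PREVIOUS path in nfs4_op_open would then return
NFS4ERR_NO_GRACE only when reclaim_allowed() is false.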
Change-Id: I5973b3358e578f58c20b8fcf4c9c32da5c7877d7
Signed-off-by: Shahar Hochma <shaharhoch(a)gmail.com>
---
M src/Protocols/NFS/nfs4_op_open.c
1 file changed, 28 insertions(+), 3 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/97/546497/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/546497
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I5973b3358e578f58c20b8fcf4c9c32da5c7877d7
Gerrit-Change-Number: 546497
Gerrit-PatchSet: 1
Gerrit-Owner: shaharhoch(a)gmail.com
Gerrit-MessageType: newchange
1 year, 11 months
[S] Change in ...nfs-ganesha[next]: nfs4_clientid: On expire, clean up client record after releasing state
by Name of user not set (GerritHub)
shaharhoch(a)gmail.com has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/546427 )
Change subject: nfs4_clientid: On expire, clean up client record after releasing state
......................................................................
nfs4_clientid: On expire, clean up client record after releasing state
We do this because the FSAL might need to use information from the
client record for its operations. Specifically, in our FSAL, unlock
operations need the co_owner.
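A minimal sketch of that reordering (the helper names here are
hypothetical, standing in for the real expiry path in nfs4_clientid.c):

/* Hypothetical helpers standing in for the real expiry code. */
extern void release_all_state(nfs_client_id_t *clientid);
extern void cleanup_client_record(nfs_client_id_t *clientid);

/* Release state first, while the client record is still intact, so the
 * FSAL's unlock path can still read co_owner; only then tear the
 * record down. */
static void expire_clientid(nfs_client_id_t *clientid)
{
        release_all_state(clientid);     /* FSAL unlock may consult co_owner */
        cleanup_client_record(clientid); /* safe: no state references it now */
}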
Change-Id: Ibd5c0d7dc3c0b565a1f4f061805de8d6228644da
Signed-off-by: Shahar Hochma <shaharhoch(a)gmail.com>
---
M src/SAL/nfs4_clientid.c
1 file changed, 30 insertions(+), 16 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/27/546427/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/546427
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: Ibd5c0d7dc3c0b565a1f4f061805de8d6228644da
Gerrit-Change-Number: 546427
Gerrit-PatchSet: 1
Gerrit-Owner: shaharhoch(a)gmail.com
Gerrit-MessageType: newchange
1 year, 11 months