I haven't seen anything like this before.  The problem is that the 
resarray in the COMPOUND response is corrupted somehow, so it's actually a 
Ganesha bug rather than an ntirpc one.  Is this reproducible?  Do you 
have logs?  Do you know the workload that triggered this?
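
For what it's worth, frame #3 shows xdr_nfs_resop4() being handed 
objp=0x0: the per-element pointer computed from resarray.resarray_val 
is NULL even though resarray_len still says there are entries to 
encode.  A minimal sketch of that failure mode (simplified, hypothetical 
types and helper, not the actual generated code in nfsv41.h or ntirpc):

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical, trimmed-down stand-ins for the generated XDR types;
     * field names follow the rpcgen convention used in nfsv41.h. */
    struct nfs_resop4 {
            uint32_t resop;         /* op number, encoded first as an enum */
            /* ... per-op result union ... */
    };

    struct COMPOUND4res {
            uint32_t           status;
            uint32_t           resarray_len;
            struct nfs_resop4 *resarray_val;
    };

    /* Sketch of what the array-encode loop effectively does: each element
     * is handed to the per-element encoder by address.  If resarray_val is
     * NULL (or garbage) while resarray_len is non-zero, the first element
     * pointer is NULL and the encoder faults reading the op number -- the
     * "Cannot access memory at address 0x0" seen in xdr_putenum()/xdr_enum(). */
    static bool encode_resarray_sketch(const struct COMPOUND4res *res)
    {
            uint32_t i;

            for (i = 0; i < res->resarray_len; i++) {
                    const struct nfs_resop4 *op = &res->resarray_val[i];

                    if (op == NULL)    /* defensive check, not in the real code */
                            return false;

                    (void)op->resop;   /* the real code xdr_enum()s this */
            }
            return true;
    }

Under that reading, the resarray (or its length) was already bad by the 
time svc_vc_reply() tried to encode the reply, which is why I'd put 
this on the Ganesha side rather than in ntirpc.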
Daniel
On 10/22/19 5:57 AM, David C wrote:
 Hi All
 
 I've hit a segfault I haven't seen before; it seems to be related to ntirpc. 
 Please see the backtrace:
 
 (gdb) bt
 #0  xdr_putenum (enumv=<error reading variable: Cannot access memory at 
 address 0x0>, xdrs=0x7fd831761490) at 
 /usr/src/debug/nfs-ganesha-2.7.3/libntirpc/ntirpc/rpc/xdr.h:584
 #1  xdr_enum (xdrs=0x7fd831761490, ep=0x0) at 
 /usr/src/debug/nfs-ganesha-2.7.3/libntirpc/ntirpc/rpc/xdr_inline.h:405
 #2  0x0000000000456749 in xdr_nfs_opnum4 (objp=0x0, xdrs=0x7fd831761490) 
 at /usr/src/debug/nfs-ganesha-2.7.3/include/nfsv41.h:8065
 #3  xdr_nfs_resop4 (xdrs=0x7fd831761490, objp=0x0) at 
 /usr/src/debug/nfs-ganesha-2.7.3/include/nfsv41.h:8433
 #4  0x0000000000458afe in xdr_array_encode (cpp=<optimized out>, 
 sizep=<optimized out>, xdr_elem=0x456730 <xdr_nfs_resop4>, selem=160, 
 maxsize=1024, xdrs=0x7fd831761490) at 
 /usr/src/debug/nfs-ganesha-2.7.3/libntirpc/ntirpc/rpc/xdr_inline.h:851
 #5  xdr_array (xdr_elem=0x456730 <xdr_nfs_resop4>, selem=160, 
 maxsize=1024, sizep=<optimized out>, cpp=<optimized out>, 
 xdrs=0x7fd831761490) at 
 /usr/src/debug/nfs-ganesha-2.7.3/libntirpc/ntirpc/rpc/xdr_inline.h:894
 #6  xdr_COMPOUND4res (xdrs=0x7fd831761490, objp=<optimized out>) at 
 /usr/src/debug/nfs-ganesha-2.7.3/include/nfsv41.h:8779
 #7  0x00007fdc0cd0f89b in svc_vc_reply (req=0x7fd831777d30) at 
 /usr/src/debug/nfs-ganesha-2.7.3/libntirpc/src/svc_vc.c:887
 #8  0x0000000000451337 in nfs_rpc_process_request 
 (reqdata=0x7fd831777d30) at 
 /usr/src/debug/nfs-ganesha-2.7.3/MainNFSD/nfs_worker_thread.c:1384
 #9  0x0000000000450766 in nfs_rpc_decode_request (xprt=0x7fdb1c00a0d0, 
 xdrs=0x7fd831f6e190) at 
 /usr/src/debug/nfs-ganesha-2.7.3/MainNFSD/nfs_rpc_dispatcher_thread.c:1345
 #10 0x00007fdc0cd0d07d in svc_rqst_xprt_task (wpe=0x7fdb1c00a2e8) at 
 /usr/src/debug/nfs-ganesha-2.7.3/libntirpc/src/svc_rqst.c:769
 #11 0x00007fdc0cd0d59a in svc_rqst_epoll_events (n_events=<optimized 
 out>, sr_rec=0x53136a0) at 
 /usr/src/debug/nfs-ganesha-2.7.3/libntirpc/src/svc_rqst.c:941
 #12 svc_rqst_epoll_loop (sr_rec=<optimized out>) at 
 /usr/src/debug/nfs-ganesha-2.7.3/libntirpc/src/svc_rqst.c:1014
 #13 svc_rqst_run_task (wpe=0x53136a0) at 
 /usr/src/debug/nfs-ganesha-2.7.3/libntirpc/src/svc_rqst.c:1050
 #14 0x00007fdc0cd15123 in work_pool_thread (arg=0x7fd86000a960) at 
 /usr/src/debug/nfs-ganesha-2.7.3/libntirpc/src/work_pool.c:181
 #15 0x00007fdc0b2cddd5 in start_thread (arg=0x7fdafffff700) at 
 pthread_create.c:307
 #16 0x00007fdc0a444ead in clone () at 
 ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
 
 Memory usage on the server was quite high at the time, so I wonder if 
 that's related?
 
 nfs-ganesha-2.7.3-0.1.el7.x86_64
 nfs-ganesha-ceph-2.7.3-0.1.el7.x86_64
 libcephfs2-14.2.1-0.el7.x86_64
 librados2-14.2.1-0.el7.x86_64
 
 Thanks,
 
 