Change in ...nfs-ganesha[next]: MDCACHE - Add MDCACHE {} config block
by Daniel Gryniewicz (GerritHub)
Daniel Gryniewicz has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/454929 )
Change subject: MDCACHE - Add MDCACHE {} config block
......................................................................
MDCACHE - Add MDCACHE {} config block
Add a config block named MDCACHE that is a copy of CACHEINODE. Both can
be configured, but MDCACHE will override CACHEINODE. This allows us to
deprecate CACHEINODE.
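For illustration, a configuration could then use the new block name directly; a minimal sketch, assuming Entries_HWMark remains one of the existing cache parameters:

    MDCACHE
    {
        # Same parameters as the old CACHEINODE block; if both
        # blocks are present, MDCACHE wins.
        Entries_HWMark = 100000;
    }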
Change-Id: I49012723132ae6105b904a60d1a96bb2bf78d51b
Signed-off-by: Daniel Gryniewicz <dang@fprintf.net>
---
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_read_conf.c
M src/config_samples/ceph.conf
M src/config_samples/config.txt
M src/config_samples/ganesha.conf.example
M src/doc/man/ganesha-cache-config.rst
M src/doc/man/ganesha-config.rst
6 files changed, 31 insertions(+), 7 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/29/454929/1
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I49012723132ae6105b904a60d1a96bb2bf78d51b
Gerrit-Change-Number: 454929
Gerrit-PatchSet: 1
Gerrit-Owner: Daniel Gryniewicz <dang@redhat.com>
Gerrit-MessageType: newchange
lseek gets bad offset from nfs client with ganesha/gluster which supports SEEK
by Kinglong Mee
The latest ganesha/gluster supports SEEK according to
https://tools.ietf.org/html/draft-ietf-nfsv4-minorversion2-41#section-15.11:
From the given sa_offset, find the next data_content4 of type sa_what
in the file. If the server can not find a corresponding sa_what,
then the status will still be NFS4_OK, but sr_eof would be TRUE. If
the server can find the sa_what, then the sr_offset is the start of
that content. If the sa_offset is beyond the end of the file, then
SEEK MUST return NFS4ERR_NXIO.
For a file whose filemap is:
Part 1: HOLE 0x0000000000000000 ---> 0x0000000000600000
Part 2: DATA 0x0000000000600000 ---> 0x0000000000700000
Part 3: HOLE 0x0000000000700000 ---> 0x0000000001000000
SEEK(0x700000, SEEK_DATA) returns (sr_eof:1, sr_offset:0x700000) from ganesha/gluster;
SEEK(0x700000, SEEK_HOLE) returns (sr_eof:0, sr_offset:0x700000) from ganesha/gluster.
If an application depends on the lseek() result when searching for data, it may enter an infinite loop:
while (1) {
	next_pos = lseek(fd, cur_pos, seek_type);
	if (seek_type == SEEK_DATA) {
		seek_type = SEEK_HOLE;
	} else {
		seek_type = SEEK_DATA;
	}
	if (next_pos == -1)
		return;
	/* If the server keeps answering with cur_pos itself for both
	 * SEEK_DATA and SEEK_HOLE, cur_pos never advances and this
	 * loops forever. */
	cur_pos = next_pos;
}
The lseek syscall always returns 0x700000 from the NFS client in those two cases;
but if the underlying filesystem is ext4/f2fs, or the NFS server is knfsd,
lseek(0x700000, SEEK_DATA) fails with ENXIO.
I want to know:
should I fix ganesha/gluster to return ENXIO for the first case, as knfsd does?
Or should I fix the NFS client to return ENXIO for the first case? (Note that the
quoted draft only mandates NFS4ERR_NXIO when sa_offset is beyond EOF; 0x700000 is
within the file here, so the server's (NFS4_OK, sr_eof:1) reply arguably conforms,
and the client would then have to map sr_eof to ENXIO for SEEK_DATA itself.)
thanks,
Kinglong Mee
Crash in dupreq
by katcherw@gmail.com
We saw a crash in version 2.8.2 while running an iozone test:
Program terminated with signal 11, Segmentation fault.
#0 0x00007ff028dce240 in atomic_sub_uint32_t (var=0x104, sub=1) at /usr/src/debug/nfs-ganesha-2.8.2/include/abstract_atomic.h:384
384 return __atomic_sub_fetch(var, sub, __ATOMIC_SEQ_CST);
(gdb) bt
#0 0x00007ff028dce240 in atomic_sub_uint32_t (var=0x104, sub=1) at /usr/src/debug/nfs-ganesha-2.8.2/include/abstract_atomic.h:384
#1 0x00007ff028dce265 in atomic_dec_uint32_t (var=0x104) at /usr/src/debug/nfs-ganesha-2.8.2/include/abstract_atomic.h:405
#2 0x00007ff028dd1b65 in dupreq_entry_put (dv=0x0) at /usr/src/debug/nfs-ganesha-2.8.2/RPCAL/nfs_dupreq.c:912
#3 0x00007ff028dd4630 in nfs_dupreq_rele (req=0x7fec44a269f0, func=0x7ff029159d60 <nfs3_func_desc+672>) at /usr/src/debug/nfs-ganesha-2.8.2/RPCAL/nfs_dupreq.c:1382
#4 0x00007ff028d74bc5 in free_args (reqdata=0x7fec44a269f0) at /usr/src/debug/nfs-ganesha-2.8.2/MainNFSD/nfs_worker_thread.c:693
#5 0x00007ff028d77783 in nfs_rpc_process_request (reqdata=0x7fec44a269f0) at /usr/src/debug/nfs-ganesha-2.8.2/MainNFSD/nfs_worker_thread.c:1509
#6 0x00007ff028d77a74 in nfs_rpc_valid_NFS (req=0x7fec44a269f0) at /usr/src/debug/nfs-ganesha-2.8.2/MainNFSD/nfs_worker_thread.c:1601
#7 0x00007ff028b2632d in svc_vc_decode (req=0x7fec44a269f0) at /usr/src/debug/nfs-ganesha-2.8.2/libntirpc/src/svc_vc.c:829
#8 0x00007ff028b227bf in svc_request (xprt=0x7fee0c202ad0, xdrs=0x7fec3c987a70) at /usr/src/debug/nfs-ganesha-2.8.2/libntirpc/src/svc_rqst.c:793
#9 0x00007ff028b2623e in svc_vc_recv (xprt=0x7fee0c202ad0) at /usr/src/debug/nfs-ganesha-2.8.2/libntirpc/src/svc_vc.c:802
#10 0x00007ff028b22740 in svc_rqst_xprt_task (wpe=0x7fee0c202cf0) at /usr/src/debug/nfs-ganesha-2.8.2/libntirpc/src/svc_rqst.c:774
#11 0x00007ff028b23048 in svc_rqst_epoll_loop (wpe=0x108d570) at /usr/src/debug/nfs-ganesha-2.8.2/libntirpc/src/svc_rqst.c:1089
#12 0x00007ff028b2baff in work_pool_thread (arg=0x7fed44052070) at /usr/src/debug/nfs-ganesha-2.8.2/libntirpc/src/work_pool.c:184
#13 0x00007ff0270c8df3 in start_thread () from /lib64/libpthread.so.0
#14 0x00007ff0267cd3dd in clone () from /lib64/libc.so.6
(gdb) frame 5
#5 0x00007ff028d77783 in nfs_rpc_process_request (reqdata=0x7fec44a269f0) at /usr/src/debug/nfs-ganesha-2.8.2/MainNFSD/nfs_worker_thread.c:1509
1509 free_args(reqdata);
(gdb) p *reqdata
$2 = {
svc = {
rq_xprt = 0x7fee0c202ad0,
rq_clntname = 0x0,
rq_svcname = 0x0,
rq_xdrs = 0x7fec3c987a70,
rq_u1 = 0x0,
rq_u2 = 0x0,
rq_cksum = 14781944753697519387,
rq_auth = 0x7ff028d45cd0 <svc_auth_none>,
rq_ap1 = 0x0,
rq_ap2 = 0x0,
...
Notice rq_u1 == 0. nfs_rpc_process_request() calls free_args(), which calls nfs_dupreq_rele(). There is a comment at the beginning of nfs_dupreq_rele():
* We assert req->rq_u1 now points to the corresponding duplicate request
* cache entry (dv).
But this assumption doesn't hold here: dv is NULL (frame #2), so the address 0x104 passed to atomic_sub_uint32_t() is just the refcount's offset within a NULL dupreq_entry, and dereferencing it is what crashes. We're trying to reproduce the problem with NIV_FULL_DEBUG.
Has anyone seen this problem before, or know what would cause rq_u1 to be 0? Should there be a check for NULL in nfs_dupreq_rele in addition to DUPREQ_NOCACHE?
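For concreteness, a guard of the sort being asked about might look like this sketch (hypothetical; pieced together from the backtrace's signature and the existing DUPREQ_NOCACHE handling, not a verified patch):

	void nfs_dupreq_rele(struct svc_req *req, const nfs_function_desc_t *func)
	{
		dupreq_entry_t *dv = (dupreq_entry_t *)req->rq_u1;

		/* rq_u1 was NULL in the reported crash: guard it here and
		 * take the same early-out path as DUPREQ_NOCACHE instead of
		 * passing a NULL entry to dupreq_entry_put(). */
		if (dv == NULL) {
			/* free request arguments as the NOCACHE path does */
			return;
		}
		/* ... existing release path: dupreq_entry_put(dv), etc. ... */
	}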
Bill
Re: Default KRB Principal name with new rpc.gssd
by Daniel Gryniewicz
I'm sorry, I'm very far from an expert in GSS/krb5. In particular, I've
never used NFS and gssd together, so I have no personal experience.
Maybe someone else on the list can help?
For questions like this, our default has been "what does knfsd do?" Do
you happen to know if knfsd accepts "shorthostname@REALM.COM" as a root
user?
Daniel
On 1/18/20 11:50 PM, Pushpesh Sharma wrote:
>
> Hi All,
>
> Any pointers are appreciated..
>
> -pushpesh
>
> *From:* Pushpesh Sharma <pushpeshs@vmware.com>
> *Sent:* Thursday, January 16, 2020 7:45 PM
> *To:* devel@lists.nfs-ganesha.org
> *Subject:* [NFS-Ganesha-Devel] Default KRB Principal name with new
> rpc.gssd
>
> Hi All,
>
> We are trying to use ganesha with KRB. On an NFS client (CentOS 7.5) we
> are joining an Active Directory domain using sssd (AD-based KRB realm).
> For the root user we obtain a krb ticket using kinit and a valid AD user.
>
> When the client joins the domain using sssd, a default principal of
> shorthostname$@REALM.COM is always generated. rpc.gssd by default sends
> this principal as the principal user name to the ganesha server. The
> client does have other principals, such as nfs/client_fqdn@REALM.COM,
> but per the rpc.gssd documentation the machine principal
> shorthostname$@REALM.COM would be the first choice.
>
> Due to this, the server always recognizes the root user as someone
> else, i.e. shorthostname$@REALM.COM.
>
> We do see handling in src/idmapper/idmapper.c for mapping the three
> principal patterns nfs/*, root/*, host/* to uid=0, gid=0. So the
> shorthostname$@REALM.COM principal is not recognized as root.
>
> We are of the opinion that a small fix in the above idmapper code,
> adding this pattern as well, would fix the issue.
>
> But we wanted to know: is there any security concern around doing that?
> Or can the client behavior be changed in some way, so that we don't
> need this fix?
>
> If we try removing the shorthostname$@REALM.COM principal after the
> domain join, sssd cannot be reloaded and complains about not finding
> this principal.
>
> Thanks
>
> -pushpesh
>
>
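For reference, the mapping being discussed is a simple prefix match on the principal string; a hypothetical sketch of the proposed addition (names are illustrative, not the actual idmapper.c identifiers):

	#include <stdbool.h>
	#include <string.h>

	/* Hypothetical sketch: treat machine-account principals
	 * (short hostname followed by '$', e.g. host1$@REALM.COM) as root,
	 * alongside the existing nfs/, root/ and host/ patterns. */
	static bool principal_maps_to_root(const char *principal)
	{
		const char *at = strchr(principal, '@');

		if (strncmp(principal, "nfs/", 4) == 0 ||
		    strncmp(principal, "root/", 5) == 0 ||
		    strncmp(principal, "host/", 5) == 0)
			return true;

		/* Proposed addition: the component before '@' ends in '$'. */
		if (at != NULL && at > principal && at[-1] == '$')
			return true;

		return false;
	}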
Race condition when caching GSS context
by Sriram Patil
Hi,
While setting up some Kerberos shares with ganesha, I came across an issue where the client receives RPCSEC_GSS_CREDPROBLEM. The source of this error is the following code in the function _svcauth_gss(), which is used when authenticating RPCSEC_GSS requests:
	/* Context lookup. */
	if ((gc->gc_proc == RPCSEC_GSS_DATA)
	    || (gc->gc_proc == RPCSEC_GSS_DESTROY)) {

		/* Per RFC 2203 5.3.3.3, if a valid security context
		 * cannot be found to authorize a request, the
		 * implementation returns RPCSEC_GSS_CREDPROBLEM.
		 * N.B., we are explicitly allowed to discard contexts
		 * for any reason (e.g., to save space). */
		gd = authgss_ctx_hash_get(gc);
		if (!gd) {
			rc = RPCSEC_GSS_CREDPROBLEM;
			goto cred_free;
		}
		if (gc->gc_svc != gd->sec.svc)
			gd->sec.svc = gc->gc_svc;
	}
When the GSS proc is GSS_DATA or GSS_DESTROY, ntirpc expects the context to be in the cache; if it is not, it returns RPCSEC_GSS_CREDPROBLEM.
For that lookup to succeed, the context must have been inserted into the cache while handling the RPCSEC_GSS_INIT request. The svc_rpc_gss_data object is inserted into the cache in the same function, but note the order in the dispatch code: when the request is RPCSEC_GSS_INIT, ntirpc sends the reply back to the client before caching the context:
	*no_dispatch = true;

	req->rq_msg.RPCM_ack.ar_results.where = &gr;
	req->rq_msg.RPCM_ack.ar_results.proc =
	    (xdrproc_t) xdr_rpc_gss_init_res;
	call_stat = svc_sendreply(req);    /* <-- Sending reply to client here */

	/* XXX */
	gss_release_buffer(&min_stat, &gr.gr_token);
	gss_release_buffer(&min_stat, &gd->checksum);
	mem_free(gr.gr_ctx.value, 0);

	if (call_stat >= XPRT_DIED) {
		rc = AUTH_FAILED;
		goto gd_free;
	}

	if (gr.gr_major == GSS_S_COMPLETE) {
		……
		……
		(void)authgss_ctx_hash_set(gd);    /* <-- Inserting the svc_rpc_gss_data object into the cache */
	}
In my setup I observed exactly this: the client can send another request, say EXCHANGE_ID with RPCSEC_GSS_DATA, and the server ends up returning the CREDPROBLEM error if that worker thread gets scheduled first and tries to authenticate the request before the context has been inserted into the cache. I believe we should send the reply only after inserting the context into the cache.
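A minimal sketch of that reordering (illustrative only, not a tested patch; a send failure would then also need to remove the just-cached entry):

	if (gr.gr_major == GSS_S_COMPLETE) {
		/* Cache the context first, so that a concurrently arriving
		 * RPCSEC_GSS_DATA request can find it. */
		(void)authgss_ctx_hash_set(gd);
	}

	call_stat = svc_sendreply(req);    /* reply only after caching */
	if (call_stat >= XPRT_DIED) {
		rc = AUTH_FAILED;
		goto gd_free;    /* would also need to unhash the cached entry */
	}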
Please find some logs below which confirm the race. svc_10 handles the RPCSEC_GSS_INIT request and svc_4 the RPCSEC_GSS_DATA request. We can observe that rbtree_x_cached_lookup happens before rbtree_x_cached_insert for hash key f8003f00:
2020-01-30T16:52:42Z : ganesha.nfsd-2248[none] [svc_10] 0 :rpc :TIRPC :xdr_reply_encode:109 SUCCESS
2020-01-30T16:52:42Z : ganesha.nfsd-2248[none] [svc_10] 0 :rpc :TIRPC :xdr_rpc_gss_encode() success (0x7f89fc002360:16)
2020-01-30T16:52:42Z : ganesha.nfsd-2248[none] [svc_10] 0 :rpc :TIRPC :xdr_rpc_gss_encode() success (0x7f89fc008bb0:155)
2020-01-30T16:52:42Z : ganesha.nfsd-2248[none] [svc_10] 0 :rpc :TIRPC :xdr_rpc_gss_init_res() encode success (ctx 0x7f89fc002360:16, maj 0, min 0, win 32, token 0x7f89fc008bb0:155)
2020-01-30T16:52:42Z : ganesha.nfsd-2248[none] [svc_10] 0 :rpc :TIRPC :svc_ref_it() 0x7f8a04000a90 fd 32 xp_refcnt 5 af 10 port 35536 @svc_ioq_write_now:265
2020-01-30T16:52:42Z : ganesha.nfsd-2248[none] [svc_10] 0 :rpc :TIRPC :svc_release_it() 0x7f8a04000a90 fd 32 xp_refcnt 4 af 10 port 35536 @svc_ioq_write:233
2020-01-30T16:52:42Z : ganesha.nfsd-2248[none] [svc_10] 0 :rpc :TIRPC :xdr_ioq_destroy() xioq 0x7f89fc002800
....
....
2020-01-30T16:52:42Z : ganesha.nfsd-2248[none] [svc_4] 0 :rpc :TIRPC :xdr_rpc_gss_cred() decode success (v 1, proc 0, seq 1, svc 2, ctx 0x7f8a0c001780:16)
2020-01-30T16:52:42Z : ganesha.nfsd-2248[none] [svc_4] 0 :rpc :TIRPC :rbtree_x_cached_lookup: t 0x7f8a0c0021c0 nk 0x7f8a20153e80 nv (nil)( hk f8003f00 slot/offset 56)
2020-01-30T16:52:42Z : ganesha.nfsd-2248[none] [svc_4] 0 :rpc :TIRPC :xdr_rpc_gss_cred() decode success (v 1, proc 0, seq 1, svc 2, ctx 0x7f8a0c001780:16)
2020-01-30T16:52:42Z : ganesha.nfsd-2248[none] [svc_4] 728 :nfs_rpc_process_request :DISP :Could not authenticate request... rejecting with AUTH_STAT=RPCSEC_GSS_CREDPROBLEM
....
....
2020-01-30T16:52:42Z : ganesha.nfsd-2248[none] [svc_10] 0 :rpc :TIRPC :rbtree_x_cached_insert: cix 9 ct 0x7f8a0c0021c0 t 0x7f8a0c0021c0 inserting 0x7f89fc001890 (cache hk f8003f00 slot/offset 56) flags 3
2020-01-30T16:52:42Z : ganesha.nfsd-2248[none] [svc_10] 0 :rpc :TIRPC :xdr_rpc_gss_cred() decode success (v 1, proc 1, seq 0, svc 1, ctx (nil):0)
2020-01-30T16:52:42Z : ganesha.nfsd-2248[none] [svc_10] 740 :nfs_rpc_process_request :DISP :RPCSEC_GSS no_dispatch=1 gc->gc_proc=(1) RPCSEC_GSS_INIT
Is this a known issue or being worked on already?
Thanks,
Sriram
Change in ...nfs-ganesha[next]: Unref the current chunk rather than the next one!
by Malahal (GerritHub)
Malahal has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/482818 )
Change subject: Unref the current chunk rather than the next one!
......................................................................
Unref the current chunk rather than the next one!
The code unrefs the next chunk, the one we are going to use. It should
actually unref the current chunk, whose dentries have all been deleted!
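In diff form, the fix is shaped like this (identifier names are illustrative, not the exact mdcache_avl.c code):

-	mdcache_lru_unref_chunk(next_chunk);	/* wrong: releases the chunk we are about to use */
+	mdcache_lru_unref_chunk(chunk);	/* releases the chunk whose dentries were deleted */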
Change-Id: I445683314314d9bb85f9b7914fd8fc05c1ab05a6
Signed-off-by: Malahal Naineni <malahal@us.ibm.com>
---
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_avl.c
1 file changed, 1 insertion(+), 1 deletion(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/18/482818/1
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I445683314314d9bb85f9b7914fd8fc05c1ab05a6
Gerrit-Change-Number: 482818
Gerrit-PatchSet: 1
Gerrit-Owner: Malahal <malahal@gmail.com>
Gerrit-MessageType: newchange
Change in ...nfs-ganesha[next]: Fix ganesha_stats exceptions while the daemon is going down
by Malahal (GerritHub)
Malahal has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/482784 )
Change subject: Fix ganesha_stats exceptions while the daemon is going down
......................................................................
Fix ganesha_stats exceptions while the daemon is going down
The classes RetrieveExportStats and RetrieveClientStats get the ganesha
daemon's dbus python proxy object under a try/except block, but fetching
the actual dbus method and invoking it are done without any exception
handling.
It is better to handle exceptions in scripts rather than in class
definitions, so the exception handling was moved from the class
definitions to the ganesha_stats script, and expanded to cover dbus
method invocation as well.
The ganesha_stats script now fetches the dbus interface object itself.
Change-Id: I5dbde198479922758dc0fab85a0a9a3b41fb925d
Signed-off-by: Malahal Naineni <malahal@us.ibm.com>
---
M src/scripts/ganeshactl/Ganesha/glib_dbus_stats.py
M src/scripts/ganeshactl/ganesha_stats.py
2 files changed, 51 insertions(+), 56 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/84/482784/1
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I5dbde198479922758dc0fab85a0a9a3b41fb925d
Gerrit-Change-Number: 482784
Gerrit-PatchSet: 1
Gerrit-Owner: Malahal <malahal@gmail.com>
Gerrit-MessageType: newchange
Announce Push of V4-dev.3
by Frank Filz
Branch: next
Tag: V4-dev.3
Merge Highlights
* FSAL_CEPH - Always use the large handle size
Signed-off-by: Frank S. Filz <ffilzlnx@mindspring.com>
Contents:
fd242dc Frank S. Filz V4-dev.3
18928be Daniel Gryniewicz FSAL_CEPH - Always use the large handle size