Change in ...nfs-ganesha[next]: MDCACHE - Add MDCACHE {} config block
by Daniel Gryniewicz (GerritHub)
Daniel Gryniewicz has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/454929 )
Change subject: MDCACHE - Add MDCACHE {} config block
......................................................................
MDCACHE - Add MDCACHE {} config block
Add a config block named MDCACHE that is a copy of CACHEINODE. Both can
be configured, but MDCACHE will override CACHEINODE. This allows us to
deprecate CACHEINODE.
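For example, an existing CACHEINODE block can be carried over verbatim under
the new name; the parameter names and values below are just illustrative
existing cache settings with their usual defaults, not a recommendation:

    # Old name, still accepted for now:
    CACHEINODE {
        Dir_Chunk = 128;
        Entries_HWMark = 100000;
    }

    # New name; if both blocks are present, MDCACHE wins:
    MDCACHE {
        Dir_Chunk = 128;
        Entries_HWMark = 100000;
    }

Configurations that only have a CACHEINODE block keep working; moving to
MDCACHE is just a rename of the block.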
Change-Id: I49012723132ae6105b904a60d1a96bb2bf78d51b
Signed-off-by: Daniel Gryniewicz <dang(a)fprintf.net>
---
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_read_conf.c
M src/config_samples/ceph.conf
M src/config_samples/config.txt
M src/config_samples/ganesha.conf.example
M src/doc/man/ganesha-cache-config.rst
M src/doc/man/ganesha-config.rst
6 files changed, 31 insertions(+), 7 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/29/454929/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/454929
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I49012723132ae6105b904a60d1a96bb2bf78d51b
Gerrit-Change-Number: 454929
Gerrit-PatchSet: 1
Gerrit-Owner: Daniel Gryniewicz <dang(a)redhat.com>
Gerrit-MessageType: newchange
4 years, 1 month
lseek gets bad offset from nfs client with ganesha/gluster which supports SEEK
by Kinglong Mee
The latest ganesha/gluster supports SEEK as described in
https://tools.ietf.org/html/draft-ietf-nfsv4-minorversion2-41#section-15.11
From the given sa_offset, find the next data_content4 of type sa_what
in the file. If the server can not find a corresponding sa_what,
then the status will still be NFS4_OK, but sr_eof would be TRUE. If
the server can find the sa_what, then the sr_offset is the start of
that content. If the sa_offset is beyond the end of the file, then
SEEK MUST return NFS4ERR_NXIO.
For a file whose filemap is:
Part 1: HOLE 0x0000000000000000 ---> 0x0000000000600000
Part 2: DATA 0x0000000000600000 ---> 0x0000000000700000
Part 3: HOLE 0x0000000000700000 ---> 0x0000000001000000
SEEK(0x700000, SEEK_DATA) gets result (sr_eof:1, sr_offset:0x700000) from ganesha/gluster;
SEEK(0x700000, SEEK_HOLE) gets result (sr_eof:0, sr_offset:0x700000) from ganesha/gluster.
If an application depends on the lseek result for data searching, it may enter an infinite loop:
while (1) {
	next_pos = lseek(fd, cur_pos, seek_type);
	if (next_pos == -1)
		return;

	/* alternate between searching for data and for holes */
	if (seek_type == SEEK_DATA)
		seek_type = SEEK_HOLE;
	else
		seek_type = SEEK_DATA;

	cur_pos = next_pos;
}
The lseek syscall always returns 0x700000 from the NFS client for those two
cases, but if the underlying filesystem is ext4/f2fs, or the NFS server is
knfsd, lseek(0x700000, SEEK_DATA) fails with ENXIO.
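A minimal standalone check of this difference could look like the following
sketch (the file path is a placeholder; the file needs the layout described
above):

#define _GNU_SOURCE		/* for SEEK_DATA / SEEK_HOLE */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/mnt/nfs/testfile";	/* placeholder path */
	int fd = open(path, O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* 0x700000 is the start of the trailing hole in the layout above. */
	off_t data = lseek(fd, 0x700000, SEEK_DATA);
	if (data == (off_t)-1)
		printf("SEEK_DATA: %s\n", strerror(errno));	/* ENXIO on ext4 or via knfsd */
	else
		printf("SEEK_DATA: 0x%llx\n", (long long)data);	/* 0x700000 via ganesha/gluster */

	off_t hole = lseek(fd, 0x700000, SEEK_HOLE);
	printf("SEEK_HOLE: 0x%llx\n", (long long)hole);		/* expected 0x700000 */

	close(fd);
	return 0;
}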
I want to know:
should I fix ganesha/gluster to return ENXIO for the first case, as knfsd does?
or should I fix the NFS client to return ENXIO for the first case?
thanks,
Kinglong Mee
4 years, 3 months
Re: [Nfs-ganesha-devel] 2.7.3 with CEPH_FSAL Crashing
by Daniel Gryniewicz
This is not one I've seen before, and from a quick look, the code path
looks strange. The only assert in that bit asserts that the parent is a
directory, but the parent directory is not something that was passed in
by Ganesha; rather, it was looked up internally in libcephfs. This is
beyond my expertise at this point. Maybe some ceph logs would help?
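If it helps, something along these lines against the coredump should show
which inode the lookup was walking (the binary path and the need for the
matching debuginfo packages are assumptions for a RHEL-style install):

    # assumes nfs-ganesha, libcephfs2 and ceph debuginfo packages are installed
    gdb /usr/bin/ganesha.nfsd /path/to/coredump
    (gdb) bt full        # backtrace with local variables
    (gdb) frame 5        # Client::ll_lookup_inode() in libcephfs
    (gdb) info args      # the inodeno_t being looked up
    (gdb) frame 6        # create_handle() in FSAL_CEPH/export.c
    (gdb) info locals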
Daniel
On 7/15/19 10:54 AM, David C wrote:
> This list has been deprecated. Please subscribe to the new devel list at lists.nfs-ganesha.org.
>
>
> Hi All
>
> I'm running 2.7.3 using the CEPH FSAL to export CephFS (Luminous), it
> ran well for a few days and crashed. I have a coredump, could someone
> assist me in debugging this please?
>
> (gdb) bt
> #0 0x00007f04dcab6207 in raise () from /lib64/libc.so.6
> #1 0x00007f04dcab78f8 in abort () from /lib64/libc.so.6
> #2 0x00007f04d2a9d6c5 in ceph::__ceph_assert_fail(char const*, char
> const*, int, char const*) () from /usr/lib64/ceph/libceph-common.so.0
> #3 0x00007f04d2a9d844 in ceph::__ceph_assert_fail(ceph::assert_data
> const&) () from /usr/lib64/ceph/libceph-common.so.0
> #4 0x00007f04cc807f04 in Client::_lookup_name(Inode*, Inode*, UserPerm
> const&) () from /lib64/libcephfs.so.2
> #5 0x00007f04cc81c41f in Client::ll_lookup_inode(inodeno_t, UserPerm
> const&, Inode**) () from /lib64/libcephfs.so.2
> #6 0x00007f04ccadbf0e in create_handle (export_pub=0x1baff10,
> desc=<optimized out>, pub_handle=0x7f0470fd4718,
> attrs_out=0x7f0470fd4740) at
> /usr/src/debug/nfs-ganesha-2.7.3/FSAL/FSAL_CEPH/export.c:256
> #7 0x0000000000523895 in mdcache_locate_host (fh_desc=0x7f0470fd4920,
> export=export@entry=0x1bafbf0, entry=entry@entry=0x7f0470fd48b8,
> attrs_out=attrs_out@entry=0x0)
> at
> /usr/src/debug/nfs-ganesha-2.7.3/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:1011
> #8 0x000000000051d278 in mdcache_create_handle (exp_hdl=0x1bafbf0,
> fh_desc=<optimized out>, handle=0x7f0470fd4900, attrs_out=0x0) at
> /usr/src/debug/nfs-ganesha-2.7.3/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c:1578
> #9 0x000000000046d404 in nfs4_mds_putfh
> (data=data@entry=0x7f0470fd4ea0) at
> /usr/src/debug/nfs-ganesha-2.7.3/Protocols/NFS/nfs4_op_putfh.c:211
> #10 0x000000000046d8e8 in nfs4_op_putfh (op=0x7f03effaf1d0,
> data=0x7f0470fd4ea0, resp=0x7f03ec1de1f0) at
> /usr/src/debug/nfs-ganesha-2.7.3/Protocols/NFS/nfs4_op_putfh.c:281
> #11 0x000000000045d120 in nfs4_Compound (arg=<optimized out>,
> req=<optimized out>, res=0x7f03ec1de9d0) at
> /usr/src/debug/nfs-ganesha-2.7.3/Protocols/NFS/nfs4_Compound.c:942
> #12 0x00000000004512cd in nfs_rpc_process_request
> (reqdata=0x7f03ee5ed4b0) at
> /usr/src/debug/nfs-ganesha-2.7.3/MainNFSD/nfs_worker_thread.c:1328
> #13 0x0000000000450766 in nfs_rpc_decode_request (xprt=0x7f02180c2320,
> xdrs=0x7f03ec568ab0) at
> /usr/src/debug/nfs-ganesha-2.7.3/MainNFSD/nfs_rpc_dispatcher_thread.c:1345
> #14 0x00007f04df45d07d in svc_rqst_xprt_task (wpe=0x7f02180c2538) at
> /usr/src/debug/nfs-ganesha-2.7.3/libntirpc/src/svc_rqst.c:769
> #15 0x00007f04df45d59a in svc_rqst_epoll_events (n_events=<optimized
> out>, sr_rec=0x4bb53e0) at
> /usr/src/debug/nfs-ganesha-2.7.3/libntirpc/src/svc_rqst.c:941
> #16 svc_rqst_epoll_loop (sr_rec=<optimized out>) at
> /usr/src/debug/nfs-ganesha-2.7.3/libntirpc/src/svc_rqst.c:1014
> #17 svc_rqst_run_task (wpe=0x4bb53e0) at
> /usr/src/debug/nfs-ganesha-2.7.3/libntirpc/src/svc_rqst.c:1050
> #18 0x00007f04df465123 in work_pool_thread (arg=0x7f044c0008c0) at
> /usr/src/debug/nfs-ganesha-2.7.3/libntirpc/src/work_pool.c:181
> #19 0x00007f04dda05dd5 in start_thread () from /lib64/libpthread.so.0
> #20 0x00007f04dcb7dead in clone () from /lib64/libc.so.6
>
> Package versions:
>
> nfs-ganesha-2.7.3-0.1.el7.x86_64
> nfs-ganesha-ceph-2.7.3-0.1.el7.x86_64
> libcephfs2-14.2.1-0.el7.x86_64
> librados2-14.2.1-0.el7.x86_64
>
> I notice in my Ceph log I have a bunch of slow requests around the time
> it went down, I'm not sure if it's a symptom of Ganesha segfaulting or
> if it was a contributing factor.
>
> Thanks,
> David
>
>
> _______________________________________________
> Nfs-ganesha-devel mailing list
> Nfs-ganesha-devel(a)lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/nfs-ganesha-devel
>
5 years, 2 months
GPFS LogCrit on lock_op2
by Frank Filz
On the call, I mentioned I would look at bypassing the permission check for
the file owner in the open_func call in fsal_find_fd with open_for_locks.
It turns out there is a difference between FSAL_GPFS and FSAL_VFS.
FSAL_VFS makes the ultimate call to open_by_handle as root, and therefore
even a non-owner of the file has no trouble opening the file read/write.
FSAL_GPFS calls GPFSFSAL_open, which calls fsal_set_credentials, so if the
permissions do not allow read/write when open_for_locks occurs, the file
will instead be opened in the same mode as the OPEN stateid.
I think it would be good to evaluate when GPFSFSAL_open actually needs to be
called, and whether open_func should make a more direct call to
fsal_internal_handle2fd.
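As a standalone illustration of why the root-credential open in FSAL_VFS
sidesteps the problem (this is not Ganesha code; the path, mode, and uids
below are assumptions for the demo, which must be run as root):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/*
 * An open(O_RDWR) done as root succeeds regardless of the file's mode bits,
 * while the same open done under a non-owner uid fails with EACCES.
 * Assumed setup: /tmp/lockfile exists, mode 0600, owned by uid 1000.
 */
int main(void)
{
	const char *path = "/tmp/lockfile";
	int fd;

	fd = open(path, O_RDWR);		/* as root: mode bits not enforced */
	printf("open as root: %s\n", fd >= 0 ? "ok" : strerror(errno));
	if (fd >= 0)
		close(fd);

	if (seteuid(2000) != 0) {		/* switch to an assumed non-owner uid */
		perror("seteuid");
		return 1;
	}
	fd = open(path, O_RDWR);		/* as non-owner: fails with EACCES */
	printf("open as uid 2000: %s\n", fd >= 0 ? "ok" : strerror(errno));
	if (fd >= 0)
		close(fd);
	return 0;
}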
Frank
5 years, 3 months
Announce Push of V2.9-dev.6
by Frank Filz
Branch next
Tag:V2.9-dev.6
Release Highlights
* 3 updates relating to rados-grace
* GPFS: fix alloc_handle to not access uninitialized space
* During IP giveback mark all clients as stale
* cmake: include CheckSymbolExists on cmake-3.15.x
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
Contents:
c41f628 Frank S. Filz V2.9-dev.6
f721168 Kaleb S. KEITHLEY cmake: include CheckSymbolExists on cmake-3.15.x
128be7f Suhrud Patankar During IP giveback mark all clients as stale
9b66e95 Malahal Naineni GPFS: fix alloc_handle to not access unintilaized
space
f1f5be2 Jeff Layton rados_grace: remove superfluous warning in
rados_grace_dump
d3ba007 Jeff Layton ganesha-rados-grace: add --cephconf and --userid
command-line options
146dada Jeff Layton doc: clarify that ganesha-rados-grace does not consult
ganesha.conf
5 years, 3 months
No community call next week
by Frank Filz
Let's skip the community call next week. I realized that everything we will
be doing to get our oldest off to her first day of kindergarten will make it
hard for me to break away for even a shortened call.
Catch you all online (I'll be online within 30 minutes after the normal end
time of the call - i.e. about 8:30 PDT).
Thanks
Frank
5 years, 3 months
request from clients
by Alok Sinha
I am new to Ganesha and filesystems.
I am seeing a problem and need advice.
On a client machine, I fork multiple processes,
each reading a different file in the same directory.
I expect that Ganesha will see the requests for these files
coming in parallel. In the real world, I see that fsal_lookup
sees one file at a time. Is this expected? In Ganesha,
I see that the requests are serialized, while I expect, or rather want,
to force them to be parallel.
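Roughly, the test on the client side looks like this (the mount path, file
names, and process count are placeholders):

#include <fcntl.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork N children, each reading a different file in the same
 * NFS-mounted directory, so the reads should hit the server in parallel. */
int main(void)
{
	char path[64];
	char buf[4096];

	for (int i = 0; i < 8; i++) {
		if (fork() == 0) {
			snprintf(path, sizeof(path), "/mnt/nfs/dir/file%d", i);
			int fd = open(path, O_RDONLY);
			if (fd >= 0) {
				while (read(fd, buf, sizeof(buf)) > 0)
					;
				close(fd);
			}
			_exit(0);
		}
	}
	while (wait(NULL) > 0)
		;
	return 0;
}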
-alok
5 years, 3 months
nfs-ganesha clients freeze, nfs ganesha daemon dies
by Erik Jacobson
Hello. I'm starting a new thread for this problem.
I have a 3x3 Gluster Volume and I'm trying to use Ganesha for NFS
services.
I have enabled the Ganesha NFS server on one of the 9 server nodes.
The volume is being used to host clients with NFS roots.
A long separate thread shows how I got to this point but what works on
the client side is:
RHEL 7.6 aarch64 4.14.0-115.el7a.aarch64
RHEL 7.6 x86_64 3.10.0-957.el7.x86_64
OverlayFS - NFS v4 lowerdir with a TMPFS overlay.
The Ganesha server has
Allow_Numeric_Owners = True;
Only_Numeric_Owners = True;
Disable_ACL = TRUE;
Disable_ACL is required for the aarch64 overlay to properly read
non-root files. (Strangely, however, Disable_ACL must be false for aarch64
if you are using NFS v3.)
The x86_64 node fully boots through full init/systemd startup to the
login prompt.
When I start up the aarch64 node, it gets varying degrees of the way through
boot... then both NFS clients freeze up completely.
Restarting nfs-ganesha gets them going for a moment, then they freeze
again. It turned out that in some cases the nfs-ganesha daemon was still
present during the freeze but no longer serving the nodes. However, the more
common case (and the captured one) is that nfs-ganesha is gone.
I will attach a tarball with a bunch of information on the problem
including the config file I used, debugging logs, and some traces.
Ganesha 2.8.2
- Ganesha, Gluster servers x86_64
Since the aarch64 node causes Ganesha to crash early, and the debug
log can get to 2GB quickly, I set up a test case as follows:
Tracing starts...
- x86_64 fully nfs-root-booted, it comes up fine.
* Actively using nfs for root during tests below
- aarch64 node - boot to the miniroot env (a "fat" initrd that has
more tools and from which we do the NFS mount)
- It stops before switching control to init, so I can run tests like the
ones below.
- cp'd /dev/null to ganesha log here
- started the tcpdump to the problem node
- Ran the following. Ganesha died at 'wc -l'; also notice the
Input/output error on the first attempt:
bash-4.2# bash reset4.sh
+ umount /a
umount: /a: not mounted
+ umount /root_ro_nfs
umount: /root_ro_nfs: not mounted
+ umount /rootfs.rw
+ mount -o ro,nolock 172.23.255.249:/cm_shared/image/images_ro_nfs/rhel76-aarch64-newkernel /root_ro_nfs
+ mount -t tmpfs -o mpol=interleave tmpfs /rootfs.rw
+ mkdir /rootfs.rw/upperdir
+ mkdir /rootfs.rw/work
+ mount -t overlay overlay -o lowerdir=/root_ro_nfs,upperdir=/rootfs.rw/upperdir,workdir=/rootfs.rw/work /a
bash-4.2# chroot /a
chroot: failed to run command '/bin/sh': Input/output error
bash-4.2# chroot /a
sh: no job control in this shell
sh-4.2# ls /usr/bin|wc -l
- When the above froze and ganesha died, I stopped tcpdump and collected
the pieces into a tarball.
See attached.
Erik
5 years, 3 months
Change in ...nfs-ganesha[next]: rados_grace: remove superfluous warning in rados_grace_dump
by Jeff Layton (GerritHub)
Jeff Layton has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/466614 )
Change subject: rados_grace: remove superfluous warning in rados_grace_dump
......................................................................
rados_grace: remove superfluous warning in rados_grace_dump
This routine is only called from ganesha-rados-grace, and when the read
operation fails, the warning just clutters up the output. The caller should be
responsible for handling this error.
Change-Id: I7a522e488750a9ae7610e50756107820c7e5590f
Signed-off-by: Jeff Layton <jlayton(a)redhat.com>
---
M src/support/rados_grace.c
1 file changed, 1 insertion(+), 3 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/14/466614/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/466614
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I7a522e488750a9ae7610e50756107820c7e5590f
Gerrit-Change-Number: 466614
Gerrit-PatchSet: 1
Gerrit-Owner: Jeff Layton <jlayton(a)redhat.com>
Gerrit-MessageType: newchange
5 years, 3 months