lseek gets bad offset from nfs client with ganesha/gluster which supports SEEK
by Kinglong Mee
The latest ganesha/gluster supports SEEK as described in
https://tools.ietf.org/html/draft-ietf-nfsv4-minorversion2-41#section-15.11
From the given sa_offset, find the next data_content4 of type sa_what
in the file. If the server can not find a corresponding sa_what,
then the status will still be NFS4_OK, but sr_eof would be TRUE. If
the server can find the sa_what, then the sr_offset is the start of
that content. If the sa_offset is beyond the end of the file, then
SEEK MUST return NFS4ERR_NXIO.
For a file with the following filemap:
Part 1: HOLE 0x0000000000000000 ---> 0x0000000000600000
Part 2: DATA 0x0000000000600000 ---> 0x0000000000700000
Part 3: HOLE 0x0000000000700000 ---> 0x0000000001000000
SEEK(0x700000, SEEK_DATA) gets result (sr_eof:1, sr_offset:0x70000) from ganesha/gluster;
SEEK(0x700000, SEEK_HOLE) gets result (sr_eof:0, sr_offset:0x70000) from ganesha/gluster.
If an application depends on the lseek result to search for data, it may enter an infinite loop:
while (1) {
    next_pos = lseek(fd, cur_pos, seek_type);
    if (seek_type == SEEK_DATA) {
        seek_type = SEEK_HOLE;
    } else {
        seek_type = SEEK_DATA;
    }
    if (next_pos == -1)
        return;
    cur_pos = next_pos;
}
The lseek syscall always returns 0x70000 from the nfs client for those two cases,
but if the underlying filesystem is ext4/f2fs, or the nfs server is knfsd,
lseek(0x700000, SEEK_DATA) fails with ENXIO.
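For reference, a minimal userspace reproducer might look like the sketch below; the mount point and file name are made up, and the file is assumed to have the filemap shown above.

#define _GNU_SOURCE             /* for SEEK_DATA/SEEK_HOLE */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/mnt/nfs/testfile", O_RDONLY);  /* hypothetical path */
    off_t pos;

    if (fd < 0) {
        perror("open");
        return 1;
    }
    errno = 0;
    pos = lseek(fd, 0x700000, SEEK_DATA);
    if (pos == (off_t)-1)
        printf("SEEK_DATA: %s\n", strerror(errno));              /* knfsd/ext4/f2fs: ENXIO */
    else
        printf("SEEK_DATA: 0x%llx\n", (unsigned long long)pos);  /* ganesha/gluster: an offset */
    close(fd);
    return 0;
}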
I want to know:
Should I fix ganesha/gluster to return ENXIO, as knfsd does, for the first case?
Or should I fix the nfs client to return ENXIO for the first case?
thanks,
Kinglong Mee
4 years, 3 months
Re: Better interop for NFS/SMB file share mode/reservation
by J. Bruce Fields
On Tue, Mar 05, 2019 at 04:47:48PM -0500, J. Bruce Fields wrote:
> On Thu, Feb 14, 2019 at 04:06:52PM -0500, J. Bruce Fields wrote:
> > After this:
> >
> > https://marc.info/?l=linux-nfs&m=154966239918297&w=2
> >
> > delegations would no longer conflict with opens from the same tgid. So
> > if your threads all run in the same process and you're willing to manage
> > conflicts among your own clients, that should still allow you to do
> > multiple opens of the same file without giving up your lease/delegation.
> >
> > I'd be curious to know whether that works with Samba's design.
>
> Any idea whether that would work?
>
> (Easy? Impossible? Possible, but realistically the changes required to
> Samba would be painful enough that it'd be unlikely to get done?)
Volker reminds me off-list that he'd like to see Ganesha and Samba work
out an API in userspace first before committing to a user<->kernel API.
Jeff, wasn't there some work (on Ceph maybe?) on a userspace delegation
API? Is that close to what's needed?
In any case, my immediate goal is just to get knfsd fixed, which doesn't
really commit us to anything--knfsd only needs kernel internal
interfaces. But it'd be nice to have at least some idea if we're on the
right track, to save having to redo that work later.
--b.
5 years, 7 months
FW: A question about ganesha drc can not be free
by Frank Filz
Forwarding to devel(a)lists.nfs-ganesha.org so others can see also.
Frank
From: 方媛 [mailto:fang_yuan1004@163.com]
Sent: Friday, March 29, 2019 1:46 AM
To: malahal(a)us.ibm.com; ffilzlnx(a)mindspring.com
Cc: yanhuan(a)bwstor.com.cn
Subject: A question about ganesha drc can not be free
I have a question about the ganesha drc ref count; can you help me? Thank you.
I have read the latest ganesha git code. For the NFSv4.0 case, I find that the drc ref count will not decrease to 0, so the drc cannot be inserted into the tcp_drc_recycle_q. I wonder how an expired drc can then be freed.
1. At the beginning of request processing, nfs_dupreq_start calls nfs_dupreq_get_drc; either a new drc is allocated or an old drc is reused. The drc ref count is set to 0 at first and then incremented to 1, as below:
/* Avoid double reference of drc,
 * setting xp_u2 under DRC_ST_LOCK */
req->rq_xprt->xp_u2 = (void *)drc;
(void)nfs_dupreq_ref_drc(drc);	/* xprt ref */
DRC_ST_UNLOCK();
drc->d_u.tcp.recycle_time = 0;
Then, at the end of nfs_dupreq_get_drc, the drc ref count is incremented to 2, as below:
/* call path ref */
(void)nfs_dupreq_ref_drc(drc);
PTHREAD_MUTEX_unlock(&drc->mtx);
if (drc_check_expired)
	drc_free_expired();
2. After request processing finishes, the drc ref count is not decremented by 1: nfs_dupreq_finish does not drop the ref count every time; only when drc->size > drc->maxsize does it call nfs_dupreq_put_drc to decrease the ref count.
	/* conditionally retire entries */
dq_again:
	if (drc_should_retire(drc)) { /* if drc->size < drc->maxsize, nfs_dupreq_put_drc is not called to decrease the drc ref count */
		......
		TAILQ_REMOVE(&drc->dupreq_q, ov, fifo_q);
		--(drc->size);
		/* release dv's ref */
		nfs_dupreq_put_drc(drc, DRC_FLAG_LOCKED);
		/* drc->mtx gets unlocked in the above call! */
		rbtree_x_cached_remove(&drc->xt, t, &ov->rbt_k, ov->hk);
	}
So the drc ref count may never drop back to 1.
3. Then, when the client unmounts and the connection is destroyed, the drc ref count does not drop to 0:
static enum xprt_stat nfs_rpc_free_user_data(SVCXPRT *xprt)
{
	if (xprt->xp_u2) {
		nfs_dupreq_put_drc(xprt->xp_u2, DRC_FLAG_RELEASE); /* drc ref count is still more than 1 */
		xprt->xp_u2 = NULL;
	}
	return XPRT_DESTROYED;
}
So the drc cannot be inserted into the tcp_drc_recycle_q; how, then, is an expired drc freed?
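To make the ref-count arithmetic above concrete, here is a small standalone sketch; it is not ganesha code, it just replays the counts described in steps 1-3:

#include <stdio.h>

int main(void)
{
	int refcnt = 0;

	/* step 1: nfs_dupreq_get_drc */
	refcnt++;	/* xprt ref stored in xp_u2 -> 1 */
	refcnt++;	/* call path ref            -> 2 */

	/* step 2: nfs_dupreq_finish -- no put unless drc_should_retire()
	 * returns true (drc->size > drc->maxsize); assume no put here */

	/* step 3: nfs_rpc_free_user_data on disconnect */
	refcnt--;	/*                           -> 1 */

	printf("drc ref count after disconnect: %d (never reaches 0)\n", refcnt);
	return 0;
}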
I will be waiting for your reply, thanks very much.
5 years, 9 months
Client owner collisions
by Sriram Patil
Hi,
The client owner generation logic in the Linux kernel NFS client is very simple and differentiates only on the basis of NFS version and hostname. Here's a snippet:
scnprintf(str, len, "Linux NFSv%u.%u %s",
	  clp->rpc_ops->version, clp->cl_minorversion,
	  clp->cl_rpcclient->cl_nodename);
clp->cl_owner_id = str;
This causes a problem in ganesha when multiple clients connect with the same hostname and the same NFS version. When comparing client owners, ganesha only takes the client owner string sent by the NFS client into consideration. If we instead considered both the client owner and the client IP address when deciding whether it is the same client before expiring state, ganesha would treat such clients as different clients and avoid unnecessary expiry.
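To make the collision concrete, here is a small userspace sketch (not kernel code) that builds the owner string the same way the snippet above does, for two different machines that happen to share a hostname:

#include <stdio.h>
#include <string.h>

int main(void)
{
	char owner_a[64], owner_b[64];

	/* client A: NFSv4.1, nodename "node1" */
	snprintf(owner_a, sizeof(owner_a), "Linux NFSv%u.%u %s", 4u, 1u, "node1");
	/* client B: a different machine, also NFSv4.1, also named "node1" */
	snprintf(owner_b, sizeof(owner_b), "Linux NFSv%u.%u %s", 4u, 1u, "node1");

	if (strcmp(owner_a, owner_b) == 0)
		printf("collision: both clients present owner \"%s\"\n", owner_a);
	return 0;
}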
Any thoughts?
Thanks,
Sriram
5 years, 9 months
Announce Push of V2.8-dev.23
by Frank Filz
Branch next
Tag:V2.8-dev.23
Release Highlights
* FSAL_VFS: Freeing FSAL handle when unable to get fs locations
* Fix the mounted_on_fileid attribute for referral directories
* Switch to python3-sphinx on Fedora
* FSAL_GLUSTER: fill acl attribute at lookup and readdir when request
* FSAL_GLUSTER: freeing sec label xattr pointer
* FSAL_CEPH: don't abort the connection if we're deleting the export
* build: (lib)ganesha_nfsd as a DSO
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
Contents:
74222f6 Frank S. Filz V2.8-dev.23
5db5467 Kaleb S. KEITHLEY build: (lib)ganesha_nfsd as a DSO
0d62e92 Jeff Layton FSAL_CEPH: don't abort the connection if we're deleting the export
231f6a7 Arjun Sharma FSAL_GLUSTER: freeing sec label xattr pointer
d2619ce Kinglong Mee FSAL_GLUSTER: fill acl attribute at lookup and readdir when request
0c4688c Daniel Gryniewicz Switch to python3-sphinx on Fedora
609f26e Sriram Patil Fix the mounted_on_fileid attribute for referral directories
0c440e2 Sriram Patil FSAL_VFS: Freeing FSAL handle when unable to get fs locations
5 years, 9 months
nautilus ceph-container images have a nfs-ganesha that is too old
by Jeff Layton
We have support for an NFS gateway over cephfs in nautilus, but we're
currently building nfs-ganesha packages from the V2.7-stable branch.
That branch lacks quite a few patches that will be in V2.8 that we
require, but some of those patches are probably a bit much to take in
the upstream V2.7-stable branch.
I'd like to change the nfs-ganesha-stable builds for nautilus to pull
from a branch in our own fork, at least temporarily. We already have a
fork of nfs-ganesha in the ceph github organization. I can push a
branch there that would be suitable.
Would anyone have objections to doing that?
--
Jeff Layton <jlayton(a)redhat.com>
5 years, 9 months
Change in ...nfs-ganesha[next]: Remove unused openflags
by Malahal (GerritHub)
Malahal has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/449537 )
Change subject: Remove unused openflags
......................................................................
Remove unused openflags
Change-Id: Id2d2573acf82f555c093041a7f82dc45520855d2
Signed-off-by: Malahal Naineni <malahal(a)us.ibm.com>
---
M src/Protocols/NFS/nfs4_op_open.c
M src/Protocols/NLM/nlm_Share.c
M src/Protocols/NLM/nlm_Unshare.c
M src/SAL/state_lock.c
M src/SAL/state_share.c
M src/include/fsal_types.h
M src/include/sal_functions.h
7 files changed, 5 insertions(+), 15 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/37/449537/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/449537
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: Id2d2573acf82f555c093041a7f82dc45520855d2
Gerrit-Change-Number: 449537
Gerrit-PatchSet: 1
Gerrit-Owner: Malahal <malahal(a)gmail.com>
Gerrit-MessageType: newchange
5 years, 9 months
Change in ...nfs-ganesha[next]: FSAL_CEPH: don't abort the connection if we're deleting the export
by Jeff Layton (GerritHub)
Jeff Layton has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/449459 )
Change subject: FSAL_CEPH: don't abort the connection if we're deleting the export
......................................................................
FSAL_CEPH: don't abort the connection if we're deleting the export
We only want to abort the connection if we're tearing down the export
during a clean shutdown. Otherwise, when doing a deliberate unexport,
we'll end up leaving the client hanging out there until it gets
blacklisted.
Change-Id: I836fd85525e1a1ef57140f150639d948bbcdcc8b
Reported-by: Ricardo Dias <rdias(a)suse.com>
Signed-off-by: Jeff Layton <jlayton(a)redhat.com>
---
M src/FSAL/FSAL_CEPH/export.c
1 file changed, 6 insertions(+), 5 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/59/449459/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/449459
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I836fd85525e1a1ef57140f150639d948bbcdcc8b
Gerrit-Change-Number: 449459
Gerrit-PatchSet: 1
Gerrit-Owner: Jeff Layton <jlayton(a)redhat.com>
Gerrit-MessageType: newchange
5 years, 9 months
Change in ...nfs-ganesha[next]: rpc_callback: don't issue CB_NULL on new v4.1 channels
by Jeff Layton (GerritHub)
Jeff Layton has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/449458 )
Change subject: rpc_callback: don't issue CB_NULL on new v4.1 channels
......................................................................
rpc_callback: don't issue CB_NULL on new v4.1 channels
This is generally not necessary for v4.1, and it causes a lot of extra
chatter when running pynfs against the server.
Change-Id: I6b8e8434b579e35e6532798081af10d5a290b0f9
Signed-off-by: Jeff Layton <jlayton(a)redhat.com>
---
M src/MainNFSD/nfs_rpc_callback.c
1 file changed, 1 insertion(+), 8 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/58/449458/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/449458
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I6b8e8434b579e35e6532798081af10d5a290b0f9
Gerrit-Change-Number: 449458
Gerrit-PatchSet: 1
Gerrit-Owner: Jeff Layton <jlayton(a)redhat.com>
Gerrit-MessageType: newchange
5 years, 9 months