lseek gets bad offset from nfs client with ganesha/gluster which supports SEEK
by Kinglong Mee
The latest ganesha/gluster supports SEEK according to
https://tools.ietf.org/html/draft-ietf-nfsv4-minorversion2-41#section-15.11
From the given sa_offset, find the next data_content4 of type sa_what
in the file. If the server can not find a corresponding sa_what,
then the status will still be NFS4_OK, but sr_eof would be TRUE. If
the server can find the sa_what, then the sr_offset is the start of
that content. If the sa_offset is beyond the end of the file, then
SEEK MUST return NFS4ERR_NXIO.
For a file whose filemap is:
Part 1: HOLE 0x0000000000000000 ---> 0x0000000000600000
Part 2: DATA 0x0000000000600000 ---> 0x0000000000700000
Part 3: HOLE 0x0000000000700000 ---> 0x0000000001000000
SEEK(0x700000, SEEK_DATA) gets result (sr_eof:1, sr_offset:0x700000) from ganesha/gluster;
SEEK(0x700000, SEEK_HOLE) gets result (sr_eof:0, sr_offset:0x700000) from ganesha/gluster.
If an application depends on the lseek result to search for data, it may enter an infinite loop:
while (1) {
	next_pos = lseek(fd, cur_pos, seek_type);
	if (seek_type == SEEK_DATA) {
		seek_type = SEEK_HOLE;
	} else {
		seek_type = SEEK_DATA;
	}
	if (next_pos == -1)
		return;
	cur_pos = next_pos;
}
The lseek syscall always gets 0x700000 back from the nfs client for those two cases;
but if the underlying filesystem is ext4/f2fs, or the nfs server is knfsd,
lseek(0x700000, SEEK_DATA) fails with ENXIO.
I want to know:
should I fix ganesha/gluster to return ENXIO for the first case, as knfsd does?
or should I fix the nfs client to return ENXIO for the first case?
thanks,
Kinglong Mee
IO Error on NFS v4.0 but not on v4.1
by Boris Faure
Hi everyone,
I'm running a simple test with ganesha-next (but the issue exists at least
in latest V2.5.5-stable):
$ sudo mount -t nfs -o v4.0,users,nolock,soft,timeo=2,retrans=3 127.0.0.1:/ /mnt/nfs
$ dd if=/dev/zero of=/mnt/nfs/dump/pw.1 bs=1024k count=2048 conv=fdatasync
dd: error writing '/mnt/nfs/dump/pw.1': Input/output error
1616+0 records in
1615+0 records out
1693450240 bytes (1,7 GB, 1,6 GiB) copied, 8,28076 s, 205 MB/s
$ sudo mount -t nfs -o users,nolock,soft,timeo=2,retrans=3 127.0.0.1:/ /mnt/nfs
$ dd if=/dev/zero of=/mnt/nfs/dump/pw.1 bs=1024k count=2048 conv=fdatasync
2048+0 records in
2048+0 records out
2147483648 bytes (2,1 GB, 2,0 GiB) copied, 6,27708 s, 342 MB/s
The second mount is using nfs v4.1.
I don't have anything special in my configuration.
If I do the dd with a blocksize of 512k, it works fine.
If I set rsize/wsize to 524288 as mount options, the dd with bs=1024k still
fails.
The only log line I get is the following:
25/09/2018 13:47:25 : epoch 5baa2036 : caipirinha : ganesha.nfsd-2381[svc_182] rpc :TIRPC :EVENT :svc_ioq_flushv() writev failed (104)
Thanks for your help.
Best Regards
--
Boris Faure
Software Engineer
Change in ffilz/nfs-ganesha[next]: FSAL_GLUSTER: set "cache-posix-acl" option when export supports acl
by GerritHub
From Kinglong Mee <kinglongmee(a)gmail.com>:
Kinglong Mee has uploaded this change for review. ( https://review.gerrithub.io/427305
Change subject: FSAL_GLUSTER: set "cache-posix-acl" option when export supports acl
......................................................................
FSAL_GLUSTER: set "cache-posix-acl" option when export supports acl
Change-Id: I9d1b9fb22a30b44f1dbaa90e1f7e5f342a6bb0f3
Signed-off-by: Kinglong Mee <mijinlong(a)open-fs.com>
---
M src/FSAL/FSAL_GLUSTER/export.c
1 file changed, 11 insertions(+), 0 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/05/427305/1
--
To view, visit https://review.gerrithub.io/427305
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: I9d1b9fb22a30b44f1dbaa90e1f7e5f342a6bb0f3
Gerrit-Change-Number: 427305
Gerrit-PatchSet: 1
Gerrit-Owner: Kinglong Mee <kinglongmee(a)gmail.com>
Change in ffilz/nfs-ganesha[next]: namespace cleanup: some poorly named and/or non-static symbols
by GerritHub
From <kaleb(a)redhat.com>:
kaleb(a)redhat.com has uploaded this change for review. ( https://review.gerrithub.io/427287
Change subject: namespace cleanup: some poorly named and/or non-static symbols
......................................................................
namespace cleanup: some poorly named and/or non-static symbols
In Gluster we are seeing a collision between the variable named 'options'
in the gluster xlators and the variable named 'options' in ganesha.nfsd.
ISTR a recent change made in Fedora to the runtime linker that affects this, but
I am unable to find a reference to the discussion at this time. It is
unclear to me whether this is a side effect of the introduction of the
-z defs linker flag in Fedora 28. This probably affects RHEL 8 too.
In a nutshell, references within a gluster xlator .so used to always be
resolved to the "closest" definition and/or resolved at static link time;
either way to the one in the xlator .so.
Now though we see them resolved at runtime to the first definition of
the symbol; in this case the one in ganesha.nfsd.
In addition to 'options' there are a number of other variables in
ganesha.nfsd that a) should be static, or b) should be renamed so as
not to pollute the namespace. I picked a few that stood out. There may
be others.
Note too that gluster is working to clean up its own scribbling in the
namespace. And, e.g., 'options' is exported from the xlator .so files
so that each xlator's options can be enumerated for display.
Signed-off-by: Kaleb S. KEITHLEY <kkeithle(a)redhat.com>
Change-Id: I0a5893ac4d54fce84e71f38997e1370f39d42ad3
---
M src/FSAL/FSAL_PROXY/handle.c
M src/MainNFSD/9p_dispatcher.c
M src/MainNFSD/9p_rdma_callbacks.c
M src/MainNFSD/nfs_admin_thread.c
M src/MainNFSD/nfs_init.c
M src/MainNFSD/nfs_lib.c
M src/MainNFSD/nfs_main.c
M src/MainNFSD/nfs_rpc_callback.c
M src/MainNFSD/nfs_rpc_dispatcher_thread.c
M src/MainNFSD/nfs_worker_thread.c
M src/Protocols/NFS/nfs4_Compound.c
M src/SAL/nfs4_clientid.c
M src/SAL/nfs4_state_id.c
M src/include/nfs_core.h
M src/include/nfs_lib.h
M src/log/log_functions.c
M src/support/client_mgr.c
M src/support/export_mgr.c
M src/support/server_stats.c
19 files changed, 114 insertions(+), 109 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/87/427287/1
--
To view, visit https://review.gerrithub.io/427287
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: I0a5893ac4d54fce84e71f38997e1370f39d42ad3
Gerrit-Change-Number: 427287
Gerrit-PatchSet: 1
Gerrit-Owner: Anonymous Coward <kaleb(a)redhat.com>
Change in ffilz/nfs-ganesha[next]: FSAL_GLUSTER: for seek2, offset == filesize is "beyond the end of file"
by GerritHub
From Frank Filz <ffilzlnx(a)mindspring.com>:
Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/427278
Change subject: FSAL_GLUSTER: for seek2, offset == filesize is "beyond the end of file"
......................................................................
FSAL_GLUSTER: for seek2, offset == filesize is "beyond the end of file"
Change-Id: I5df7fdd166f1ec929033717c11575a65f7a8c9f2
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
---
M src/FSAL/FSAL_GLUSTER/handle.c
1 file changed, 1 insertion(+), 1 deletion(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/78/427278/1
--
To view, visit https://review.gerrithub.io/427278
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: I5df7fdd166f1ec929033717c11575a65f7a8c9f2
Gerrit-Change-Number: 427278
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz <ffilzlnx(a)mindspring.com>
Scalability issue with VFS FSAL and large amounts of read i/o in flight
by Kropelin, Adam
Hello,
I am observing a scalability issue with recent-ish versions of nfs-ganesha (including -next) when NFS clients have a significant amount of in-flight read requests.
My test setup has a ganesha server with a single export on the VFS FSAL. I have multiple Linux clients, all mounting that export with NFSv4.0. On the clients I run a simple read workload using dd: 'dd if=/mnt/test/testfile of=/dev/null bs=1M'. All clients read the same 1 GB file. Each client is bandwidth-limited to 1 Gbps while the server has 10 Gbps available. A single client achieves ~100 MB/sec. Adding a second client brings the aggregate throughput up to ~120 MB/sec. A third client gets the aggregate to ~130 MB/sec, and it pretty much plateaus at that point. Clearly this is well below the aggregate bandwidth the server is capable of.
Additionally, and this is the behavior that made me originally discover this issue in production, while the clients are performing their read test, the server becomes extremely slow to respond to mount requests. By "extremely slow" I mean it takes 60 seconds or more to perform a simple mount while 8 clients are running the read test.
I've ruled out external bottlenecks -- disk i/o on the server is essentially zero during the test (as would be expected since that 1 GB file will most certainly be in page cache). The server shows no significant CPU load at all. Using the in-kernel NFS server with the same clients I can easily saturate the 10 Gbps network link from 8-10 clients with no effect on mount times, so the network is not a bottleneck here.
Other things of interest:
* -next and V2.5 both exhibit the issue, but V2.2 does not
* By observation on the wire I see that the Linux NFS client is submitting 16 or more 1 MB READ RPCs at once. If I prevent that behavior by adding 'iflag=direct' to the dd command, suddenly scalability is back where it should be. Something about having a lot of read i/o in flight seems to matter here.
* I grabbed several core dumps of ganesha during a period where 8 clients were hitting it. Every single thread is idle (typically pthread_cond_wait'ing for some work) except for one rpc worker which is in writev. This is true repeatedly throughout the test. It is as if somehow a single rpc worker thread is doing all of the network i/o to every client.
Thanks in advance for any ideas...
--Adam
Change in ffilz/nfs-ganesha[next]: Usage: fix typo '<epoch<'
by GerritHub
From <zhbingyin(a)sina.com>:
zhbingyin(a)sina.com has uploaded this change for review. ( https://review.gerrithub.io/427181
Change subject: Usage: fix typo '<epoch<'
......................................................................
Usage: fix typo '<epoch<'
Change-Id: Ifade2d3c60b1f6aebb14571c18ece25fd68d2a03
Signed-off-by: Bingyin Zhang <zhbingyin(a)sina.com>
---
M src/MainNFSD/nfs_main.c
1 file changed, 1 insertion(+), 1 deletion(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/81/427181/1
--
To view, visit https://review.gerrithub.io/427181
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ifade2d3c60b1f6aebb14571c18ece25fd68d2a03
Gerrit-Change-Number: 427181
Gerrit-PatchSet: 1
Gerrit-Owner: Anonymous Coward <zhbingyin(a)sina.com>
Change in ffilz/nfs-ganesha[next]: FSAL_GLUSTER: Drop redundant information in construct_handle
by GerritHub
From <zhbingyin(a)sina.com>:
zhbingyin(a)sina.com has uploaded this change for review. ( https://review.gerrithub.io/427177
Change subject: FSAL_GLUSTER: Drop redundant information in construct_handle
......................................................................
FSAL_GLUSTER: Drop redundant information in construct_handle
Drop the unused argument 'len' and the variable 'buffxstat'.
Change-Id: Ib235152331ead24cf1f27446bc91305a45bded15
Signed-off-by: Bingyin Zhang <zhbingyin(a)sina.com>
---
M src/FSAL/FSAL_GLUSTER/export.c
M src/FSAL/FSAL_GLUSTER/gluster_internal.c
M src/FSAL/FSAL_GLUSTER/gluster_internal.h
M src/FSAL/FSAL_GLUSTER/handle.c
4 files changed, 10 insertions(+), 15 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/77/427177/1
--
To view, visit https://review.gerrithub.io/427177
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ib235152331ead24cf1f27446bc91305a45bded15
Gerrit-Change-Number: 427177
Gerrit-PatchSet: 1
Gerrit-Owner: Anonymous Coward <zhbingyin(a)sina.com>
Change in ffilz/nfs-ganesha[next]: MDCACHE: fallocate needs to invalidate attributes
by GerritHub
From Frank Filz <ffilzlnx(a)mindspring.com>:
Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/427167
Change subject: MDCACHE: fallocate needs to invalidate attributes
......................................................................
MDCACHE: fallocate needs to invalidate attributes
Since this may affect filesize (and will affect space used), we
need to invalidate attributes so they will be refreshed.
Change-Id: I3c0a9ea01c3b3c2e6341c6a6d286564dc0a0c83d
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
---
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_file.c
1 file changed, 2 insertions(+), 0 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/67/427167/1
--
To view, visit https://review.gerrithub.io/427167
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: I3c0a9ea01c3b3c2e6341c6a6d286564dc0a0c83d
Gerrit-Change-Number: 427167
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz <ffilzlnx(a)mindspring.com>
Change in ffilz/nfs-ganesha[next]: FSAL_GLUSTER: Allow seek2 on any open file not just open for read
by GerritHub
From Frank Filz <ffilzlnx(a)mindspring.com>:
Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/427154
Change subject: FSAL_GLUSTER: Allow seek2 on any open file not just open for read
......................................................................
FSAL_GLUSTER: Allow seek2 on any open file not just open for read
Change-Id: Ic2666e0c96cf71224be4af06f28c5d9719cabcf6
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
---
M src/FSAL/FSAL_GLUSTER/handle.c
1 file changed, 1 insertion(+), 1 deletion(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/54/427154/1
--
To view, visit https://review.gerrithub.io/427154
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ic2666e0c96cf71224be4af06f28c5d9719cabcf6
Gerrit-Change-Number: 427154
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz <ffilzlnx(a)mindspring.com>