[S] Change in ...nfs-ganesha[next]: docs: Switch GitHub git URLs to https
by Martin Schwenke (GerritHub)
Martin Schwenke has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/550318 )
Change subject: docs: Switch GitHub git URLs to https
......................................................................
docs: Switch GitHub git URLs to https
GitHub no longer appears to support git: URLs for git access.
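For example, a clone URL in the docs such as git://github.com/nfs-ganesha/nfs-ganesha.git would presumably become https://github.com/nfs-ganesha/nfs-ganesha.git.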
Signed-off-by: Martin Schwenke <mschwenke(a)ddn.com>
Change-Id: I01212e2177fe3e2fc51e76940291290e4ed41b0b
---
M src/COMPILING_HOWTO.txt
M src/CONTRIBUTING_HOWTO.txt
2 files changed, 15 insertions(+), 3 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/18/550318/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/550318
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I01212e2177fe3e2fc51e76940291290e4ed41b0b
Gerrit-Change-Number: 550318
Gerrit-PatchSet: 1
Gerrit-Owner: Martin Schwenke <martin(a)meltin.net>
Gerrit-MessageType: newchange
Announce Push of V5-dev.2
by Frank Filz
Branch next
Tag: V5-dev.2
Merge Highlights
* Add Prometheus and Grafana monitoring stack.
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
Contents:
7445b0ef6 Frank S. Filz V5-dev.2
b6c59df5a Bjorn Leffler Add Prometheus and Grafana monitoring stack.
lock failure
by Alok Sinha
We are hitting the lock failure below with version 2.8.3. What could be the problem?
spillbox-nfs-server-01 :
ganesha.nfsd.regress-vcsregrfeb22-nfs-server-19310[svc_640] nfs4_op_lock
:NFS4 LOCK :EVENT :LOCK failed to create new lock owner Lock:
obj=0x2b8a9406f3b8, fileid=83243891, type=READ , start=0x0,
end=0xffffffffffffffff, owner={STATE_OPEN_OWNER_NFSV4 0x2b89b0387530:
clientid={0x2b893c30cf90 ClientID={Epoch=0x63f5d208 Counter=0x000000b8}
CONFIRMED Client={0x2b893c067760 name=(39:Linux NFSv4.0 spillbox-33/
10.178.28.118) refcount=1} t_delta=0 reservations=1 refcount=10
cb_prog=1073741824 r_addr=10.178.28.54.177.211 r_netid=tcp}
owner=(24:0x6f70656e2069643a00000042000000000002b0d23b581e0d) confirmed=1
seqid=125711 refcount=130}
23/02/2023 18:08:28 : epoch 63f5d208 : spillbox-nfs-server-01 :
ganesha.nfsd.regress-vcsregrfeb22-nfs-server-19310[svc_628]
create_nfs4_owner :NFS4 LOCK :CRIT :Related {STATE_OPEN_OWNER_NFSV4
0x2b89f02a7f80: clientid={0x2b8930343fb0 ClientID={Epoch=0x63f5d208
Counter=0x000000c9} CONFIRMED Client={0x2b89300bd880 name=(39:Linux NFSv4.0
spillbox-80/10.178.28.118) refcount=1} t_delta=0 reservations=1 refcount=19
cb_prog=1073741824 r_addr=10.178.28.23.155.163 r_netid=tcp}
owner=(24:0x6f70656e2069643a00000042000000000002b1a675f4638c) confirmed=1
seqid=127246 refcount=121} doesn't match for {STATE_LOCK_OWNER_NFSV4
0x2b89783ea1c0: clientid={0x2b8930343fb0 ClientID={Epoch=0x63f5d208
Counter=0x000000c9} CONFIRMED Client={0x2b89300bd880 name=(39:Linux NFSv4.0
spillbox-80/10.178.28.118) refcount=1} t_delta=0 reservations=1 refcount=19
cb_prog=1073741824 r_addr=10.178.28.23.155.163 r_netid=tcp}
owner=(20:0x6c6f636b2069643a000000420000000000000000) confirmed=1 seqid=0
related_owner={STATE_OPEN_OWNER_NFSV4 0x2b88c81bada0:
clientid={0x2b8930343fb0 ClientID={Epoch=0x63f5d208 Counter=0x000000c9}
CONFIRMED Client={0x2b89300bd880 name=(39:Linux NFSv4.0 spillbox-80/
10.178.28.118) refcount=1} t_delta=0 reservations=1 refcount=19
cb_prog=1073741824 r_addr=10.178.28.23.155.163 r_netid=tcp}
owner=(24:0x6f70656e2069643a00000042000000000002af455610f805) confirmed=1
seqid=109320 refcount=187} refcount=3}
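For reference, the owner=(len:0x...) hex strings in those messages decode to readable prefixes. A small standalone helper (my own snippet, not Ganesha code) shows this:

#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Decode one of the owner=(len:0x...) hex dumps from the log above,
 * printing printable bytes as-is and everything else as \xNN. */
static void decode_owner(const char *hex)
{
	size_t i, len = strlen(hex);

	for (i = 0; i + 1 < len; i += 2) {
		unsigned int byte;

		if (sscanf(hex + i, "%2x", &byte) != 1)
			break;
		if (isprint(byte))
			putchar(byte);
		else
			printf("\\x%02x", byte);
	}
	putchar('\n');
}

int main(void)
{
	/* Owner values copied from the log above, leading "0x" stripped. */
	decode_owner("6f70656e2069643a00000042000000000002b0d23b581e0d");
	decode_owner("6c6f636b2069643a000000420000000000000000");
	return 0;
}

The first decodes to an "open id:..." open owner and the second to a "lock id:..." lock owner, which matches the STATE_OPEN_OWNER_NFSV4 and STATE_LOCK_OWNER_NFSV4 labels in the messages above.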
--
Alok Sinha
www.spillbox.io
https://www.youtube.com/watch?v=g3RujrjlZIY
NFSv3 CREATE op is breaking RFC1813
by Sagar Singh
NFSv3 CREATE is returning NFS3ERR_PERM on a permission error, which seems to
break the RFC, since the complete set of errors RFC 1813 allows CREATE to return is:
ERRORS
NFS3ERR_IO
NFS3ERR_ACCES
NFS3ERR_EXIST
NFS3ERR_NOTDIR
NFS3ERR_NOSPC
NFS3ERR_ROFS
NFS3ERR_NAMETOOLONG
NFS3ERR_DQUOT
NFS3ERR_STALE
NFS3ERR_BADHANDLE
NFS3ERR_NOTSUPP
NFS3ERR_SERVERFAULT
We should have returned NFS3ERR_ACCES instead of NFS3ERR_PERM.
For NFSv4 CREATE, on the other hand, NFS4ERR_PERM is valid.
What are your thoughts?
Patch introducing this breakage:
https://github.com/nfs-ganesha/nfs-ganesha/commit/edba4a5c1be0178c0c32e9c...
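To illustrate the remapping I have in mind, something along these lines (a hypothetical standalone sketch with made-up names, not the actual Ganesha code or error path):

#include <stdio.h>

/* Status values as defined by RFC 1813. */
enum nfsstat3 { NFS3_OK = 0, NFS3ERR_PERM = 1, NFS3ERR_ACCES = 13 };

/* Hypothetical per-op remap: RFC 1813 does not list NFS3ERR_PERM among
 * the errors CREATE may return, so report a permission failure as
 * NFS3ERR_ACCES instead. */
static enum nfsstat3 create_status_remap(enum nfsstat3 status)
{
	return status == NFS3ERR_PERM ? NFS3ERR_ACCES : status;
}

int main(void)
{
	/* A permission failure would go out as 13 (NFS3ERR_ACCES), not 1. */
	printf("%d\n", create_status_remap(NFS3ERR_PERM));
	return 0;
}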
Thanks,
Sagar Singh
Open FD Count does not decrease & new file creation fails
by Ankith Acharya
Hello
I have a long-running test which keeps failing after 2-3 days. There is a workload which runs fine for 2-3 days, and then the Ganesha process completely stops serving new file creation. Mounting, unmounting, directory listing and file reads all still work; only new file creation fails, and it keeps failing until the process is restarted. I have waited a further 2-3 days to see whether the process recovers on its own, but it does not.
Ganesha version : 3.5
Client : Windows 2016
NFS version : 3
VFS : custom
Workload
Concurrent access from multiple threads on a single client. One thread continuously (in a loop) runs a Python os.walk (i.e., readdir) over the entire filesystem, roughly ~5M files in total.
The client also runs a robocopy command that copies a folder over to the NFS share with 10 threads. The folder contains between 1 and 1000 files of random sizes between 1 KB and 50 MB. When the writes complete, a single thread verifies the written content and then deletes it, after which the robocopy starts again.
Debugging
- I looked at the logs and in the debugger: file creation fails because `mdcache_lru_fds_available` evaluates to false every time. (https://github.com/nfs-ganesha/nfs-ganesha/blob/V3.5/src/FSAL/Stackable_F...)
- I checked the "lru_state" variable and "open_fd_count" has hit the limit, so Ganesha sends an `ERR_FSAL_DELAY` response back each time (see the sketch after this list).
(gdb) p lru_state
$1 = {entries_hiwat = 500000, entries_used = 500000, entries_release_size = 100, chunks_hiwat = 25000, chunks_used = 610, fds_system_imposed = 400000, fds_hard_limit = 396000, fds_hiwat = 360000, fds_lowat = 200000,
futility = 1, per_lane_work = 50, biggest_window = 160000, prev_fd_count = 396000, prev_time = 1676657325, fd_state = 3}
- However, this situation continues indefinitely. The "lru_thread" is actively trying to reap, but it is not making any progress; the debug logs below show that. I could probably increase the open FD limits, but it feels like it would just hit the new limit again after a while.
lru_run :INODE LRU :F_DBG :formeropen=396000 totalwork=0 workpass=37 totalclosed:0
Actually processed 6 entries on lane 8 closing 0 descriptors
- Even unmounting the client does not help.
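For context, this is roughly the shape of the check that fails: once the global open FD count reaches the hard limit, opens are refused with `ERR_FSAL_DELAY` until the LRU thread closes some descriptors. A simplified standalone sketch (using the numbers from the gdb dump above, not the exact V3.5 source):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-in for the relevant lru_state fields. */
struct lru_state_sketch {
	uint32_t fds_system_imposed;	/* 400000 in the dump above */
	uint32_t fds_hard_limit;	/* 396000 (99% of the above) */
	uint32_t fds_hiwat;		/* 360000 (90%) */
	uint32_t fds_lowat;		/* 200000 (50%) */
};

/* Sketch of the availability check: once the global open FD count has
 * reached the hard limit, the caller gets ERR_FSAL_DELAY. */
static bool fds_available_sketch(const struct lru_state_sketch *st,
				 uint64_t open_fd_count)
{
	return open_fd_count < st->fds_hard_limit;
}

int main(void)
{
	struct lru_state_sketch st = { 400000, 396000, 360000, 200000 };

	/* With the counter pinned at 396000 this is always 0 (false). */
	printf("available=%d\n", fds_available_sketch(&st, 396000));
	return 0;
}

Since "open_fd_count" never comes back down (and the LRU thread reports closing 0 descriptors), a check of this shape can never succeed again, which matches what we observe.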
I have a server sitting in this state right now. I can run any debug commands and provide logs.
Thanks in advance for the help.