Change in ...nfs-ganesha[next]: Allow EXPORT pseudo path to be changed during export update
by Frank Filz (GerritHub)
Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/490334 )
Change subject: Allow EXPORT pseudo path to be changed during export update
......................................................................
Allow EXPORT pseudo path to be changed during export update
This also fully allows adding or removing NFSv4 support from an export
since we can now handle the PseudoFS swizzling that occurs.
Note that an explicit PseudoFS export may be removed or added, though
you cannot change it from export_id 0 because we currently don't allow
changing the export_id.
Note that this patch doesn't handle DBUS add or remove export, though
that is an option for improvement. I may add them to this patch (it
wouldn't be that hard), but I want to get this reviewed as is right now.
There are implications for a client when the PseudoFS changes. I have
tested moving an export in the PseudoFS with a client mounted. The
client will be able to continue accessing the export, though it may
see an ESTALE error if it navigates out of the export. The current
working directory will go bad and the pwd command will fail, indicating
a disconnected mount. I have also seen referencing .. from the root of
the export wrap around back to the root (I believe this is how
disconnected mounts are set up).
FSAL_PSEUDO lookups and create handles (PUTFH or any use of an NFSv3
handle where the inode isn't cached) which fail during an export update
are instead turned into ERR_FSAL_DELAY which turns into NFS4ERR_DELAY or
NFS3ERR_JUKEBOX to force the client to retry under the completed update.
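As a sketch of the retry mapping described above (the enum values and the helper are simplified stand-ins, not Ganesha's actual definitions, and which error codes get converted is an assumption here):

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-ins for Ganesha's fsal_errors_t values. */
enum fsal_err {
	ERR_FSAL_NO_ERROR = 0,
	ERR_FSAL_NOENT,
	ERR_FSAL_STALE,
	ERR_FSAL_DELAY,
};

/*
 * Hypothetical helper: if a FSAL_PSEUDO lookup or create_handle fails
 * while an export update is swizzling the PseudoFS, report
 * ERR_FSAL_DELAY instead, so the protocol layer returns NFS4ERR_DELAY
 * (or NFS3ERR_JUKEBOX) and the client retries once the update lands.
 */
static enum fsal_err pseudo_fail_status(enum fsal_err rc,
					bool export_update_in_progress)
{
	if (export_update_in_progress &&
	    (rc == ERR_FSAL_NOENT || rc == ERR_FSAL_STALE))
		return ERR_FSAL_DELAY;

	return rc;
}
```

The real patch makes this decision inside FSAL_PSEUDO's lookup and create_handle paths; this only illustrates the error conversion itself.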
Change-Id: I507dc17a651936936de82303ff1291677ce136be
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
---
M src/FSAL/FSAL_PSEUDO/handle.c
M src/MainNFSD/libganesha_nfsd.ver
M src/Protocols/NFS/nfs4_pseudo.c
M src/include/export_mgr.h
M src/include/nfs_proto_functions.h
M src/support/export_mgr.c
M src/support/exports.c
7 files changed, 560 insertions(+), 203 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/34/490334/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/490334
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I507dc17a651936936de82303ff1291677ce136be
Gerrit-Change-Number: 490334
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz <ffilzlnx(a)mindspring.com>
Gerrit-MessageType: newchange
Re: Unclaimable MDCache entries at the LRU end of L2 queue.
by Pradeep
Missed CC'ing devel(a)lists.nfs-ganesha.org.
On Mon, Jun 27, 2022 at 9:58 PM Pradeep <pradeepthomas(a)gmail.com> wrote:
>
>
> On Mon, Jun 27, 2022 at 5:27 AM Daniel Gryniewicz <dang(a)redhat.com> wrote:
>
>>
>>
>> On 6/24/22 17:13, Pradeep wrote:
>> >
>> >
>> > On Fri, Jun 24, 2022 at 6:44 AM Daniel Gryniewicz <dang(a)redhat.com> wrote:
>> >
>> > The problem with trying a fixed number of entries is that it doesn't
>> > help, unless that number is large enough to be essentially unlimited.
>> > The design of the reap algorithm is that it's O(1) in the number of
>> > entries (O(n) in the number of queues), so that we can invoke it in the
>> > fast path. If we search a small number of entries on each queue, that
>> >
>> > You are right. We would like the reap to be O(1) in the data path. We
>> > also try to reap from garbage collector (lru_run). There, we could
>> > potentially scan more entries from LRU end. This may make the code
>> > more complex and hard to read though.
>> >
>> > doesn't ensure any more than searching 1 that we find something,
>> > especially with large readdir workloads. I think Frank is right, we
>> > need to do something else.
>> >
>> > The LRU currently serves two purposes:
>> >
>> > 1. A list of entries in LRU order, so that we can reap and reuse them.
>> > 2. A divided list that facilitates closing open global FDs
>> >
>> > Does (1) mean we will keep the LRU order even if the entry is ref'd by
>> > readdir, which means readdir could temporarily promote the entry?
>>
>> Unsure. We changed the readdir case specifically because it was
>> clogging up the MRU end of the LRU, and making reaping hard. I think
>> I'd start with leaving that aspect the way it is for now.
>>
>>
>
> When an entry from L2 is promoted, it goes to the LRU end of L1. So, you
> are right that we can't reap from L1. This may be ok if we are not over
> the limit. When you are over the limit, reclaimable entries get moved to
> L2 by lru_run (to the MRU end).
>
> The problem is with the entries in L2 at the LRU end. If those get
> active because of readdir, we keep those there. This prevents any
> reaping from happening. If the workload is a mix of readdir + stat, you
> could end up with a large number of entries in MDCache. Once the
> workload stops, these entries can be reclaimed (though in a multi-user
> environment, workloads run pretty much 24x7).
>
> So far, we discussed and rejected these:
> 1. lru_run() to look beyond the first entry at LRU end and free until
> the number of entries is lower than hiwat.
> 2. readdir path to move entries to MRU end of the queue so that entries
> at LRU end can be reclaimed.
>
> Is there anything else we can do to keep the least active entries at the
> LRU end?
>
> Thanks,
> Pradeep
>
>
>> Daniel
>>
>> >
>> > Thanks,
>> > Pradeep
>> >
>> > It seems clear to me that 1 is more important than 2, especially since
>> > NFSv4 will rarely use global FDs, and some of our most important FSALs
>> > don't use system FDs at all.
>> >
>> > What do we think is a good design for handling these two cases?
>> >
>> > Daniel
>> >
>> > On 6/23/22 16:44, Frank Filz wrote:
>> > > Hmm, what's the path to taking a ref without moving to MRU of L2 or LRU of L1?
>> > >
>> > > There are a number of issues with sticky entries blowing the cache size coming up that really need to be resolved.
>> > >
>> > > I wonder if we should remove any entry with refcount >1 from the LRU, and note where we should place it when refcount is reduced to 1. That would take out the pinned entries as well as temporarily in use entries. The trick would be the refcount bump for the actual LRU processing.
>> > >
>> > > Frank
>> > >
>> > >> -----Original Message-----
>> > >> From: Pradeep Thomas [mailto:pradeepthomas@gmail.com]
>> > >> Sent: Thursday, June 23, 2022 1:36 PM
>> > >> To: devel(a)lists.nfs-ganesha.org
>> > >> Subject: [NFS-Ganesha-Devel] Unclaimable MDCache entries at the LRU end of L2 queue.
>> > >>
>> > >> Hello,
>> > >>
>> > >> I'm hitting a scenario where the entry at the LRU end of L2 queue becomes active. But we don't move it to L1 - likely because the entry becomes active in the context of a readdir. The cache keeps growing to a point where kernel will invoke oom killer to terminate ganesha process.
>> > >>
>> > >> When we reap entries (lru_reap_impl), could we look beyond LRU end - perhaps try a fixed number of entries? Another option is to garbage collect the L2 queue also and free claimable entries beyond LRU end of the queue (through mdcache_lru_release_entries()). Any other thoughts?
>> > >> In the instance below, MDCache is supposed to be capped at 100K entries. But it grows to > 5 million entries (~17*310K).
>> > >>
>> > >> sudo gdb -q -p $(pidof ganesha.nfsd) -batch -ex 'p LRU[0].L1' -ex 'p LRU[0].L2' -ex 'p LRU[1].L1' -ex 'p LRU[1].L2' -ex 'p LRU[2].L1' -ex 'p LRU[2].L2' -ex 'p LRU[3].L1' -ex 'p LRU[3].L2' -ex 'p LRU[4].L1' -ex 'p LRU[4].L2' -ex 'p LRU[5].L1' -ex 'p LRU[5].L2' -ex 'p LRU[6].L1' -ex 'p LRU[6].L2'
>> > >>
>> > >> $1 = {q = {next = 0x7fe16a6adc30, prev = 0x7fe066775d30}, id = LRU_ENTRY_L1, size = 37}
>> > >> $2 = {q = {next = 0x7fe0cd6d1130, prev = 0x7fdd595e2030}, id = LRU_ENTRY_L2, size = 310609}
>> > >> $3 = {q = {next = 0x7fe222cc7930, prev = 0x7fe0e8afaf30}, id = LRU_ENTRY_L1, size = 37}
>> > >> $4 = {q = {next = 0x7fdfa2022d30, prev = 0x7fe01c386b30}, id = LRU_ENTRY_L2, size = 310459}
>> > >> $5 = {q = {next = 0x7fdfdd8acb30, prev = 0x7fe233849b30}, id = LRU_ENTRY_L1, size = 31}
>> > >> $6 = {q = {next = 0x7fdf014e7e30, prev = 0x7fdd90fd7430}, id = LRU_ENTRY_L2, size = 310297}
>> > >> $7 = {q = {next = 0x7fde79a4f030, prev = 0x7fe233a4aa30}, id = LRU_ENTRY_L1, size = 32}
>> > >> $8 = {q = {next = 0x7fe061388430, prev = 0x7fdd24b5cf30}, id = LRU_ENTRY_L2, size = 310659}
>> > >> $9 = {q = {next = 0x7fe1e96ce430, prev = 0x7fe0b3b4b130}, id = LRU_ENTRY_L1, size = 34}
>> > >> $10 = {q = {next = 0x7fe00d84ff30, prev = 0x7fdd685b1530}, id = LRU_ENTRY_L2, size = 310635}
>> > >> $11 = {q = {next = 0x7fdf9df4fb30, prev = 0x7fe2414aaa30}, id = LRU_ENTRY_L1, size = 33}
>> > >> $12 = {q = {next = 0x7fe165e82d30, prev = 0x7fdf1d2b8a30}, id = LRU_ENTRY_L2, size = 310566}
>> > >> $13 = {q = {next = 0x7fe159e55a30, prev = 0x7fde3f973d30}, id = LRU_ENTRY_L1, size = 41}
>> > >> $14 = {q = {next = 0x7fdf4fbb9030, prev = 0x7fdea8ca0730}, id = LRU_ENTRY_L2, size = 310460}
>> > >>
>> > >> First entry has a ref of 2. But next entries are actually claimable.
>> > >>
>> > >> sudo gdb -q -p $(pidof ganesha.nfsd) -batch -ex 'p *(mdcache_lru_t *)LRU[0].L2.q.next'
>> > >> $1 = {q = {next = 0x7fe0fbff0c30, prev = 0x7fe250de2960 <LRU+32>}, qid = LRU_ENTRY_L2, refcnt = 2, flags = 0, lane = 0, cf = 0}
>> > >>
>> > >> sudo gdb -q -p $(pidof ganesha.nfsd) -batch -ex 'p *(mdcache_lru_t *)0x7fe0fbff0c30'
>> > >> $1 = {q = {next = 0x7fe0c2c5a130, prev = 0x7fe0cd6d1130}, qid = LRU_ENTRY_L2, refcnt = 1, flags = 0, lane = 0, cf = 0}
>> > >>
>> > >> sudo gdb -q -p $(pidof ganesha.nfsd) -batch -ex 'p *(mdcache_lru_t *)0x7fe0c2c5a130'
>> > >> $1 = {q = {next = 0x7fe06dfeac30, prev = 0x7fe142936430}, qid = LRU_ENTRY_L2, refcnt = 1, flags = 0, lane = 0, cf = 0}
>> > >> _______________________________________________
>> > >> Devel mailing list -- devel(a)lists.nfs-ganesha.org
>> > >> To unsubscribe send an email to devel-leave(a)lists.nfs-ganesha.org
>> >
>>
[M] Change in ...nfs-ganesha[next]: Remove state_exp from state_t
by Frank Filz (GerritHub)
Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/540556 )
Change subject: Remove state_exp from state_t
......................................................................
Remove state_exp from state_t
Anywhere a state is actually being used, the export should be in
op context so op_ctx->state_exp can be used instead. This avoids
any possibility that the export in the state_t is no longer a valid
pointer (NULL values have been reported).
Change-Id: I078f12348409ae25938a97354b86c51c7c229f41
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
---
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_export.c
M src/FSAL/Stackable_FSALs/FSAL_NULL/export.c
M src/Protocols/9P/9p_proto_tools.c
M src/Protocols/NFS/nfs4_op_open.c
M src/SAL/nfs4_state.c
M src/SAL/nfs4_state_id.c
M src/SAL/nlm_state.c
M src/gtest/fsal_api/test_commit2_latency.cc
M src/include/FSAL/fsal_commonlib.h
M src/include/sal_data.h
10 files changed, 29 insertions(+), 23 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/56/540556/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/540556
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I078f12348409ae25938a97354b86c51c7c229f41
Gerrit-Change-Number: 540556
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz <ffilzlnx(a)mindspring.com>
Gerrit-MessageType: newchange
Announce Push of V4.0.7
by Frank Filz
Branch next
Tag:V4.0.7
Merge Highlights
* Various GPFS fixes
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
Contents:
d6ad1b227 Frank S. Filz V4.0.7
28b1bd278 Yogendra Charya [GPFS] Changes for nfs-ganesha containerization
58b578066 Yogendra Charya [GPFS] Initialized cli_ip field to NULL.
f7c84724d Yogendra Charya [GPFS] Add OPENHANDLE_GET_VERSION4 for NFS Client IP address
cd0b88188 Yogendra Charya [GPFS] cli_ip field got added and filled for xstat_arg and stat_name_arg
Unclaimable MDCache entries at the LRU end of L2 queue.
by Pradeep Thomas
Hello,
I'm hitting a scenario where the entry at the LRU end of the L2 queue becomes active. But we don't move it to L1 - likely because the entry becomes active in the context of a readdir. The cache keeps growing to a point where the kernel will invoke the OOM killer to terminate the ganesha process.
When we reap entries (lru_reap_impl), could we look beyond the LRU end - perhaps try a fixed number of entries? Another option is to garbage collect the L2 queue also and free claimable entries beyond the LRU end of the queue (through mdcache_lru_release_entries()). Any other thoughts?
In the instance below, MDCache is supposed to be capped at 100K entries. But it grows to > 5 million entries (~17*310K).
sudo gdb -q -p $(pidof ganesha.nfsd) -batch -ex 'p LRU[0].L1' -ex 'p LRU[0].L2' -ex 'p LRU[1].L1' -ex 'p LRU[1].L2' -ex 'p LRU[2].L1' -ex 'p LRU[2].L2' -ex 'p LRU[3].L1' -ex 'p LRU[3].L2' -ex 'p LRU[4].L1' -ex 'p LRU[4].L2' -ex 'p LRU[5].L1' -ex 'p LRU[5].L2' -ex 'p LRU[6].L1' -ex 'p LRU[6].L2'
$1 = {q = {next = 0x7fe16a6adc30, prev = 0x7fe066775d30}, id = LRU_ENTRY_L1, size = 37}
$2 = {q = {next = 0x7fe0cd6d1130, prev = 0x7fdd595e2030}, id = LRU_ENTRY_L2, size = 310609}
$3 = {q = {next = 0x7fe222cc7930, prev = 0x7fe0e8afaf30}, id = LRU_ENTRY_L1, size = 37}
$4 = {q = {next = 0x7fdfa2022d30, prev = 0x7fe01c386b30}, id = LRU_ENTRY_L2, size = 310459}
$5 = {q = {next = 0x7fdfdd8acb30, prev = 0x7fe233849b30}, id = LRU_ENTRY_L1, size = 31}
$6 = {q = {next = 0x7fdf014e7e30, prev = 0x7fdd90fd7430}, id = LRU_ENTRY_L2, size = 310297}
$7 = {q = {next = 0x7fde79a4f030, prev = 0x7fe233a4aa30}, id = LRU_ENTRY_L1, size = 32}
$8 = {q = {next = 0x7fe061388430, prev = 0x7fdd24b5cf30}, id = LRU_ENTRY_L2, size = 310659}
$9 = {q = {next = 0x7fe1e96ce430, prev = 0x7fe0b3b4b130}, id = LRU_ENTRY_L1, size = 34}
$10 = {q = {next = 0x7fe00d84ff30, prev = 0x7fdd685b1530}, id = LRU_ENTRY_L2, size = 310635}
$11 = {q = {next = 0x7fdf9df4fb30, prev = 0x7fe2414aaa30}, id = LRU_ENTRY_L1, size = 33}
$12 = {q = {next = 0x7fe165e82d30, prev = 0x7fdf1d2b8a30}, id = LRU_ENTRY_L2, size = 310566}
$13 = {q = {next = 0x7fe159e55a30, prev = 0x7fde3f973d30}, id = LRU_ENTRY_L1, size = 41}
$14 = {q = {next = 0x7fdf4fbb9030, prev = 0x7fdea8ca0730}, id = LRU_ENTRY_L2, size = 310460}
First entry has a ref of 2. But next entries are actually claimable.
sudo gdb -q -p $(pidof ganesha.nfsd) -batch -ex 'p *(mdcache_lru_t *)LRU[0].L2.q.next'
$1 = {q = {next = 0x7fe0fbff0c30, prev = 0x7fe250de2960 <LRU+32>}, qid = LRU_ENTRY_L2, refcnt = 2, flags = 0, lane = 0, cf = 0}
sudo gdb -q -p $(pidof ganesha.nfsd) -batch -ex 'p *(mdcache_lru_t *)0x7fe0fbff0c30'
$1 = {q = {next = 0x7fe0c2c5a130, prev = 0x7fe0cd6d1130}, qid = LRU_ENTRY_L2, refcnt = 1, flags = 0, lane = 0, cf = 0}
sudo gdb -q -p $(pidof ganesha.nfsd) -batch -ex 'p *(mdcache_lru_t *)0x7fe0c2c5a130'
$1 = {q = {next = 0x7fe06dfeac30, prev = 0x7fe142936430}, qid = LRU_ENTRY_L2, refcnt = 1, flags = 0, lane = 0, cf = 0}
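The "look beyond the LRU end" idea could be sketched like this (the queue type, field names, and scan limit are simplified stand-ins for the mdcache LRU structures, not the real lru_reap_impl):

```c
#include <stddef.h>

/* Simplified stand-in for an mdcache LRU queue entry. */
struct lru_entry {
	struct lru_entry *next;	/* toward the MRU end */
	int refcnt;		/* 1 == only the queue's ref, reapable */
};

/*
 * Probe up to max_scan entries from the LRU end instead of only the
 * first one, so a single busy entry (e.g. one kept active by an
 * in-flight readdir) can't block reaping entirely.
 */
static struct lru_entry *lru_try_reap(struct lru_entry *lru_end, int max_scan)
{
	struct lru_entry *e = lru_end;
	int i;

	for (i = 0; e != NULL && i < max_scan; i++, e = e->next) {
		if (e->refcnt == 1)
			return e;	/* claimable; caller detaches it */
	}

	return NULL;	/* give up; caller allocates a new entry */
}
```

As the thread notes, the cost is O(max_scan) per attempt, which is why a scan like this fits lru_run() better than the request fast path.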
Re: How FSAL readdir should report rdattr_error ?
by Frank Filz
David,
Sorry, the list got removed from the CC at some point…
We do have a developer community call on Tuesdays at 7:00 AM Pacific Time (not sure what timezone you’re in). We use Google Meet.
Ganesha community call
Tuesday, June 14 · 7:00 – 7:30am
Google Meet joining info
Video call link: https://meet.google.com/mkh-ctnj-tqz
Or dial: (US) +1 401-702-0462 PIN: 495 972 631#
More phone numbers: https://tel.meet/mkh-ctnj-tqz?pin=2844708468265
Or join via SIP: sip:2844708468265@gmeet.redhat.com
We also have an IRC channel (#ganesha) on Libera.chat.
I can also do Zoom or Google Meet if you want to chat individually.
What is the backend you are trying to implement on? You have mentioned VFS and kernel attr cache. Linux or some other OS?
Frank
From: David Rieber [mailto:drieber@google.com]
Sent: Monday, June 13, 2022 4:32 PM
To: Frank Filz <ffilzlnx(a)mindspring.com>
Subject: Re: [NFS-Ganesha-Devel] Re: How FSAL readdir should report rdattr_error ?
Thank you Frank. Please know I am very very very new to ganesha and NFS :-)
What you are saying roughly matches what I think I saw when going through ganesha codebase. In particular, like you said, there is no field in struct attrlist to store the rdattr_err!
It may turn out that I don't need this feature so much after all. There is a story behind why I thought we may need this: it has to do with an observation that the kernel attr cache appeared to not be working. It is a long story. Would it be possible to arrange a zoom call to ask questions in that format? I understand if you prefer to stick to email.
Also, AFAICT you are replying to me directly rather than to the mailing list: https://lists.nfs-ganesha.org/archives/list/devel@lists.nfs-ganesha.org/t... .... Am I confused? Why not post to that thread? This is not a huge deal, I was just curious :-)
Thanks!
On Mon, Jun 13, 2022 at 12:15 PM Frank Filz <ffilzlnx(a)mindspring.com> wrote:
David,
OK, I’ve looked through the code some. While I coded some of that RDATTR_ERR stuff originally, I really can’t make sense of it anymore… If you feel like you might actually fail to get attributes, then you might want to set up a test that actually does so, and figure out what happens…
We do have support for NFSv4 junction crossing failing, but that’s handled up in the protocol layer not the FSAL layer.
One consideration is that in order to resolve a file handle to an object (or produce a file handle from LOOKUP or CREATE) we need the base attributes to know what type of object it is. And I do know that currently we don’t support partial sets of base (POSIX) attributes.
There doesn’t seem to be a mechanism to pass the actual error for RDATTR_ERR up from the FSAL (at the protocol layer we can produce NFS4ERR_MOVED or NFS4ERR_WRONGSEC).
So if you really do need to pass up RDATTR_ERR, we probably need some work to have fsal_status_t rdattr_err in struct attrlist so the error can be passed up. And then we need to figure where in GETATTR and READDIR we should handle the errors.
Frank
From: David Rieber [mailto:drieber@google.com]
Sent: Monday, June 13, 2022 9:37 AM
To: Frank Filz <ffilzlnx(a)mindspring.com>
Subject: Re: [NFS-Ganesha-Devel] Re: How FSAL readdir should report rdattr_error ?
I do not have a specific scenario right now, but since I am working on a VFS I have to be prepared to properly implement the readdir API, and in particular the fsal_readdir_cb. The way I understand it, readdir takes an attrmask representing the requested attributes. My FSAL must invoke the fsal_readdir_cb once for every directory entry. If for whatever reason the FSAL cannot produce all the requested attributes, the NFSv4 protocol specifies that attribute "rdattr_error" (attribute 11) should be used. My question is how to implement this.
In ganesha fsals I see other functions that do stuff with RDATTR_ERR, for example proxyv4_getattrs has this snippet:
...
if (rc != NFS4_OK) {
	if (attrs->request_mask & ATTR_RDATTR_ERR) {
		/* Caller asked for error to be visible. */
		attrs->valid_mask = ATTR_RDATTR_ERR;
	}
	return nfsstat4_to_fsal(rc);
}
...
Strangely I don't see anything similar for readdir. My guess is readdir should do this to signal a "read attribute error":
attrs->valid_mask = ATTR_RDATTR_ERR;
Can you confirm?
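Concretely, my guess amounts to something like this (the mask bits and attrlist layout are simplified stand-ins, not the real Ganesha declarations, and whether readdir callers honor this is exactly my question):

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-ins for Ganesha's attrmask bits and fsal_attrlist. */
#define ATTR_MODE	((uint64_t)1 << 0)
#define ATTR_SIZE	((uint64_t)1 << 1)
#define ATTR_RDATTR_ERR	((uint64_t)1 << 2)

struct attrlist {
	uint64_t request_mask;	/* what the caller asked for */
	uint64_t valid_mask;	/* what we actually filled in */
};

/*
 * Mirrors the proxyv4_getattrs pattern quoted above: if the backend
 * can't produce the requested attributes and the caller asked to see
 * errors, mark only ATTR_RDATTR_ERR valid rather than failing the
 * whole READDIR. Returns true when real attributes were filled in.
 */
static bool fill_entry_attrs(struct attrlist *attrs, bool backend_ok)
{
	if (!backend_ok) {
		if (attrs->request_mask & ATTR_RDATTR_ERR)
			attrs->valid_mask = ATTR_RDATTR_ERR;
		return false;
	}

	attrs->valid_mask = attrs->request_mask & ~ATTR_RDATTR_ERR;
	return true;
}
```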
p.s. I am new to this mailing list, so I may be missing something but..... I don't see your reply to my message on https://lists.nfs-ganesha.org/archives/list/devel@lists.nfs-ganesha.org/t... .... Can you re-post your June 10 reply there so it is easily available to the whole community?
Thanks!
On Fri, Jun 10, 2022 at 4:39 PM Frank Filz <ffilzlnx(a)mindspring.com> wrote:
David,
Sorry, I meant to respond to this and got distracted. I will try and respond next week.
What attributes are you not able to provide?
Frank
> -----Original Message-----
> From: David Rieber via Devel [mailto:devel@lists.nfs-ganesha.org]
> Sent: Friday, June 10, 2022 1:49 PM
> To: devel(a)lists.nfs-ganesha.org
> Subject: [NFS-Ganesha-Devel] Re: How FSAL readdir should report rdattr_error ?
>
> The wording of my question may be unclear. Essentially what I am asking is how
> the READDIR callback should behave when the FSAL cannot get all the requested
> attributes.
> _______________________________________________
> Devel mailing list -- devel(a)lists.nfs-ganesha.org
> To unsubscribe send an email to devel-leave(a)lists.nfs-ganesha.org
Announce Push of V4.0.6
by Frank Filz
Branch next
Tag:V4.0.6
Merge Highlights
* Various fixes for PROXY_V3
* Minor format fix in nfs4_op_read.c
* Use acl_free after calls to acl_get_qualifier
* Various config changes and documentation
* MDCACHE - Fix race between unexport and LRU reap.
* scripts: make install_git_hook.sh is workable for macOS
* Added missing files to .gitignore
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
Contents:
5339a10b5 Frank S. Filz V4.0.6
fa8b51d38 Lior Suliman Add a configuration parameter for NFSv3 proxy to disable lookup optimization for CWD and CWD parent
799caf52b Lior Suliman Added missing files to .gitignore
c6ad4265e Assaf Yaari Add conf param whether to call getattrs after completing read
4f227f9c2 Vicente Cheng config: add comment for PrefRead/PrefWrite
c74425393 Vicente Cheng scripts: make install_git_hook.sh is workable for macOS
dd892ad8b Pradeep MDCACHE - Fix race between unexport and LRU reap.
82c60b176 Martin Schwenke config: Avoid incorrect duplicate Export_id error
92b8b7ea9 Frank S. Filz Use acl_free after calls to acl_get_qualifier
353b13a4a Frank S. Filz Minor format fix in nfs4_op_read.c
97cd42574 Bjorn Leffler Refactor NFS op/proc codes to name conversions.
2dd7f468c Bjorn Leffler Change valid attribute check in Proxy_V3 to allow rawdev.
d25284841 Bjorn Leffler Only initialise nlm and rpc once in proxy_v3 fsal.
How FSAL readdir should report rdattr_error ?
by David Rieber
https://www.rfc-editor.org/rfc/rfc7530#section-16.24 covers READDIR. It says
"In some cases, the server may encounter an error while obtaining the attributes for a directory entry. Instead of returning an error for the entire READDIR operation, the server can instead return the attribute 'fattr4_rdattr_error'. With this, the server is able to communicate the failure to the client and not fail the entire operation in the instance of what might be a transient failure. Obviously, the client must request the fattr4_rdattr_error attribute for this method to work properly. If the client does not request the attribute, the server has no choice but to return failure for the entire READDIR operation."
I'm working in a team building a VFS. We want to support a "readdirplus" style of readdir. If the VFS cannot get all attributes, my FSAL must set the rdattr_error. How should it do that? The fsal_readdir_cb does not seem to have a way to express that. My guess is I must set bit ATTR_RDATTR_ERR on field fsal_attrlist.valid_mask on the attrs argument of fsal_readdir_cb.