Change in ...nfs-ganesha[next]: Fix syntax error in ganesha_stats script
by Trishali Nayar (GerritHub)
Trishali Nayar has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/455775
Change subject: Fix syntax error in ganesha_stats script
......................................................................
Fix syntax error in ganesha_stats script
Print should be a function due to "from __future__ import print_function"
Change-Id: I39c04e6487a613fcbd64a650a7581158505773d6
Signed-off-by: Trishali Nayar <ntrishal(a)in.ibm.com>
---
M src/scripts/ganeshactl/ganesha_mgr.py
1 file changed, 7 insertions(+), 7 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/75/455775/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/455775
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I39c04e6487a613fcbd64a650a7581158505773d6
Gerrit-Change-Number: 455775
Gerrit-PatchSet: 1
Gerrit-Owner: Trishali Nayar <ntrishal(a)in.ibm.com>
Gerrit-MessageType: newchange
Re: Seeing out of order logs
by Sriram Patil
I found some relevant excerpts in the knowledge base article. The following passages explain why an out-of-order timestamp is possible:
"Virtual machines share their underlying hardware with the host operating system, or on VMware ESX®, the VMkernel. Other applications and other virtual machines might also be running on the same host machine. At the moment that a virtual machine should generate a virtual timer interrupt, it might not actually be running. In fact, the virtual machine might not get a chance to run again until it has accumulated a backlog of many timer interrupts. In addition, even a running virtual machine can sometimes be late in delivering virtual timer interrupts. The virtual machine checks for pending virtual timer interrupts only at certain points, such as when the underlying hardware receives a physical timer interrupt. Many host operating systems do not provide a way for the virtual machine to request a physical timer interrupt at a precisely specified time.
Because the guest operating system keeps time by counting interrupts, time as measured by the guest operating system falls behind real time whenever there is a timer interrupt backlog."
Thanks,
Sriram
On 5/24/19, 11:36 PM, "Sriram Patil" <sriramp(a)vmware.com> wrote:
Ohh okay. I agree that this is very unusual behavior in the normal case, because ganesha calls time() every time it logs. It also opens and closes the log file for every message.
I will investigate further along these lines. Thanks for the help.
- Sriram
On 5/24/19, 9:04 PM, "Daniel Gryniewicz" <dang(a)redhat.com> wrote:
My guess is that something related to waking up the thread triggers a
system clock update from the VM host to the VM, but that this is done
asynchronously, and that it takes some time. This means that the first
call to time() in the thread is likely to get an out-of-date timestamp.
I don't know that this is the case, but VMware, in particular, does some
really strange things with respect to the system clock inside a VM.
When I last looked at this (~10 years ago, admittedly), there were some
settings you could change that would make it better, but we couldn't get
good, accurate time inside a VM no matter what we did. Note that we
needed sub-second accuracy, so you can likely make it good enough for
Ganesha timestamps.
Note: I'm blaming virtualization (and VMWare in particular) here, but I
don't know that this is the problem; I've just never seen this issue in
Ganesha before, and I have seen clock issues in VMs, so it's my best
guess. It may turn out to be something else entirely.
Daniel
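To make that hypothesis easy to test, here is a small standalone C program
(a diagnostic sketch, not Ganesha code) that samples CLOCK_REALTIME and
CLOCK_MONOTONIC side by side. If the wall-clock deltas jump between samples
while the monotonic deltas stay steady, the guest's clock is being stepped,
which would explain log lines stamped minutes in the past:

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	/* Print both clocks a few times; compare the deltas by hand. */
	for (int i = 0; i < 5; i++) {
		struct timespec rt, mono;

		clock_gettime(CLOCK_REALTIME, &rt);
		clock_gettime(CLOCK_MONOTONIC, &mono);
		printf("wall=%ld.%09ld mono=%ld.%09ld\n",
		       (long)rt.tv_sec, rt.tv_nsec,
		       (long)mono.tv_sec, mono.tv_nsec);
		sleep(2);
	}
	return 0;
}

Build with "cc -o clockcheck clockcheck.c" (add -lrt on older glibc) and run
it inside the same VM or container while the problem reproduces.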
On 5/24/19 10:59 AM, Sriram Patil wrote:
> Hi Daniel,
>
> Yes I have set a date format and I was running ganesha inside a container on a VMWare VM.
>
> But I fail to understand how come the time is different only for the first log message of every request?
>
> Thanks,
> Sriram
>
>
> On 5/24/19, 6:53 PM, "Daniel Gryniewicz" <dang(a)redhat.com> wrote:
>
> I'm not aware of any issues, nor has anything changed in 3 years. It's
> clear to me from your logs that you've set time_format and/or
> date_format, but likely haven't set it to "syslog_usec", right? This
> means it's just calling time() to get the datestamp, and it's doing it
> during each log message. (If you set "syslog_usec" it calls gettimeofday())
>
> This means, to me, that the first time a thread calls time() in a while,
> it's getting back an older timestamp, but that something kicks the
> timestamp back to normal after that.
>
> I haven't seen anything like this before; time can drift, especially in
> virtual machines (which is why CLOCK_MONOTONIC exists), but this doesn't
> seem to be that.
>
> Are you running in a virtual machine? If so, which type (ie, vmware,
> kvm, etc.)
>
> Daniel
>
> On 5/24/19 1:02 AM, Sriram Patil wrote:
> > CCing Frank and Daniel because I don’t see the email on the mailing list.
> >
> > - Sriram
> >
> > *From: *Sriram Patil <sriramp(a)vmware.com>
> > *Date: *Friday, May 24, 2019 at 1:00 AM
> > *To: *"devel(a)lists.nfs-ganesha.org" <devel(a)lists.nfs-ganesha.org>
> > *Subject: *Seeing out of order logs
> >
> > Hi,
> >
> > I am observing weird out-of-order logs. As seen from the logs below,
> > for threads svc_5, svc_7 and svc_14, only the first log message appears
> > with a timestamp that is about 2 minutes old. When those threads are
> > actually processing the compound operations, the timestamp seems fine.
> > In fact the timestamps for the reaper thread after svc_5 are also
> > out of order. I am using v2.7.2. Is there any known scenario when we can
> > get these out-of-order logs?
> >
> > The following log is continuous; I have added blank lines in between so
> > that it is a bit easier to read.
> >
> > 2019-05-22T20:19:02Z : epoch 5ce5ab1d : localhost :
> > ganesha.nfsd-38[none] [svc_5] 1332 :nfs_rpc_decode_request :DISP
> > :0x7fa674000a90 fd 21 context 0x7fa670001ac0
> >
> > 2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost :
> > ganesha.nfsd-38[::ffff:10.192.78.96] [svc_5] 839
> > :nfs_rpc_process_request :DISP :Request from ::ffff:10.192.78.96 for
> > Program 100003, Version 4, Function 1 has xid=3831317466
> >
> > 2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost :
> > ganesha.nfsd-38[::ffff:10.192.78.96] [svc_5] 687 :nfs4_Compound :NFS4
> > :COMPOUND: There are 3 operations, res = 0x7fa670001f70, tag = NO TAG
> >
> > 2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost :
> > ganesha.nfsd-38[::ffff:10.192.78.96] [svc_5] 798 :nfs4_Compound :NFS4
> > :Request 0: opcode 53 is OP_SEQUENCE
> >
> > 2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost :
> > ganesha.nfsd-38[::ffff:10.192.78.96] [svc_5] 81 :nfs4_op_sequence
> > :SESSIONS :SEQUENCE session=0x7fa670049dd0
> >
> > 2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost :
> > ganesha.nfsd-38[::ffff:10.192.78.96] [svc_5] 93 :nfs4_op_sequence
> > :SESSIONS :SESSIONS: DEBUG: SEQUENCE returning status NFS4ERR_EXPIRED
> >
> > 2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost :
> > ganesha.nfsd-38[::ffff:10.192.78.96] [svc_5] 977 :nfs4_Compound :NFS4
> > :Status of OP_SEQUENCE in position 0 = NFS4ERR_EXPIRED, op response size
> > is 4 total response size is 40
> >
> > 2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost :
> > ganesha.nfsd-38[::ffff:10.192.78.96] [svc_5] 1107 :nfs4_Compound :NFS4
> > :End status = NFS4ERR_EXPIRED lastindex = 0
> >
> > 2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost :
> > ganesha.nfsd-38[none] [svc_5] 1381 :nfs_rpc_decode_request :DISP
> > :SVC_DECODE on 0x7fa674000a90 fd 21 (::ffff:10.192.78.96:1013)
> > xid=3831317466 returned XPRT_IDLE
> >
> > 2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost :
> > ganesha.nfsd-38[none] [svc_5] 1294 :free_nfs_request :DISP
> > :free_nfs_request: 0x7fa674000a90 fd 21 xp_refcnt 6 rq_refcnt 0
> >
> > 2019-05-22T20:19:08Z : epoch 5ce5ab1d : localhost :
> > ganesha.nfsd-38[none] [reaper] 219 :reaper_run :CLIENT ID :Now checking
> > NFS4 clients for expiration2019-05-22T20:21:01Z : epoch 5ce5ab1d :
> > localhost : ganesha.nfsd-38[none] [reaper] 906 :nfs_client_id_expire
> > :CLIENT ID :Expiring {0x7fa664004c10 ClientID={Epoch=0x5ce5ab1d
> > Counter=0x00000003} CONFIRMED Client={0x7fa6700019d0 name=(27:Linux
> > NFSv4.1 h10-192-78-96) refcount=2} t_delta=124 reservations=0 refcount=3}
> >
> > 2019-05-22T20:19:16Z : epoch 5ce5ab1d : localhost :
> > ganesha.nfsd-38[none] [svc_7] 1332 :nfs_rpc_decode_request :DISP
> > :0x7fa664000b20 fd 31 context 0x7fa6680082e0 <--- XXX
> >
> > 2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost :
> > ganesha.nfsd-38[::ffff:10.192.73.159] [svc_7] 839
> > :nfs_rpc_process_request :DISP :Request from ::ffff:10.192.73.159 for
> > Program 100003, Version 4, Function 1 has xid=1048477897
> >
> > 2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost :
> > ganesha.nfsd-38[::ffff:10.192.73.159] [svc_7] 687 :nfs4_Compound :NFS4
> > :COMPOUND: There are 1 operations, res = 0x7fa668008da0, tag = NO TAG
> >
> > 2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost :
> > ganesha.nfsd-38[::ffff:10.192.73.159] [svc_7] 798 :nfs4_Compound :NFS4
> > :Request 0: opcode 53 is OP_SEQUENCE
> >
> > 2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost :
> > ganesha.nfsd-38[::ffff:10.192.73.159] [svc_7] 81 :nfs4_op_sequence
> > :SESSIONS :SEQUENCE session=0x7fa664001ca0
> >
> > 2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost :
> > ganesha.nfsd-38[::ffff:10.192.73.159] [svc_7] 977 :nfs4_Compound :NFS4
> > :Status of OP_SEQUENCE in position 0 = NFS4_OK, op response size is 40
> > total response size is 76
> >
> > 2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost :
> > ganesha.nfsd-38[none] [svc_7] 1381 :nfs_rpc_decode_request :DISP
> > :SVC_DECODE on 0x7fa664000b20 fd 31 (::ffff:10.192.73.159:908)
> > xid=1048477897 returned XPRT_IDLE
> >
> > 2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost :
> > ganesha.nfsd-38[none] [svc_7] 1294 :free_nfs_request :DISP
> > :free_nfs_request: 0x7fa664000b20 fd 31 xp_refcnt 11 rq_refcnt 0
> >
> > 2019-05-22T20:19:17Z : epoch 5ce5ab1d : localhost :
> > ganesha.nfsd-38[none] [svc_14] 1332 :nfs_rpc_decode_request :DISP
> > :0x7fa674000a90 fd 21 context 0x7fa64403a150 <--- XXX
> >
> > 2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost :
> > ganesha.nfsd-38[::ffff:10.192.78.96] [svc_14] 839
> > :nfs_rpc_process_request :DISP :Request from ::ffff:10.192.78.96 for
> > Program 100003, Version 4, Function 1 has xid=3848094682
> >
> > 2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost :
> > ganesha.nfsd-38[::ffff:10.192.78.96] [svc_14] 687 :nfs4_Compound :NFS4
> > :COMPOUND: There are 1 operations, res = 0x7fa64403ab90, tag = NO TAG
> >
> > 2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost :
> > ganesha.nfsd-38[::ffff:10.192.78.96] [svc_14] 798 :nfs4_Compound :NFS4
> > :Request 0: opcode 53 is OP_SEQUENCE
> >
> > 2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost :
> > ganesha.nfsd-38[::ffff:10.192.78.96] [svc_14] 74 :nfs4_op_sequence
> > :SESSIONS :SESSIONS: DEBUG: SEQUENCE returning status NFS4ERR_BADSESSION
> >
> > 2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost :
> > ganesha.nfsd-38[::ffff:10.192.78.96] [svc_14] 977 :nfs4_Compound :NFS4
> > :Status of OP_SEQUENCE in position 0 = NFS4ERR_BADSESSION, op response
> > size is 4 total response size is 40
> >
> > 2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost :
> > ganesha.nfsd-38[::ffff:10.192.78.96] [svc_14] 1107 :nfs4_Compound :NFS4
> > :End status = NFS4ERR_BADSESSION lastindex = 0
> >
> > Thanks,
> >
> > Sriram
> >
>
>
>
_______________________________________________
Devel mailing list -- devel(a)lists.nfs-ganesha.org
To unsubscribe send an email to devel-leave(a)lists.nfs-ganesha.org
Announce Push of V2.8-rc1
by Frank Filz
Branch next
Tag:V2.8-rc1
NOTE: We hope to have just one rc, please test with this...
NOTE: This merge contains an ntirpc pullup, please update your submodule.
Release Highlights
* Pullup ntirpc to pre-1.8.0
* MDCACHE - Restart readdir if directory is invalidated
* Add symbols needed for tests
* FSAL fix race in FSAL close method
* fsal_open2 - check for non-regular file when open by name
* FSAL_GLUSTER: Include "enable_upcall" option in sample gluster.conf
* spec file changes for RHEL8 python3
* [FSAL_VFS] Reduce the number of opens done for referral directories
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
Contents:
a3c6fa3 Frank S. Filz V2.8-rc1
52af43b Sriram Patil [FSAL_VFS] Reduce the number of opens done for referral
directories
7874c2a Malahal Naineni spec file changes for RHEL8 python3
6b52bbe Soumya Koduri FSAL_GLUSTER: Include "enable_upcall" option in sample
gluster.conf
13e7ca8 Frank S. Filz fsal_open2 - check for non-regular file when open by
name
a08cbf6 Frank S. Filz FSAL fix race in FSAL close method
7f57c4d Daniel Gryniewicz Add symbols needed for tests
fdaebee Daniel Gryniewicz MDCACHE - Restart readdir if directory is
invalidated
371d2b9 Daniel Gryniewicz Pullup ntirpc to pre-1.8.0
Seeing out of order logs
by Sriram Patil
Hi,
I am observing weird out-of-order logs. As seen from the logs below, for threads svc_5, svc_7 and svc_14, only the first log message appears with a timestamp that is about 2 minutes old. When those threads are actually processing the compound operations, the timestamp seems fine. In fact, the timestamps for the reaper thread after svc_5 are also out of order. I am using v2.7.2. Is there any known scenario when we can get these out-of-order logs?
The following log is continuous; I have added blank lines in between so that it is a bit easier to read.
2019-05-22T20:19:02Z : epoch 5ce5ab1d : localhost : ganesha.nfsd-38[none] [svc_5] 1332 :nfs_rpc_decode_request :DISP :0x7fa674000a90 fd 21 context 0x7fa670001ac0
2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost : ganesha.nfsd-38[::ffff:10.192.78.96] [svc_5] 839 :nfs_rpc_process_request :DISP :Request from ::ffff:10.192.78.96 for Program 100003, Version 4, Function 1 has xid=3831317466
2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost : ganesha.nfsd-38[::ffff:10.192.78.96] [svc_5] 687 :nfs4_Compound :NFS4 :COMPOUND: There are 3 operations, res = 0x7fa670001f70, tag = NO TAG
2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost : ganesha.nfsd-38[::ffff:10.192.78.96] [svc_5] 798 :nfs4_Compound :NFS4 :Request 0: opcode 53 is OP_SEQUENCE
2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost : ganesha.nfsd-38[::ffff:10.192.78.96] [svc_5] 81 :nfs4_op_sequence :SESSIONS :SEQUENCE session=0x7fa670049dd0
2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost : ganesha.nfsd-38[::ffff:10.192.78.96] [svc_5] 93 :nfs4_op_sequence :SESSIONS :SESSIONS: DEBUG: SEQUENCE returning status NFS4ERR_EXPIRED
2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost : ganesha.nfsd-38[::ffff:10.192.78.96] [svc_5] 977 :nfs4_Compound :NFS4 :Status of OP_SEQUENCE in position 0 = NFS4ERR_EXPIRED, op response size is 4 total response size is 40
2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost : ganesha.nfsd-38[::ffff:10.192.78.96] [svc_5] 1107 :nfs4_Compound :NFS4 :End status = NFS4ERR_EXPIRED lastindex = 0
2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost : ganesha.nfsd-38[none] [svc_5] 1381 :nfs_rpc_decode_request :DISP :SVC_DECODE on 0x7fa674000a90 fd 21 (::ffff:10.192.78.96:1013) xid=3831317466 returned XPRT_IDLE
2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost : ganesha.nfsd-38[none] [svc_5] 1294 :free_nfs_request :DISP :free_nfs_request: 0x7fa674000a90 fd 21 xp_refcnt 6 rq_refcnt 0
2019-05-22T20:19:08Z : epoch 5ce5ab1d : localhost : ganesha.nfsd-38[none] [reaper] 219 :reaper_run :CLIENT ID :Now checking NFS4 clients for expiration2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost : ganesha.nfsd-38[none] [reaper] 906 :nfs_client_id_expire :CLIENT ID :Expiring {0x7fa664004c10 ClientID={Epoch=0x5ce5ab1d Counter=0x00000003} CONFIRMED Client={0x7fa6700019d0 name=(27:Linux NFSv4.1 h10-192-78-96) refcount=2} t_delta=124 reservations=0 refcount=3}
2019-05-22T20:19:16Z : epoch 5ce5ab1d : localhost : ganesha.nfsd-38[none] [svc_7] 1332 :nfs_rpc_decode_request :DISP :0x7fa664000b20 fd 31 context 0x7fa6680082e0 <--- XXX
2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost : ganesha.nfsd-38[::ffff:10.192.73.159] [svc_7] 839 :nfs_rpc_process_request :DISP :Request from ::ffff:10.192.73.159 for Program 100003, Version 4, Function 1 has xid=1048477897
2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost : ganesha.nfsd-38[::ffff:10.192.73.159] [svc_7] 687 :nfs4_Compound :NFS4 :COMPOUND: There are 1 operations, res = 0x7fa668008da0, tag = NO TAG
2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost : ganesha.nfsd-38[::ffff:10.192.73.159] [svc_7] 798 :nfs4_Compound :NFS4 :Request 0: opcode 53 is OP_SEQUENCE
2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost : ganesha.nfsd-38[::ffff:10.192.73.159] [svc_7] 81 :nfs4_op_sequence :SESSIONS :SEQUENCE session=0x7fa664001ca0
2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost : ganesha.nfsd-38[::ffff:10.192.73.159] [svc_7] 977 :nfs4_Compound :NFS4 :Status of OP_SEQUENCE in position 0 = NFS4_OK, op response size is 40 total response size is 76
2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost : ganesha.nfsd-38[none] [svc_7] 1381 :nfs_rpc_decode_request :DISP :SVC_DECODE on 0x7fa664000b20 fd 31 (::ffff:10.192.73.159:908) xid=1048477897 returned XPRT_IDLE
2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost : ganesha.nfsd-38[none] [svc_7] 1294 :free_nfs_request :DISP :free_nfs_request: 0x7fa664000b20 fd 31 xp_refcnt 11 rq_refcnt 0
2019-05-22T20:19:17Z : epoch 5ce5ab1d : localhost : ganesha.nfsd-38[none] [svc_14] 1332 :nfs_rpc_decode_request :DISP :0x7fa674000a90 fd 21 context 0x7fa64403a150 <--- XXX
2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost : ganesha.nfsd-38[::ffff:10.192.78.96] [svc_14] 839 :nfs_rpc_process_request :DISP :Request from ::ffff:10.192.78.96 for Program 100003, Version 4, Function 1 has xid=3848094682
2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost : ganesha.nfsd-38[::ffff:10.192.78.96] [svc_14] 687 :nfs4_Compound :NFS4 :COMPOUND: There are 1 operations, res = 0x7fa64403ab90, tag = NO TAG
2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost : ganesha.nfsd-38[::ffff:10.192.78.96] [svc_14] 798 :nfs4_Compound :NFS4 :Request 0: opcode 53 is OP_SEQUENCE
2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost : ganesha.nfsd-38[::ffff:10.192.78.96] [svc_14] 74 :nfs4_op_sequence :SESSIONS :SESSIONS: DEBUG: SEQUENCE returning status NFS4ERR_BADSESSION
2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost : ganesha.nfsd-38[::ffff:10.192.78.96] [svc_14] 977 :nfs4_Compound :NFS4 :Status of OP_SEQUENCE in position 0 = NFS4ERR_BADSESSION, op response size is 4 total response size is 40
2019-05-22T20:21:01Z : epoch 5ce5ab1d : localhost : ganesha.nfsd-38[::ffff:10.192.78.96] [svc_14] 1107 :nfs4_Compound :NFS4 :End status = NFS4ERR_BADSESSION lastindex = 0
Thanks,
Sriram
Change in ...nfs-ganesha[next]: Add symbols needed for tests
by Daniel Gryniewicz (GerritHub)
Daniel Gryniewicz has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/455687
Change subject: Add symbols needed for tests
......................................................................
Add symbols needed for tests
Change-Id: Id91ed77045807c5b097a0e0c1e12d1e25cc1b256
Signed-off-by: Daniel Gryniewicz <dang(a)redhat.com>
---
M src/MainNFSD/libganesha_nfsd.ver
1 file changed, 28 insertions(+), 0 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/87/455687/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/455687
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: Id91ed77045807c5b097a0e0c1e12d1e25cc1b256
Gerrit-Change-Number: 455687
Gerrit-PatchSet: 1
Gerrit-Owner: Daniel Gryniewicz <dang(a)redhat.com>
Gerrit-MessageType: newchange
Change in ...nfs-ganesha[next]: MDCACHE - Restart readdir if directory is invalidated
by Daniel Gryniewicz (GerritHub)
Daniel Gryniewicz has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/455686
Change subject: MDCACHE - Restart readdir if directory is invalidated
......................................................................
MDCACHE - Restart readdir if directory is invalidated
A lookup of an entry can nuke a dirent if the keys change for the entry.
If the new dirent can't be placed, then the directory will be
invalidated. In this case, we need to restart the readdir, picking up
from where we left off.
Change-Id: I73344e9cde2bdccfef6566d0e2b3a6675de00a9c
Signed-off-by: Daniel Gryniewicz <dang(a)fprintf.net>
---
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c
1 file changed, 15 insertions(+), 0 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/86/455686/1
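For readers unfamiliar with the pattern, the sketch below shows the
restart-on-invalidation idea in a minimal, self-contained form. All names
are made up for illustration and none of them come from the mdcache code;
in particular, the real change resumes from the caller's cookie, whereas
this toy version simply starts the walk over:

#include <stdbool.h>
#include <stdio.h>

struct cached_dir {
	unsigned gen;	/* bumped whenever the cached listing is invalidated */
	int nentries;
};

/* Pretend to emit one cached entry; returns false when we run out.
 * For the demo, emitting entry 1 while gen == 1 also simulates a
 * concurrent invalidation by bumping the generation counter. */
static bool emit_entry(struct cached_dir *d, int idx)
{
	if (idx >= d->nentries)
		return false;
	printf("entry %d (gen %u)\n", idx, d->gen);
	if (idx == 1 && d->gen == 1)
		d->gen++;
	return true;
}

static void list_dir(struct cached_dir *d)
{
restart:
	;
	unsigned start_gen = d->gen;

	for (int i = 0; emit_entry(d, i); i++) {
		if (d->gen != start_gen) {
			/* The listing was invalidated under us; the entries
			 * already emitted may be stale, so walk it again
			 * rather than returning a mix of old and new state. */
			printf("invalidated, restarting\n");
			goto restart;
		}
	}
}

int main(void)
{
	struct cached_dir d = { .gen = 1, .nentries = 3 };

	list_dir(&d);
	return 0;
}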
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/455686
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I73344e9cde2bdccfef6566d0e2b3a6675de00a9c
Gerrit-Change-Number: 455686
Gerrit-PatchSet: 1
Gerrit-Owner: Daniel Gryniewicz <dang(a)redhat.com>
Gerrit-MessageType: newchange
Re: Better interop for NFS/SMB file share mode/reservation
by J. Bruce Fields
On Tue, Mar 05, 2019 at 04:47:48PM -0500, J. Bruce Fields wrote:
> On Thu, Feb 14, 2019 at 04:06:52PM -0500, J. Bruce Fields wrote:
> > After this:
> >
> > https://marc.info/?l=linux-nfs&m=154966239918297&w=2
> >
> > delegations would no longer conflict with opens from the same tgid. So
> > if your threads all run in the same process and you're willing to manage
> > conflicts among your own clients, that should still allow you to do
> > multiple opens of the same file without giving up your lease/delegation.
> >
> > I'd be curious to know whether that works with Samba's design.
>
> Any idea whether that would work?
>
> (Easy? Impossible? Possible, but realistically the changes required to
> Samba would be painful enough that it'd be unlikely to get done?)
Volker reminds me off-list that he'd like to see Ganesha and Samba work
out an API in userspace first before committing to a user<->kernel API.
Jeff, wasn't there some work (on Ceph maybe?) on a userspace delegation
API? Is that close to what's needed?
In any case, my immediate goal is just to get knfsd fixed, which doesn't
really commit us to anything--knfsd only needs kernel internal
interfaces. But it'd be nice to have at least some idea if we're on the
right track, to save having to redo that work later.
--b.
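For anyone following without the background: the closest existing userspace
mechanism to a delegation is an fcntl() lease, and the conflict discussed
above is that today an open of the leased file is treated as conflicting no
matter who performs it, so a server cannot re-open a file it holds a lease
on without giving the lease up. A rough standalone sketch of taking a read
lease follows (an illustration only, not Samba, Ganesha, or the proposed
API); under the change Bruce describes, further opens from the same tgid
would no longer count as conflicts:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "testfile";
	int fd = open(path, O_RDONLY | O_CREAT, 0644);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Ask for a read lease on our own file.  This fails (EAGAIN) if
	 * the file is already open for writing, and a later conflicting
	 * open triggers a lease break that we would have to service. */
	if (fcntl(fd, F_SETLEASE, F_RDLCK) < 0) {
		perror("F_SETLEASE");
	} else {
		puts("read lease acquired");
		/* ... serve cached reads while holding the lease ... */
		fcntl(fd, F_SETLEASE, F_UNLCK);
	}

	close(fd);
	return 0;
}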
Change in ...nfs-ganesha[next]: FSAL fix race in FSAL close method
by Frank Filz (GerritHub)
Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/455570
Change subject: FSAL fix race in FSAL close method
......................................................................
FSAL fix race in FSAL close method
We should check if the global fd is open or closed while holding the
lock to prevent races.
Change-Id: Ib042425c9fae67961603f7f2df2d696b21b2011f
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
---
M src/FSAL/FSAL_CEPH/handle.c
M src/FSAL/FSAL_GLUSTER/handle.c
M src/FSAL/FSAL_VFS/file.c
3 files changed, 14 insertions(+), 13 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/70/455570/1
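As a generic illustration of the race described above (made-up names, not
the actual FSAL code): the fd state has to be tested after the lock is
taken, otherwise another thread can close the global fd between the check
and the close. A minimal, self-contained sketch of the broken and fixed
patterns:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

struct global_fd {
	pthread_rwlock_t fdlock;
	int fd;			/* -1 when closed */
};

static struct global_fd gfd = {
	.fdlock = PTHREAD_RWLOCK_INITIALIZER,
	.fd = -1,
};

/* Racy: another thread can close the fd between the test and the
 * wrlock, so we may close -1 or a descriptor that was already
 * recycled for something else. */
static void close_racy(struct global_fd *g)
{
	if (g->fd >= 0) {
		pthread_rwlock_wrlock(&g->fdlock);
		close(g->fd);
		g->fd = -1;
		pthread_rwlock_unlock(&g->fdlock);
	}
}

/* Safe: take the write lock first, then test and close while still
 * holding it. */
static void close_safe(struct global_fd *g)
{
	pthread_rwlock_wrlock(&g->fdlock);
	if (g->fd >= 0) {
		close(g->fd);
		g->fd = -1;
	}
	pthread_rwlock_unlock(&g->fdlock);
}

int main(void)
{
	close_safe(&gfd);
	(void)close_racy;	/* kept only to show the broken pattern */
	printf("fd after close_safe: %d\n", gfd.fd);
	return 0;
}

Compile with "cc -pthread -o closerace closerace.c".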
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/455570
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: Ib042425c9fae67961603f7f2df2d696b21b2011f
Gerrit-Change-Number: 455570
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz <ffilzlnx(a)mindspring.com>
Gerrit-MessageType: newchange