lseek gets bad offset from nfs client with ganesha/gluster which supports SEEK
by Kinglong Mee
The latest ganesha/gluster supports the NFSv4.2 SEEK operation as specified in
https://tools.ietf.org/html/draft-ietf-nfsv4-minorversion2-41#section-15.11
From the given sa_offset, find the next data_content4 of type sa_what
in the file. If the server can not find a corresponding sa_what,
then the status will still be NFS4_OK, but sr_eof would be TRUE. If
the server can find the sa_what, then the sr_offset is the start of
that content. If the sa_offset is beyond the end of the file, then
SEEK MUST return NFS4ERR_NXIO.
For a file with the following layout:
Part 1: HOLE 0x0000000000000000 ---> 0x0000000000600000
Part 2: DATA 0x0000000000600000 ---> 0x0000000000700000
Part 3: HOLE 0x0000000000700000 ---> 0x0000000001000000
SEEK(0x700000, SEEK_DATA) gets result (sr_eof:1, sr_offset:0x70000) from ganesha/gluster;
SEEK(0x700000, SEEK_HOLE) gets result (sr_eof:0, sr_offset:0x70000) from ganesha/gluster.
If an application depends on the lseek result when searching for data, it may enter an infinite loop:
while (1) {
	next_pos = lseek(fd, cur_pos, seek_type);
	if (seek_type == SEEK_DATA) {
		seek_type = SEEK_HOLE;
	} else {
		seek_type = SEEK_DATA;
	}
	if (next_pos == -1)
		return;
	cur_pos = next_pos;
}
The lseek syscall always returns 0x70000 from the NFS client in those two cases,
but if the underlying filesystem is ext4/f2fs, or the NFS server is knfsd,
lseek(0x700000, SEEK_DATA) fails with ENXIO.
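For comparison, the local-filesystem behaviour can be probed like this; a minimal sketch (not from the original mail; fd is assumed to refer to the sparse file laid out above):

#define _GNU_SOURCE
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

/* Probe for the next data extent at or after pos. */
static void probe_data(int fd, off_t pos)
{
	off_t next = lseek(fd, pos, SEEK_DATA);

	if (next == (off_t)-1) {
		if (errno == ENXIO)
			printf("no data at or after 0x%llx\n", (unsigned long long)pos);
		else
			perror("lseek");
		return;
	}
	printf("next data starts at 0x%llx\n", (unsigned long long)next);
}

With ext4/f2fs or knfsd, probe_data(fd, 0x700000) takes the ENXIO branch; with the ganesha/gluster behaviour described above, the same call keeps returning 0x70000 instead.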
I would like to know:
should I fix ganesha/gluster to return ENXIO for the first case, as knfsd does?
or should I fix the NFS client to return ENXIO for the first case?
thanks,
Kinglong Mee
4 years, 1 month
NLM reclaim without Ganesha losing the locks
by Suhrud Patankar
Hello All,
For one case in our cluster, I need to send sm-notify to the client
even though Ganesha has not restarted but is in its grace period.
The client then sends reclaims to a server that has not actually lost the locks.
Can Ganesha handle NLM lock reclaims without having actually lost the locks first?
Thanks & Regards,
Suhrud
5 years, 10 months
Announce Push of V2.8-dev.10
by Frank Filz
Branch next
Tag: V2.8-dev.10
Release Highlights
* Fix to set ATTR_RDATTR_ERR correctly as valid_mask in open2.
* cmake: use -fno-strict-aliasing for Linux build
* rpm/selinux: fix %pre install and %postun uninstall of selinux policy
* ganesha_status.py - fix missing parens
* NFSv3 and NFSv4 detailed statistics via ganesha_stats
* exports: fix variable type in foreach_gsh_export
* exports: don't allow dbus remove to leave subexports disconnected in
pseudoroot
* exports: add generation counter to config_root and helpers to fetch it
* exports: copy the config tree generation to the export when adding or
updating
* exports: prune off old exports after reloading config
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
Contents:
60775c4 Frank S. Filz V2.8-dev.10
46bb5e5 Jeff Layton exports: prune off old exports after reloading config
c328125 Jeff Layton exports: copy the config tree generation to the export
when adding or updating
eedcd2c Jeff Layton exports: add generation counter to config_root and
helpers to fetch it
f689617 Jeff Layton exports: don't allow dbus remove to leave subexports
disconnected in pseudoroot
8c5cee1 Jeff Layton exports: fix variable type in foreach_gsh_export
4914b4d Sachin Punadikar NFSv4 detailed statistics via ganesha_stats
2f25043 Sachin Punadikar NFSv3 detailed statistics via ganesha_stats
5294c1a Daniel Gryniewicz ganesha_status.py - fix missing parens
f775a43 Kaleb S. KEITHLEY rpm/selinux: fix %pre install and %postun
uninstall of selinux policy
b0b226f Thomas Serlin cmake: use -fno-strict-aliasing for Linux build
ba667d4 Gaurav B. Gangalwar Fix to set ATTR_RDATTR_ERR correctly as
valid_mask in open2.
5 years, 10 months
FW: Change in ffilz/nfs-ganesha[next]: WIP - TEMP NTIRPC PULLUP
by Frank Filz
Patrice, Dominique
Could you investigate what the failure is here? I can't make sense of it; it may need debugging to work out what value the segfault is actually happening on (since a NULL req pointer should have failed earlier in the function).
Thanks
Frank
From: CEA-HPC (GerritHub) [mailto:gerrit@gerrithub.io]
Sent: Tuesday, December 18, 2018 3:28 PM
To: Frank Filz <ffilzlnx(a)mindspring.com>
Cc: Gluster Community Jenkins <gerrithub(a)gluster.org>
Subject: Change in ffilz/nfs-ganesha[next]: WIP - TEMP NTIRPC PULLUP
Build OK - NFSv4.1 proxy mount failed
mount proxy:
server proxy:
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
18/12/2018 21:16:18 : epoch 5c1963a2 : vm1.pcocc.c-inti.ccc.ocre.cea.fr : ganesha.nfsd-5252[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version V2.7-rc4-140-gae1a90bb8, built at Dec 18 2018 21:16:12 on vm1.pcocc.c-inti.ccc.ocre.cea.fr
18/12/2018 21:16:18 : epoch 5c1963a2 : vm1.pcocc.c-inti.ccc.ocre.cea.fr : ganesha.nfsd-5252[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
18/12/2018 21:16:18 : epoch 5c1963a2 : vm1.pcocc.c-inti.ccc.ocre.cea.fr : ganesha.nfsd-5252[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
[New Thread 0x7ffff4ecc700 (LWP 5256)]
[New Thread 0x7ffff46cb700 (LWP 5257)]
[New Thread 0x7ffff3e8a700 (LWP 5258)]
18/12/2018 21:16:18 : epoch 5c1963a2 : vm1.pcocc.c-inti.ccc.ocre.cea.fr : ganesha.nfsd-5252[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
18/12/2018 21:16:18 : epoch 5c1963a2 : vm1.pcocc.c-inti.ccc.ocre.cea.fr : ganesha.nfsd-5252[main] nfs_start_grace :STATE :EVENT :NFS Server skipping GRACE (Graceless is true)
18/12/2018 21:16:18 : epoch 5c1963a2 : vm1.pcocc.c-inti.ccc.ocre.cea.fr : ganesha.nfsd-5252[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
18/12/2018 21:16:18 : epoch 5c1963a2 : vm1.pcocc.c-inti.ccc.ocre.cea.fr : ganesha.nfsd-5252[main] lower_my_caps :NFS STARTUP :EVENT :currenty set capabilities are: = cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap,cap_mac_override,cap_mac_admin,cap_syslog,35,36+ep
18/12/2018 21:16:18 : epoch 5c1963a2 : vm1.pcocc.c-inti.ccc.ocre.cea.fr : ganesha.nfsd-5252[main] gsh_dbus_pkginit :DBUS :CRIT :server bus reg failed (org.ganesha.nfsd, Connection ":1.27" is not allowed to own the service "org.ganesha.nfsd" due to security policies in the configuration file)
18/12/2018 21:16:18 : epoch 5c1963a2 : vm1.pcocc.c-inti.ccc.ocre.cea.fr : ganesha.nfsd-5252[pxy_clientid_renewer] pxy_setclientid :FSAL :EVENT :Negotiating a new ClientId with the remote server
[New Thread 0x7ffff0e3b700 (LWP 5259)]
[New Thread 0x7ffff063a700 (LWP 5260)]
[New Thread 0x7fffeceff700 (LWP 5261)]
[New Thread 0x7fffecdfe700 (LWP 5262)]
[New Thread 0x7fffeccfd700 (LWP 5263)]
[New Thread 0x7fffecbfc700 (LWP 5265)]
[New Thread 0x7fffecafb700 (LWP 5266)]
[New Thread 0x7fffec8f9700 (LWP 5269)]
[New Thread 0x7fffec9fa700 (LWP 5268)]
[New Thread 0x7fffec6f7700 (LWP 5271)]
[New Thread 0x7fffec7f8700 (LWP 5270)]
[New Thread 0x7fffec4f5700 (LWP 5273)]
[New Thread 0x7fffec5f6700 (LWP 5272)]
[New Thread 0x7fffec2f3700 (LWP 5275)]
[New Thread 0x7fffec3f4700 (LWP 5274)]
[New Thread 0x7fffec1f2700 (LWP 5276)]
18/12/2018 21:16:18 : epoch 5c1963a2 : vm1.pcocc.c-inti.ccc.ocre.cea.fr : ganesha.nfsd-5252[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
18/12/2018 21:16:18 : epoch 5c1963a2 : vm1.pcocc.c-inti.ccc.ocre.cea.fr : ganesha.nfsd-5252[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
18/12/2018 21:16:18 : epoch 5c1963a2 : vm1.pcocc.c-inti.ccc.ocre.cea.fr : ganesha.nfsd-5252[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
18/12/2018 21:16:18 : epoch 5c1963a2 : vm1.pcocc.c-inti.ccc.ocre.cea.fr : ganesha.nfsd-5252[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
18/12/2018 21:16:18 : epoch 5c1963a2 : vm1.pcocc.c-inti.ccc.ocre.cea.fr : ganesha.nfsd-5252[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
[gdb reported 212 more new threads, LWP 5277 through LWP 5488]
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffec9fa700 (LWP 5268)]
0x00007ffff7bb760c in svc_request (xprt=0x7fffe4000be0, xdrs=0x7fffd4000f00) at /opt/nfs-ganesha/src/libntirpc/src/svc_rqst.c:806
806 XDR_DESTROY(req->rq_xdrs);
#0 0x00007ffff7bb760c in svc_request (xprt=0x7fffe4000be0, xdrs=0x7fffd4000f00) at /opt/nfs-ganesha/src/libntirpc/src/svc_rqst.c:806
#1 0x00007ffff7bbb022 in svc_vc_recv (xprt=0x7fffe4000be0) at /opt/nfs-ganesha/src/libntirpc/src/svc_vc.c:798
#2 0x00007ffff7bb7531 in svc_rqst_xprt_task (wpe=0x7fffe4000e00) at /opt/nfs-ganesha/src/libntirpc/src/svc_rqst.c:772
#3 0x00007ffff7bb7e56 in svc_rqst_epoll_loop (wpe=0x826520) at /opt/nfs-ganesha/src/libntirpc/src/svc_rqst.c:1085
#4 0x00007ffff7bc08a5 in work_pool_thread (arg=0x7fffdc000be0) at /opt/nfs-ganesha/src/libntirpc/src/work_pool.c:181
#5 0x00007ffff6372e25 in start_thread () from /lib64/libpthread.so.0
#6 0x00007ffff5c7f34d in clone () from /lib64/libc.so.6
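If it helps narrow things down, a hypothetical instrumentation sketch around the faulting line; req, rq_xdrs, xprt and XDR_DESTROY are taken from the backtrace above, while the surrounding control flow in svc_request() is assumed rather than copied from the ntirpc source:

	/* Temporary debug guard before svc_rqst.c:806 (sketch only;
	 * would also need <stdio.h> and <stdlib.h>). */
	if (req == NULL) {
		fprintf(stderr, "svc_request: NULL req (xprt=%p)\n", (void *)xprt);
		abort();	/* keep the core at the point of corruption */
	}
	if (req->rq_xdrs == NULL) {
		fprintf(stderr, "svc_request: NULL rq_xdrs (req=%p)\n", (void *)req);
		abort();
	}
	XDR_DESTROY(req->rq_xdrs);

If this guard never fires but the crash persists, the fault is more likely a freed or garbage rq_xdrs pointer, which the NULL checks would not catch.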
Patch set 1:Verified -1
View Change <https://review.gerrithub.io/437471>
To view, visit change 437471 <https://review.gerrithub.io/437471>. To unsubscribe, or for help writing mail filters, visit settings <https://review.gerrithub.io/settings>.
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: comment
Gerrit-Change-Id: I0d25f08ad1487d1175d46e9759075af26acf0942
Gerrit-Change-Number: 437471
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz <ffilzlnx(a)mindspring.com>
Gerrit-Reviewer: CEA-HPC <gerrithub-hpc(a)cea.fr>
Gerrit-Reviewer: Gandi Buildbot <ganesha(a)gandi.net>
Gerrit-CC: Gluster Community Jenkins <gerrithub(a)gluster.org>
Gerrit-Comment-Date: Tue, 18 Dec 2018 23:27:42 +0000
Gerrit-HasComments: No
Gerrit-HasLabels: Yes
5 years, 10 months
Change in ffilz/nfs-ganesha[next]: Return error count after failed proc_block
by Sergey Lysanov (GerritHub)
Sergey Lysanov has uploaded this change for review. ( https://review.gerrithub.io/438072 )
Change subject: Return error count after failed proc_block
......................................................................
Return error count after failed proc_block
It helps to avoid committing a DS block with an uninitialized
FSAL inside.
Example of a ganesha config that will lead to an assert in the
pnfs_ds_insert function:
DS {
	FSAL {
		Name = NOT_EXISTING_FSAL;
	}
}
Change-Id: I4124b0542ad1518867169f3e3abf5ad2b71416bb
Signed-off-by: Sergey Lysanov <slysanov(a)virtuozzo.com>
---
M src/config_parsing/config_parsing.c
1 file changed, 3 insertions(+), 1 deletion(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/72/438072/1
--
To view, visit https://review.gerrithub.io/438072
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: I4124b0542ad1518867169f3e3abf5ad2b71416bb
Gerrit-Change-Number: 438072
Gerrit-PatchSet: 1
Gerrit-Owner: Sergey Lysanov <slysanov(a)virtuozzo.com>
5 years, 10 months
Invalid attrs_out->valid_mask in vfs_open2_by_handle
by gaurav gangalwar
In vfs_open2 -> vfs_open2_by_handle, for some cases like an UNCHECKED create, we
want to set attrs_out->valid_mask to ATTR_RDATTR_ERR, so that the caller does not
rely on the populated attrs_out and instead does a getattrs to fetch valid attrs.
} else if (attrs_out && attrs_out->request_mask & ATTR_RDATTR_ERR) {
attrs_out->valid_mask &= ATTR_RDATTR_ERR;
}
Doing "&=" does not set ATTR_RDATTR_ERR, because valid_mask has been set to 0 by the
caller, so the caller will always rely on attrs_out.
We should use "=" here:
} else if (attrs_out && attrs_out->request_mask & ATTR_RDATTR_ERR) {
attrs_out->valid_mask = ATTR_RDATTR_ERR;
}
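The difference boils down to bitwise arithmetic once the caller has pre-zeroed valid_mask; a standalone illustration (the flag value below is a placeholder, not Ganesha's real definition):

#include <assert.h>
#include <stdint.h>

#define ATTR_RDATTR_ERR (1U << 0)	/* placeholder bit; the real value lives in Ganesha's headers */

int main(void)
{
	uint32_t valid_mask = 0;		/* caller pre-zeroes the mask */

	valid_mask &= ATTR_RDATTR_ERR;		/* 0 & flag == 0, so the flag is never set */
	assert(valid_mask == 0);

	valid_mask = ATTR_RDATTR_ERR;		/* plain assignment does set the flag */
	assert(valid_mask == ATTR_RDATTR_ERR);
	return 0;
}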
This could happen if there is a create with createmode < FSAL_EXCLUSIVE and the
file has already been created with an mdcache-populated handle. We could end up
corrupting the mdcache attrs in such a case.
Regards,
Gaurav
5 years, 10 months
Change in ffilz/nfs-ganesha[next]: Fix to set ATTR_RDATTR_ERR correctly as valid_mask in vfs_open2_by_ha...
by Gaurav (GerritHub)
Gaurav has uploaded this change for review. ( https://review.gerrithub.io/437927 )
Change subject: Fix to set ATTR_RDATTR_ERR correctly as valid_mask in vfs_open2_by_handle.
......................................................................
Fix to set ATTR_RDATTR_ERR correctly as valid_mask in vfs_open2_by_handle.
Change-Id: I30da8c60e861b44e6bc1c256f09c9a2dd0cbc999
Signed-off-by: Gaurav B. Gangalwar <gaurav.gangalwar(a)gmail.com>
---
M src/FSAL/FSAL_VFS/file.c
1 file changed, 1 insertion(+), 1 deletion(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/27/437927/1
--
To view, visit https://review.gerrithub.io/437927
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: I30da8c60e861b44e6bc1c256f09c9a2dd0cbc999
Gerrit-Change-Number: 437927
Gerrit-PatchSet: 1
Gerrit-Owner: Gaurav <gaurav.gangalwar(a)gmail.com>
5 years, 10 months
Re: [NFS-Ganesha-Devel] Re: Depleting fds security issue
by QR
ganesha 2.6.3 doesn't contain the fix, so this issue doesn't exist in ganesha 2.6.3.
Thanks.
--------------------------------
----- Original Message -----
From: gaurav gangalwar <gaurav.gangalwar(a)gmail.com>
To: zhbingyin(a)sina.com
Cc: devel <devel(a)lists.nfs-ganesha.org>, dang <dang(a)redhat.com>
Subject: [NFS-Ganesha-Devel] Re: [NFS-Ganesha-Devel] Re: Depleting fds security issue
Date: Dec 20, 2018, 16:06
On Thu, Dec 20, 2018 at 4:04 AM QR <zhbingyin(a)sina.com> wrote:
Could you tell me how to reproduce this? Thanks in advance.
You need to take this tirpc fix first: https://github.com/nfs-ganesha/ntirpc/pull/155
Then run your script.
I ran the below script for more than 12 hours, but failed to reproduce this with ganesha 2.6.3.
-------------------------------------------------------------------------------------------------------------------------
set -e
i=1
while :
do
    echo $(date) $i
    mount -t nfs -o vers=4,minorversion=0,tcp 10.226.138.235:/ /mnt/235/
    umount /mnt/235/
    let "i++"
done
--------------------------------
----- Original Message -----
From: Daniel Gryniewicz <dang(a)redhat.com>
To: devel(a)lists.nfs-ganesha.org
Subject: [NFS-Ganesha-Devel] Re: Depleting fds security issue
Date: Dec 19, 2018, 22:11
We've been able to reproduce and debug this, and have a fix here:
https://github.com/nfs-ganesha/ntirpc/pull/161
Daniel
On 12/17/2018 06:19 AM, gaurav gangalwar wrote:
>
> On Fri, Dec 14, 2018 at 9:18 PM William Allen Simpson
> <william.allen.simpson(a)gmail.com
> <mailto:william.allen.simpson@gmail.com>> wrote:
>
> On 12/11/18 12:30 AM, gaurav gangalwar wrote:
> > As we can see in the log highlighted we are still left with one
> ref even after SVC_DESTROY in UMNT and all SVC_RELEASE.
> >
> [...]
>
> > I tried with SVC_RELEASE instead of SVC_DESTROY in same patch, it
> worked at least for connection on which UMNT came, we are releasing
> the xprt for it.
> >
> Then it would seem that there already is another SVC_DESTROY
> somewhere else. The whole
> point of SVC_DESTROY is once-only. So your SVC_DESTROY didn't do
> anything.
>
> So do we need any fix in Ganesha code for this issue?
> We do SVC_DESTROY in svc_vc_recv if there is connection close from
> client, so I don't think we need to do it on UMNT in Ganesha.
>
>
> [...]
>
> > As we can see in the logs refcnt is going to zero for the
> connection on which UMNT came.
> > But there are logs for other connections also, which we are not
> cleaning up as there is no UMNT on them, but we are polling on them
> and they are getting closed,
> > I just have single client and running same script, looks like
> client is opening multiple connections and closing them on UMNT.
> >
> What client is running multiple connections on the same fd?
>
> There are no multiple connections on same fd, there are multiple
> fds/connections from client which are getting closed without UMNT call
> on them so they will not get cleaned from server. I pasted the logs for it.
> Point is doing SVC_RELEASE in UMNT is not fixing the issue, we are still
> left with uncleaned connections.
> As per discussion here
> https://github.com/nfs-ganesha/ntirpc/pull/160
> Looks like we need fix in tirpc only.
> Regards,
> Gaurav
>
>
>
>
>
>
> _______________________________________________
> Devel mailing list -- devel(a)lists.nfs-ganesha.org
> To unsubscribe send an email to devel-leave(a)lists.nfs-ganesha.org
>
_______________________________________________
Devel mailing list -- devel(a)lists.nfs-ganesha.org
To unsubscribe send an email to devel-leave(a)lists.nfs-ganesha.org
5 years, 10 months
Re: Depleting fds security issue
by QR
Could you tell me how to reproduce this? Thanks in advance.
I ran the below script for more than 12 hours, but failed to reproduce this with ganesha 2.6.3.
-------------------------------------------------------------------------------------------------------------------------
set -e
i=1
while :
do
    echo $(date) $i
    mount -t nfs -o vers=4,minorversion=0,tcp 10.226.138.235:/ /mnt/235/
    umount /mnt/235/
    let "i++"
done
--------------------------------
----- Original Message -----
From: Daniel Gryniewicz <dang(a)redhat.com>
To: devel(a)lists.nfs-ganesha.org
Subject: [NFS-Ganesha-Devel] Re: Depleting fds security issue
Date: Dec 19, 2018, 22:11
We've been able to reproduce and debug this, and have a fix here:
https://github.com/nfs-ganesha/ntirpc/pull/161
Daniel
On 12/17/2018 06:19 AM, gaurav gangalwar wrote:
>
> On Fri, Dec 14, 2018 at 9:18 PM William Allen Simpson
> <william.allen.simpson(a)gmail.com
> <mailto:william.allen.simpson@gmail.com>> wrote:
>
> On 12/11/18 12:30 AM, gaurav gangalwar wrote:
> > As we can see in the log highlighted we are still left with one
> ref even after SVC_DESTROY in UMNT and all SVC_RELEASE.
> >
> [...]
>
> > I tried with SVC_RELEASE instead of SVC_DESTROY in same patch, it
> worked at least for connection on which UMNT came, we are releasing
> the xprt for it.
> >
> Then it would seem that there already is another SVC_DESTROY
> somewhere else. The whole
> point of SVC_DESTROY is once-only. So your SVC_DESTROY didn't do
> anything.
>
> So do we need any fix in Ganesha code for this issue?
> We do SVC_DESTROY in svc_vc_recv if there is connection close from
> client, so I don't think we need to do it on UMNT in Ganesha.
>
>
> [...]
>
> > As we can see in the logs refcnt is going to zero for the
> connection on which UMNT came.
> > But there are logs for other connections also, which we are not
> cleaning up as there is no UMNT on them, but we are polling on them
> and they are getting closed,
> > I just have single client and running same script, looks like
> client is opening multiple connections and closing them on UMNT.
> >
> What client is running multiple connections on the same fd?
>
> There are no multiple connections on same fd, there are multiple
> fds/connections from client which are getting closed without UMNT call
> on them so they will not get cleaned from server. I pasted the logs for it.
> Point is doing SVC_RELEASE in UMNT is not fixing the issue, we are still
> left with uncleaned connections.
> As per discussion here
> https://github.com/nfs-ganesha/ntirpc/pull/160
> Looks like we need fix in tirpc only.
> Regards,
> Gaurav
>
>
>
>
>
>
> _______________________________________________
> Devel mailing list -- devel(a)lists.nfs-ganesha.org
> To unsubscribe send an email to devel-leave(a)lists.nfs-ganesha.org
>
_______________________________________________
Devel mailing list -- devel(a)lists.nfs-ganesha.org
To unsubscribe send an email to devel-leave(a)lists.nfs-ganesha.org
5 years, 10 months