Fwd: Ganesha 2.5, crash /segfault while executing nlm4_Unlock
by Sachin Punadikar
---------- Forwarded message ----------
From: Sachin Punadikar <punadikar.sachin(a)gmail.com>
Date: Tue, Jun 26, 2018 at 3:57 PM
Subject: Ganesha 2.5, crash /segfault while executing nlm4_Unlock
To: nfs-ganesha-devel <nfs-ganesha-devel(a)lists.sourceforge.net>
Hi All,
Recently a customer reported a crash in Ganesha 2.5.
(gdb) where
#0 0x00007f475872900b in pthread_rwlock_wrlock () from
/lib64/libpthread.so.0
#1 0x000000000041eac9 in fsal_obj_handle_fini (obj=0x7f4378028028) at
/usr/src/debug/nfs-ganesha-2.5.3-ibm013.00-0.1.1-Source/FSAL/commonlib.c:192
#2 0x000000000053180f in mdcache_lru_clean (entry=0x7f4378027ff0) at
/usr/src/debug/nfs-ganesha-2.5.3-ibm013.00-0.1.1-Source/FSAL
/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c:589
#3 0x0000000000536587 in _mdcache_lru_unref (entry=0x7f4378027ff0,
flags=0, func=0x5a9380 <__func__.23209> "cih_remove_checked", line=406)
at /usr/src/debug/nfs-ganesha-2.5.3-ibm013.00-0.1.1-Source/FSAL
/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c:1921
#4 0x0000000000543e91 in cih_remove_checked (entry=0x7f4378027ff0) at
/usr/src/debug/nfs-ganesha-2.5.3-ibm013.00-0.1.1-Source/FSAL
/Stackable_FSALs/FSAL_MDCACHE/mdcache_hash.h:406
#5 0x0000000000544b26 in mdc_clean_entry (entry=0x7f4378027ff0) at
/usr/src/debug/nfs-ganesha-2.5.3-ibm013.00-0.1.1-Source/FSAL
/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:235
#6 0x000000000053181e in mdcache_lru_clean (entry=0x7f4378027ff0) at
/usr/src/debug/nfs-ganesha-2.5.3-ibm013.00-0.1.1-Source/FSAL
/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c:592
#7 0x0000000000536587 in _mdcache_lru_unref (entry=0x7f4378027ff0,
flags=0, func=0x5a70af <__func__.23112> "mdcache_put", line=190)
at /usr/src/debug/nfs-ganesha-2.5.3-ibm013.00-0.1.1-Source/FSAL
/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c:1921
#8 0x0000000000539666 in mdcache_put (entry=0x7f4378027ff0) at
/usr/src/debug/nfs-ganesha-2.5.3-ibm013.00-0.1.1-Source/FSAL
/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.h:190
#9 0x000000000053f062 in mdcache_put_ref (obj_hdl=0x7f4378028028) at
/usr/src/debug/nfs-ganesha-2.5.3-ibm013.00-0.1.1-Source/FSAL
/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c:1709
#10 0x000000000049bf0f in nlm4_Unlock (args=0x7f4294165830,
req=0x7f4294165028, res=0x7f43f001e0e0)
at /usr/src/debug/nfs-ganesha-2.5.3-ibm013.00-0.1.1-Source/Protocols/NLM/nlm_Unlock.c:128
#11 0x000000000044c719 in nfs_rpc_execute (reqdata=0x7f4294165000) at
/usr/src/debug/nfs-ganesha-2.5.3-ibm013.00-0.1.1-Source/MainNFSD/nfs_worker_thread.c:1290
#12 0x000000000044cf23 in worker_run (ctx=0x3c200e0) at
/usr/src/debug/nfs-ganesha-2.5.3-ibm013.00-0.1.1-Source/MainNFSD/nfs_worker_thread.c:1562
#13 0x000000000050a3e7 in fridgethr_start_routine (arg=0x3c200e0) at
/usr/src/debug/nfs-ganesha-2.5.3-ibm013.00-0.1.1-Source/support/fridgethr.c:550
#14 0x00007f4758725dc5 in start_thread () from /lib64/libpthread.so.0
#15 0x00007f4757de673d in clone () from /lib64/libc.so.6
A closer look at the backtrace indicates a cyclic flow of execution, as below:
nlm4_Unlock -> mdcache_put_ref -> mdcache_put -> _mdcache_lru_unref ->
mdcache_lru_clean -> fsal_obj_handle_fini, and then mdc_clean_entry ->
cih_remove_checked -> (flow continues on the next line)
-> _mdcache_lru_unref -> mdcache_lru_clean -> fsal_obj_handle_fini
(currently crashing here)
Do we see any code issue here? Any hints on how to root-cause (RCA) this issue?
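The double entry into mdcache_lru_clean (frames #2 and #6) suggests cleanup re-enters itself through the nested unref. Below is a minimal sketch of that general pattern and one way to guard against re-entrant cleanup with a flag; all names here are hypothetical illustrations, not the actual nfs-ganesha code:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical cache entry; fields are illustrative only. */
struct entry {
	int refcnt;
	bool cleaning;   /* set once teardown has started */
	int clean_calls; /* counts how many times cleanup actually ran */
};

static void entry_unref(struct entry *e);

/* Cleanup may drop further references (as cih_remove_checked does in
 * the trace), which can recurse back into unref/clean. The guard
 * ensures the destructive part runs at most once per entry. */
static void entry_clean(struct entry *e)
{
	if (e->cleaning)
		return;          /* already tearing down; skip double fini */
	e->cleaning = true;
	e->clean_calls++;
	entry_unref(e);          /* simulates the nested unref in the trace */
}

static void entry_unref(struct entry *e)
{
	if (--e->refcnt <= 0)
		entry_clean(e);
}
```

Without the `cleaning` guard, the nested unref would run the destructive teardown a second time, which would be consistent with fsal_obj_handle_fini crashing on a rwlock that was already destroyed.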
Thanks in advance.
--
with regards,
Sachin Punadikar
Pnfs flex layout: synthetic uid/gid
by Supriti Singh
Hello,
I am looking at how to implement synthetic uids/gids for the flex layout.
From the RFC section "Implementation Notes for Synthetic uids/gids" [0]:
"When the metadata server had a request to access a file, a SETATTR would be sent to the storage device to set the
owner and group of the data file. The user and group might be selected in a round robin fashion from the range of
available ids. Those ids would be sent back as ffds_user and ffds_group to the client. And it would present them as the
RPC credentials to the storage device. When the client was done accessing the file and the metadata server knew that no
other client was accessing the file, it could reset the owner and group to restrict access to the data file."
A few questions regarding the implementation:
1. To implement this in nfs-ganesha, we would have to add a way to generate uids/gids and store a mapping between the
uids/gids and the corresponding data file. Is there already such a structure in nfs-ganesha that can be reused?
2. As I understand it, once the metadata server generates a uid/gid, it needs to send a SETATTR to the storage device
to set the owner and group of the data file. But I am not sure how this synthetic uid/gid maps to the real uid/gid that
a data file might have; the same data file in cephfs can be opened through clients other than NFS as well.
[0] https://tools.ietf.org/html/draft-ietf-nfsv4-flex-files-19#page-3
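For question 1, the round-robin selection described in the RFC text can be sketched as a simple allocator. The id range, names, and global cursor below are assumptions for illustration, not existing nfs-ganesha structures; a real implementation would also record the uid/gid-to-data-file mapping and issue the SETATTR:

```c
#include <assert.h>
#include <stdint.h>

#define SYN_UID_BASE  60000u  /* assumed start of a reserved id range */
#define SYN_UID_COUNT 1000u   /* assumed size of the range */

static uint32_t next_slot;    /* round-robin cursor, starts at 0 */

/* Pick the next synthetic uid in round-robin order. The metadata
 * server would then SETATTR the data file to this owner and return
 * the id to the client as ffds_user. */
static uint32_t syn_uid_alloc(void)
{
	uint32_t uid = SYN_UID_BASE + (next_slot % SYN_UID_COUNT);

	next_slot++;
	return uid;
}
```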
------
Supriti Singh SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
Change in ffilz/nfs-ganesha[next]: gtest/fsal_api: Changes in read2, write2 and close.
by GerritHub
From Girjesh Rajoria <grajoria(a)redhat.com>:
Girjesh Rajoria has uploaded this change for review. ( https://review.gerrithub.io/417262
Change subject: gtest/fsal_api: Changes in read2, write2 and close.
......................................................................
gtest/fsal_api: Changes in read2, write2 and close.
Add bypass cases to the read2 and write2 gtests and change the test name
from BIG to LOOP in close.
Change-Id: Icf8a1fd256bb397fbd6635c27e625d465a6cbcc3
Signed-off-by: grajoria <grajoria(a)redhat.com>
---
M src/gtest/fsal_api/test_close_latency.cc
M src/gtest/fsal_api/test_read2_latency.cc
M src/gtest/fsal_api/test_write2_latency.cc
3 files changed, 96 insertions(+), 1 deletion(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/62/417262/1
--
To view, visit https://review.gerrithub.io/417262
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: Icf8a1fd256bb397fbd6635c27e625d465a6cbcc3
Gerrit-Change-Number: 417262
Gerrit-PatchSet: 1
Gerrit-Owner: Girjesh Rajoria <grajoria(a)redhat.com>
SAL does not call FSAL for conflicting lock op
by ntvietvn@gmail.com
Hello,
After reading the function state_status_t state_lock, it seems that if Ganesha receives a non-blocking lock that overlaps an existing lock, the SAL handles it itself by returning STATE_LOCK_CONFLICT. I also ran a test, and the result is consistent with this. Is this the normal behavior? And if it is, the FSAL is never called, so how do you suggest testing the upcall function of the FSAL when it grants a waiting lock?
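The conflict decision described above reduces to a byte-range overlap test. Here is a minimal sketch with hypothetical names (not the SAL's actual lock structures), using the NLM/NFSv4 convention that length 0 means "to end of file":

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical byte-range lock; illustrative only. */
struct lock_range {
	uint64_t start;
	uint64_t length; /* 0 means "to end of file" */
};

/* Two ranges conflict when their byte spans overlap. A zero length
 * extends the range to EOF (modeled here as UINT64_MAX). */
static bool ranges_overlap(const struct lock_range *a,
			   const struct lock_range *b)
{
	uint64_t a_end = a->length ? a->start + a->length : UINT64_MAX;
	uint64_t b_end = b->length ? b->start + b->length : UINT64_MAX;

	return a->start < b_end && b->start < a_end;
}
```

When a non-blocking request overlaps a held lock of an incompatible type, the SAL can answer STATE_LOCK_CONFLICT from this check alone, which is why the FSAL never sees the request in that path.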
Thank you
Viet
Change in ffilz/nfs-ganesha[next]: exports: release export state after calling ->unexport op
by GerritHub
From Jeff Layton <jlayton(a)redhat.com>:
Jeff Layton has uploaded this change for review. ( https://review.gerrithub.io/416800
Change subject: exports: release export state after calling ->unexport op
......................................................................
exports: release export state after calling ->unexport op
Nothing defines the "unexport" operation today so this change should not
be noticeable at all. In the future though, we want to allow FSAL_CEPH
to prepare an export for clean shutdown such that we don't tear down
persistent state when tearing down objects.
We can use the unexport operation for that, but that means that we need
to do it before calling state_release_export.
Change-Id: I8edecc24d7e70dcb64eac9f5284d141a2b040bc9
Signed-off-by: Jeff Layton <jlayton(a)redhat.com>
---
M src/support/exports.c
1 file changed, 3 insertions(+), 3 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/00/416800/1
--
To view, visit https://review.gerrithub.io/416800
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: I8edecc24d7e70dcb64eac9f5284d141a2b040bc9
Gerrit-Change-Number: 416800
Gerrit-PatchSet: 1
Gerrit-Owner: Jeff Layton <jlayton(a)redhat.com>
Announce Push of V2.7-dev.18
by Frank Filz
Branch next
Tag:V2.7-dev.18
Release Highlights
* MDCACHE various debugging of dirent cache
* MDCACHE: For dirent chunking, prevent prev_chunk from being reaped
* MDCACHE: Replace chunk accounting with dirent accounting
* config_samples: remove obsolete config params from sample ceph.conf
* misc rados grace updates
* SAL - Fix coverity error: limit on cleanup was broken
* GTest - fix name too short error
* MEM - Limit dirent readahead to 1 chunk
* FSAL_VFS : only_one_user mode
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
Contents:
39b5229 Frank S. Filz V2.7-dev.18
7ab8716 Patrice LUCAS FSAL_VFS : only_one_user mode
001940e Daniel Gryniewicz MEM - Limit dirent readahead to 1 chunk
4238c53 Daniel Gryniewicz GTest - fix name too short error
1a20c41 Daniel Gryniewicz SAL - Fix coverity error: limit on cleanup was
broken
0f0feee Jeff Layton specfile: move ganesha-rados-grace into a separate
package
bfaba71 Jeff Layton ganesha-rados-grace: rename rados_grace_tool to
ganesha-rados-grace
b86321e Jeff Layton config_samples: remove obsolete config params from
sample ceph.conf
e9d8dea Frank S. Filz MDCACHE: make all dirent related functions use
LogDebugAlt/LogFullDebugAlt
c7c336e Frank S. Filz MDCACHE: Replace chunk accounting with dirent
accounting
831453c Frank S. Filz MDCACHE: For dirent chunking, prevent prev_chunk from
being reaped
de35287 Frank S. Filz MDCACHE: Add some next_ck debugging for dirent chunks
Change in ffilz/nfs-ganesha[next]: Add client description in dbus ShowExports command
by GerritHub
From Girjesh Rajoria <grajoria(a)redhat.com>:
Girjesh Rajoria has uploaded this change for review. ( https://review.gerrithub.io/416572
Change subject: Add client description in dbus ShowExports command
......................................................................
Add client description in dbus ShowExports command
Change-Id: Ia4aba65a05b37bf31e9809636b78eaca520783ea
Signed-off-by: grajoria <grajoria(a)redhat.com>
---
M src/support/export_mgr.c
1 file changed, 79 insertions(+), 2 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/72/416572/1
--
To view, visit https://review.gerrithub.io/416572
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ia4aba65a05b37bf31e9809636b78eaca520783ea
Gerrit-Change-Number: 416572
Gerrit-PatchSet: 1
Gerrit-Owner: Girjesh Rajoria <grajoria(a)redhat.com>
Change in ffilz/nfs-ganesha[next]: posix_acls: fix use after freed of acl mask
by GerritHub
From Kinglong Mee <kinglongmee(a)gmail.com>:
Kinglong Mee has uploaded this change for review. ( https://review.gerrithub.io/416485
Change subject: posix_acls: fix use after freed of acl mask
......................................................................
posix_acls: fix use after freed of acl mask
valgrind shows an invalid read of acl mask.
Change-Id: If17f346b4229b74baadc3f1761630848f3068208
Signed-off-by: Kinglong Mee <kinglongmee(a)gmail.com>
---
M src/FSAL/posix_acls.c
1 file changed, 4 insertions(+), 1 deletion(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/85/416485/1
--
To view, visit https://review.gerrithub.io/416485
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: If17f346b4229b74baadc3f1761630848f3068208
Gerrit-Change-Number: 416485
Gerrit-PatchSet: 1
Gerrit-Owner: Kinglong Mee <kinglongmee(a)gmail.com>
Re: cb_program or cb_callback_ident always the same
by Tuan Viet Nguyen
Hi Daniel,
Thank you for your reply. I've also tried with the client_id, but it also
has the same value for two different processes. So if the client_id and the
opaque always have the same value (for two different processes), how can we
distinguish the clients?
I've tried with this field
so_owner.so_nfs4_owner.so_clientid
Thank you.
Viet
On Mon, Apr 30, 2018 at 2:38 PM, Daniel Gryniewicz <dang(a)redhat.com> wrote:
> This list has been deprecated. Please subscribe to the new devel list at
> lists.nfs-ganesha.org.
> Hi.
>
> The client program ID in a lock owner is an opaque. That is, it's not
> defined in the spec, and the server can't use it for anything other than a
> byte string. The concatenation of the client-ID and the opaque part of the
> lock owner is unique, but the opaque part of the lock owner itself is not.
>
> That value only has meaning to the client.
>
> Daniel
>
> On 04/30/2018 08:18 AM, Tuan Viet Nguyen wrote:
>
>>
>>
>>
>> Hello,
>>
>> While trying to get more information related to the lock owner, I'm
>> trying to get the client program id and realize that it always takes the
>> same value (easy to do with a test program forking another process, parent
>> lock a file range then the child locks another range). Is it something
>> similar to the client process id that is stored in the client record
>> structure? or any other suggestions?
>>
>> Thank you
>>
>>
>> ------------------------------------------------------------
>> ------------------
>> Check out the vibrant tech community on one of the world's most
>> engaging tech sites, Slashdot.org! http://sdm.link/slashdot
>>
>>
>>
>> _______________________________________________
>> Nfs-ganesha-devel mailing list
>> Nfs-ganesha-devel(a)lists.sourceforge.net
>> https://lists.sourceforge.net/lists/listinfo/nfs-ganesha-devel
>>
>>
>
>
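Daniel's explanation that only the concatenation of the client-ID and the owner opaque is unique can be sketched as a composite-key comparison; the structure and names below are hypothetical, not Ganesha's actual state types:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical composite key: the clientid plus the owner's opaque
 * bytes. Only the pair is unique; the opaque alone may collide
 * across clients (or even across processes on one client). */
struct owner_key {
	uint64_t clientid;
	uint8_t opaque[64];
	size_t opaque_len;
};

/* Total order over (clientid, opaque) so the pair can be used as a
 * lookup key; returns 0 only when both components match. */
static int owner_key_cmp(const struct owner_key *a,
			 const struct owner_key *b)
{
	if (a->clientid != b->clientid)
		return a->clientid < b->clientid ? -1 : 1;
	if (a->opaque_len != b->opaque_len)
		return a->opaque_len < b->opaque_len ? -1 : 1;
	return memcmp(a->opaque, b->opaque, a->opaque_len);
}
```

This is why identical opaque owner strings from two processes are still distinguishable as long as the clientid component (e.g. so_owner.so_nfs4_owner.so_clientid) differs.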