Re: Crash seen in shutdown path
by Frank Filz
(resending response and adding devel mailing list - maybe someone else has
some ideas)
There is this fix:
660f330243c57c0b2fea11c87507b3e1991bb300 FSAL_MDCACHE: avoid assertion due
to wrong check
I'm pretty sure since V2.5 we've also fixed several places where op_ctx was
not set up for things in the shutdown path, but I can't find the patches.
Frank
From: Trishali Nayar [mailto:ntrishal@in.ibm.com]
Sent: Monday, January 14, 2019 6:04 AM
To: Frank Filz <ffilzlnx(a)mindspring.com>
Cc: 'Malahal R Naineni' <mnaineni(a)in.ibm.com>
Subject: RE: Crash seen in shutdown path
This was hit on our 2.5 code stream...
I did try to look into the community stream and even 2.7, but this
particular code seemed the same everywhere.
Thanks and regards,
Trishali.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Trishali Nayar
IBM Systems
ETZ, Pune.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
From: "Frank Filz" <ffilzlnx(a)mindspring.com
<mailto:ffilzlnx@mindspring.com> >
To: "'Trishali Nayar'" <ntrishal(a)in.ibm.com
<mailto:ntrishal@in.ibm.com> >
Cc: "'Malahal R Naineni'" <mnaineni(a)in.ibm.com
<mailto:mnaineni@in.ibm.com> >
Date: 01/12/2019 04:22 AM
Subject: RE: Crash seen in shutdown path
What code base is this with? If upstream, this may be fixed.
Frank
From: Trishali Nayar [mailto:ntrishal@in.ibm.com]
Sent: Friday, January 11, 2019 7:09 AM
To: Frank Filz <ffilzlnx(a)mindspring.com>
Cc: Malahal R Naineni <mnaineni(a)in.ibm.com>
Subject: Crash seen in shutdown path
Hi Frank,
I was looking at a crash in the mdcache_lru_clean() routine, which happened
because the assert fires when "op_ctx" is not set. This was in the shutdown
path via shutdown_handles(). The entry's first_export_id has a value of -1.
I observed that the other admin_thread routines in the same shutdown path,
e.g. remove_all_exports() and unexport(), call init_root_op_context()
explicitly.
So we only hit this problem when we get into the "Extra file handles hanging
around" path below.
static void shutdown_handles(struct fsal_module *fsal)
{
	/* Handle iterator */
	struct glist_head *hi = NULL;
	/* Next pointer in handle iteration */
	struct glist_head *hn = NULL;

	if (glist_empty(&fsal->handles))
		return;

	LogDebug(COMPONENT_FSAL, "Extra file handles hanging around.");  <<<< in this path

	glist_for_each_safe(hi, hn, &fsal->handles) {
		struct fsal_obj_handle *h = glist_entry(hi,
							struct fsal_obj_handle,
							handles);

		LogDebug(COMPONENT_FSAL,
			 "Releasing handle");
		h->obj_ops->release(h);
	}
}
1> I was thinking maybe we should fix this by calling init_root_op_context()
when we get into the "Extra file handles" path.
2> But we would still hit the second assert, the one on "op_ctx->ctx_export".
So could we also make that assert conditional, something like:
	if (export_id >= 0)
		assert(op_ctx->ctx_export);
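For reference, here is a rough sketch of what suggestion 1 might look like.
This is only an illustration, not a tested patch; the exact
init_root_op_context()/release_root_op_context() calls and arguments are
assumptions and should be double-checked against the other shutdown-path
callers such as remove_all_exports():

static void shutdown_handles(struct fsal_module *fsal)
{
	struct root_op_context root_op_context;
	/* Handle iterator */
	struct glist_head *hi = NULL;
	/* Next pointer in handle iteration */
	struct glist_head *hn = NULL;

	if (glist_empty(&fsal->handles))
		return;

	LogDebug(COMPONENT_FSAL, "Extra file handles hanging around.");

	/* Suggestion 1: give the release path a root op context, the same
	 * way remove_all_exports() and unexport() do. */
	init_root_op_context(&root_op_context, NULL, NULL, 0, 0,
			     UNKNOWN_REQUEST);

	glist_for_each_safe(hi, hn, &fsal->handles) {
		struct fsal_obj_handle *h = glist_entry(hi,
							struct fsal_obj_handle,
							handles);

		LogDebug(COMPONENT_FSAL, "Releasing handle");
		h->obj_ops->release(h);
	}

	release_root_op_context();
}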
The stack trace and the values in the entry are attached here for reference
as well:
Your insights on this will be extremely useful.
Thanks and regards,
Trishali.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Trishali Nayar
IBM Systems
ETZ, Pune.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Change in ...nfs-ganesha[next]: nfs4_Compound.c: cleanup use of thisarg and thisres
by Frank Filz (GerritHub)
Frank Filz has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/440361
Change subject: nfs4_Compound.c: cleanup use of thisarg and thisres
......................................................................
nfs4_Compound.c: cleanup use of thisarg and thisres
thisarg was used in one place where it wasn't defined. Most uses are to
access argop, which is also available as data->opcode.
One function set thisarg and thisres for the sole purpose of passing them
to the opcode processing function. Simplify the code...
Change-Id: Ied7f4373127aaeef6694e908bd6147431d5966a5
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
---
M src/Protocols/NFS/nfs4_Compound.c
1 file changed, 9 insertions(+), 13 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/61/440361/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/440361
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: Ied7f4373127aaeef6694e908bd6147431d5966a5
Gerrit-Change-Number: 440361
Gerrit-PatchSet: 1
Gerrit-Owner: Frank Filz <ffilzlnx(a)mindspring.com>
Gerrit-MessageType: newchange
Changing time of the weekly community conference call
by Frank Filz
Now that most of us are back from the end of year holidays, can we make a
decision about the conference call?
It would be ideal for me if we could start the call 30 minutes earlier, 7:00
AM Pacific Time. Unfortunately due to school schedules, I need to keep it on
PST or PDT, whichever is active, so the time of the meeting will shift for
those who don't observe seasonal time changes (India for example), and will
be wonky for a week or two for those whose time shift occurs on different
dates from the US.
I know some of us at Red Hat attend a meeting every other Tuesday at 7:15;
I've asked if that meeting time can be changed.
I know earlier would be easier on folks in Europe and India, but I don't
know if you have other schedule conflicts.
Depending on how complete a response I get over the next few days, we may or
may not try to shift for the next meeting. I want to make sure all
interested folks have a chance to respond, and then have time to get an
adjusted-time notice out before the first meeting at the new time, so that
everyone has sufficient notice.
Thanks
Frank
Change in ...nfs-ganesha[next]: Convert gsh_malloc and friends to macros
by Malahal (GerritHub)
Malahal has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/440133
Change subject: Convert gsh_malloc and friends to macros
......................................................................
Convert gsh_malloc and friends to macros
This would help track memory leaks using mtrace and muntrace. This is
done under LINUX only as mtrace/muntrace aren't available under other
environments.
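As a rough illustration of why converting the wrapper to a macro helps
mtrace (this standalone example is mine, not the actual patch to
abstract_mem.h; the abort-on-allocation-failure behaviour is an assumption
about gsh_malloc):

#include <stdlib.h>
#include <mcheck.h>

/* When gsh_malloc() is a macro, the malloc() call is emitted at each call
 * site, so mtrace() records the real caller instead of attributing every
 * allocation to one wrapper function.  (GCC statement expression, Linux
 * only, matching the LINUX-only note above.) */
#define gsh_malloc(size)				\
	({						\
		void *p_ = malloc(size);		\
		if (p_ == NULL)				\
			abort();			\
		p_;					\
	})

int main(void)
{
	mtrace();			/* log allocations to $MALLOC_TRACE */
	char *buf = gsh_malloc(64);	/* attributed to this line in the trace */
	buf[0] = '\0';
	free(buf);
	muntrace();
	return 0;
}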
Change-Id: I8670e4aa981541864ca74fa10dc77792c7f75c01
Signed-off-by: Malahal Naineni <malahal(a)us.ibm.com>
---
M src/include/abstract_mem.h
1 file changed, 68 insertions(+), 0 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/33/440133/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/440133
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I8670e4aa981541864ca74fa10dc77792c7f75c01
Gerrit-Change-Number: 440133
Gerrit-PatchSet: 1
Gerrit-Owner: Malahal <malahal(a)gmail.com>
Gerrit-MessageType: newchange
Announce Push of V2.8-dev.12
by Frank Filz
Branch next
Tag: V2.8-dev.12
NOTE: This merge includes an ntirpc pullup, please update your submodule.
Release Highlights
* Implementation of async from ntirpc all the way down to FSAL
* Test code in FSAL_MEM to test async
* doc: add section about flags to ganesha-rados-grace manpage
* rados_grace: return success when adding an entry that already exists
* Repopulate chunk after reacquiring a dropped lock
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
Contents:
2e3a253 Frank S. Filz V2.8-dev.12
654dd70 Madhu Thorat Repopulate chunk after reacquiring a dropped lock
54e1776 Jeff Layton rados_grace: return success when adding an entry that already exists
459ab71 Jeff Layton doc: add section about flags to ganesha-rados-grace manpage
69f73fa Frank S. Filz FSAL_MEM: Add test option to make actual async I/O completion
0c0e8b2 Frank S. Filz VFS: Implement vfs_update_export
87ffeff Frank S. Filz Add ability to update EXPORT FSAL parameters
bf445b8 Frank S. Filz Add a bit of config parsing debug
12af917 Frank S. Filz Make gtest read and write operations async capable
3aa3487 Frank S. Filz Make 9P read and write async capable
0e25680 Frank S. Filz Make nfs3 read and write async capable
ac337c3 Frank S. Filz Make nfs4 read and write async capable
a055849 Frank S. Filz NFS4: Prepare nfs4_Compound.c for async compounds
6e098be Frank S. Filz NFS: Prepare nfs_worker_thread.c for async processing
5e2dccf Frank S. Filz fsal_helper functions to synchronize async I/O
6fba0e3 Frank S. Filz Replace request_cb with alloc_cb and free_cb with libntirpc pullup
4eacf5c Frank S. Filz Move Nb_Worker from NFS_CORE_PARAM to _9P
4414373 Frank S. Filz Drive by log messae formatting fix...
1d3e653 Frank S. Filz CONFIG: Add CONFIG_ITEM_DEPRECATED for warning deprecated config parms
2759f9a Frank S. Filz Without a shared queueing we no longer need a shared request_data_t
eb87d1b Frank S. Filz Remove vestiges of 9P work queue from NFS code and rename functions
cceb8e6 Frank S. Filz Remove rpc_call_t from request_data_t
4b9fed9 Frank S. Filz (mostly) remove queue_wait
af3af0b Frank S. Filz Add nfs_req_result type and convert nfs4 compound ops to return it
7b496f5 Frank S. Filz Strip out WRITE_PLUS code
cea9371 Frank S. Filz Move setting of resp->resop for nfs4_read for clarity
c0e6c19 Frank S. Filz nfs4_op_open: NULL pointer bug - replace goto out3 with return
ganesha coredump in mdcache_get_chunk()
by Madhu Thorat
Hello,
Not sure if my previous mail was sent, hence re-sending the mail.
We saw a crash at the following line in mdcache_get_chunk() as prev_chunk's
dirent list is empty.
	chunk->reload_ck = glist_last_entry(&prev_chunk->dirents,
					    mdcache_dir_entry_t,
					    chunk_list)->ck;
The backtrace of the coredump is at the end of the mail.
I could reproduce a similar crash by doing the following:
1. In mdcache_readdir_chunked(), insert a sleep() after the content_lock is
released and before it is re-acquired for writing, as follows:
again:
	/* Get here on first pass, retry if we don't hold the write lock,
	 * and repeated passes if we need to fetch another chunk.
	 */

	LogFullDebugAlt(COMPONENT_NFS_READDIR, COMPONENT_CACHE_INODE,
			"Readdir chunked next_ck=0x%"PRIx64" look_ck=%"PRIx64,
			next_ck, look_ck);

	if (look_ck == 0 ||
	    !mdcache_avl_lookup_ck(directory, look_ck, &dirent)) {
		fsal_status_t status;
		/* This starting position isn't in our cache...
		 * Go populate the cache and process from there.
		 */
		if (!has_write) {
			/* Upgrade to write lock and retry just in case
			 * another thread managed to populate this cookie
			 * in the meantime.
			 */
			PTHREAD_RWLOCK_unlock(&directory->content_lock);
			sleep(30);	/* Sleep inserted here */
			PTHREAD_RWLOCK_wrlock(&directory->content_lock);
			has_write = true;
			goto again;
		}
2. From the 1st client, run 'ls' inside a mounted directory for an export.
The 'ls' is made to wait because of the sleep() in mdcache_readdir_chunked().
3. Immediately, from a 2nd client, remove all the entries inside the mounted
directory for the same export.
4. After the sleep() is over, ganesha crashes because 'prev_chunk' is no
longer valid in mdcache_readdir_chunked().
The backtrace from the coredump follows for reference. The code used was
ganesha 2.5 with the 'readdir' patches taken from
https://github.com/dang/nfs-ganesha/tree/v2.5-readdir; its
mdcache_readdir_chunked() looks similar to mdcache_readdir_chunked() in 2.8.
#0 0x00007fae938dc4ab in raise () from /lib64/libpthread.so.0
#1 0x000000000045549e in crash_handler (signo=11,
info=0x7fae25f48eb0, ctx=0x7fae25f48d80) at
/usr/src/debug/nfs-ganesha-2.5.3-ibm031.00-0.1.1-Source/MainNFSD/nfs_init.c:225
#2 <signal handler called>
#3 mdcache_get_chunk (parent=0x7faa1001a290,
prev_chunk=0x7fade0206350, whence=2147483647) at
/usr/src/debug/nfs-ganesha-2.5.3-ibm031.00-0.1.1-Source/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_lru.c:909
#4 0x000000000054fbe9 in mdcache_populate_dir_chunk
(directory=0x7faa1001a290, whence=2147483647, dirent=0x7fae25f49680,
prev_chunk=0x7fade0206350, eod_met=0x7fae25f4967f) at
/usr/src/debug/nfs-ganesha-2.5.3-ibm031.00-0.1.1-Source/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:2659
#5 0x0000000000551767 in mdcache_readdir_chunked
(directory=0x7faa1001a290, whence=2147483647,
dir_state=0x7fae25f49990, cb=0x43310f <populate_dirent>, attrmask=0,
eod_met=0x7fae25f49e8b) at
/usr/src/debug/nfs-ganesha-2.5.3-ibm031.00-0.1.1-Source/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:3053
#6 0x000000000053f39f in mdcache_readdir (dir_hdl=0x7faa1001a2c8,
whence=0x7fae25f49970, dir_state=0x7fae25f49990, cb=0x43310f
<populate_dirent>, attrmask=0, eod_met=0x7fae25f49e8b) at
/usr/src/debug/nfs-ganesha-2.5.3-ibm031.00-0.1.1-Source/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c:639
#7 0x00000000004339f3 in fsal_readdir (directory=0x7faa1001a2c8,
cookie=2147483647, nbfound=0x7fae25f49e8c, eod_met=0x7fae25f49e8b,
attrmask=0, cb=0x495d70 <nfs3_readdir_callback>,
opaque=0x7fae25f49e40) at
/usr/src/debug/nfs-ganesha-2.5.3-ibm031.00-0.1.1-Source/FSAL/fsal_helper.c:1502
#8 0x0000000000495b57 in nfs3_readdir (arg=0x7fa8b4f75e80,
req=0x7fa8b4f75678, res=0x7faad82c8c70) at
/usr/src/debug/nfs-ganesha-2.5.3-ibm031.00-0.1.1-Source/Protocols/NFS/nfs3_readdir.c:289
#9 0x000000000044ccde in nfs_rpc_execute (reqdata=0x7fa8b4f75650) at
/usr/src/debug/nfs-ganesha-2.5.3-ibm031.00-0.1.1-Source/MainNFSD/nfs_worker_thread.c:1290
#10 0x000000000044d4e8 in worker_run (ctx=0x4926600) at
/usr/src/debug/nfs-ganesha-2.5.3-ibm031.00-0.1.1-Source/MainNFSD/nfs_worker_thread.c:1562
#11 0x000000000050c57f in fridgethr_start_routine (arg=0x4926600) at
/usr/src/debug/nfs-ganesha-2.5.3-ibm031.00-0.1.1-Source/support/fridgethr.c:550
(gdb) frame 3
(gdb) p *prev_chunk
$9 = {chunks = {next = 0x7faad808a570, prev = 0x7fade00000d8}, dirents
= {next = 0x7fade0206360, prev = 0x7fade0206360}, parent = 0x0,
chunk_lru = {q = {next = 0x0, prev = 0x0}, qid = LRU_ENTRY_L1, refcnt
= 0, flags = 0, lane = 534, cf = 0}, reload_ck = 1453366958, next_ck =
0, num_entries = 112}
To fix this I have posted a patch:
https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/440079
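Roughly, the idea of the fix is that once the content_lock has been dropped
and re-taken, the pointers cached from the earlier pass can no longer be
trusted. A simplified sketch of that idea at the lock-upgrade spot quoted
above (this is only an illustration, not the actual diff; the 'chunk' local
is an assumption):

		if (!has_write) {
			/* Upgrade to write lock and retry just in case
			 * another thread managed to populate this cookie
			 * in the meantime.
			 */
			PTHREAD_RWLOCK_unlock(&directory->content_lock);
			PTHREAD_RWLOCK_wrlock(&directory->content_lock);
			has_write = true;

			/* While the lock was dropped, another thread (such
			 * as the unlinks in step 3) may have discarded the
			 * chunk, so don't hand a stale prev_chunk down to
			 * mdcache_populate_dir_chunk(); forget what we had
			 * and redo the lookup under the write lock. */
			dirent = NULL;
			chunk = NULL;
			goto again;
		}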
Thanks,
Madhu Thorat.
Change in ...nfs-ganesha[next]: Repopulate chunk after reacquiring a dropped lock
by Name of user not set (GerritHub)
madhu.punjabi(a)in.ibm.com has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/440079
Change subject: Repopulate chunk after reacquiring a dropped lock
......................................................................
Repopulate chunk after reacquiring a dropped lock
In mdcache_readdir_chunked(), first_pass=false means we have a valid
chunk. But on the next pass, if the write lock has not already been
acquired, we drop the existing lock and acquire a write lock. After
dropping the lock, there is a possibility that another thread may
acquire the lock and discard the chunk. Thus, we can no longer trust
the chunk pointer and we should repopulate the chunk.
Change-Id: I41c806b96e6cd26789693bf5cfdb070675f0ffac
Signed-off-by: Madhu Thorat <madhu.punjabi(a)in.ibm.com>
---
M src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c
1 file changed, 18 insertions(+), 1 deletion(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/79/440079/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/440079
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I41c806b96e6cd26789693bf5cfdb070675f0ffac
Gerrit-Change-Number: 440079
Gerrit-PatchSet: 1
Gerrit-Owner: madhu.punjabi(a)in.ibm.com
Gerrit-MessageType: newchange
Change in ...nfs-ganesha[next]: test
by Jack Halford (GerritHub)
Jack Halford has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/439987
Change subject: test
......................................................................
test
Change-Id: I05c4195176a263089a9d6ef626064f84338d1cd4
Signed-off-by: Jack Halford <jack(a)crans.org>
---
M src/os/freebsd/subr.c
1 file changed, 1 insertion(+), 1 deletion(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/87/439987/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/439987
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I05c4195176a263089a9d6ef626064f84338d1cd4
Gerrit-Change-Number: 439987
Gerrit-PatchSet: 1
Gerrit-Owner: Jack Halford <jack(a)crans.org>
Gerrit-MessageType: newchange
Change in ...nfs-ganesha[next]: rados_grace: return success when adding an entry that already exists
by Jeff Layton (GerritHub)
Jeff Layton has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/439919
Change subject: rados_grace: return success when adding an entry that already exists
......................................................................
rados_grace: return success when adding an entry that already exists
When someone does a "ganesha-rados-grace add" and the entry already
exists, we currently return -EEXIST and the program exits with an
error.
This is problematic for rook, as it wants to run the thing every time
the daemon is restarted, and we want to just ignore it if it already
exists.
Just return success in this case, and don't change anything.
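To illustrate the behaviour change, here is a small, self-contained sketch
of the idempotent-add pattern; the grace-DB storage and helpers below are
hypothetical stand-ins, not the real rados_grace/omap code:

#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for the grace database: a fixed list of node ids. */
#define MAX_NODES 8
static char nodes[MAX_NODES][64];
static int num_nodes;

static int db_contains(const char *nodeid)
{
	int i;

	for (i = 0; i < num_nodes; i++)
		if (strcmp(nodes[i], nodeid) == 0)
			return 1;
	return 0;
}

/* Idempotent add: an entry that already exists is not an error. */
static int grace_add(const char *nodeid)
{
	if (db_contains(nodeid))
		return 0;		/* previously: return -EEXIST */
	if (num_nodes == MAX_NODES)
		return -ENOSPC;
	strncpy(nodes[num_nodes++], nodeid, sizeof(nodes[0]) - 1);
	return 0;
}

int main(void)
{
	printf("%d\n", grace_add("ganesha-a"));	/* 0: added */
	printf("%d\n", grace_add("ganesha-a"));	/* 0: already there, still success */
	return 0;
}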
Change-Id: I6481766cc72f5c4e0eef783cf7f3053f3e070806
Signed-off-by: Jeff Layton <jlayton(a)redhat.com>
---
M src/support/rados_grace.c
1 file changed, 0 insertions(+), 1 deletion(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/19/439919/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/439919
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I6481766cc72f5c4e0eef783cf7f3053f3e070806
Gerrit-Change-Number: 439919
Gerrit-PatchSet: 1
Gerrit-Owner: Jeff Layton <jlayton(a)redhat.com>
Gerrit-MessageType: newchange