Announce Push of V2.8-dev.17
by Frank Filz
Branch next
Tag: V2.8-dev.17
Release Highlights
* Drop dirent ref when releasing dirent
* Update developer onboarding instructions:
* config_samples: properly quote and terminate the sample watch_url setting
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
Contents:
bccd234 Frank S. Filz V2.8-dev.17
760a2a7 Jeff Layton config_samples: properly quote and terminate the sample watch_url setting
75ed224 Bjorn Leffler Update developer onboarding instructions:
03ee21e Daniel Gryniewicz Drop dirent ref when releasing dirent
Change in ...nfs-ganesha[next]: Working sample configuration for the PROXY FSAL.
by Name of user not set (GerritHub)
leffler(a)google.com has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/443670 )
Change subject: Working sample configuration for the PROXY FSAL.
......................................................................
Working sample configuration for the PROXY FSAL.
Also my first checkin to Ganesha to learn the tools and procedures.
Signed-off-by: Bjorn Leffler <leffler(a)google.com>
Change-Id: I8b1affc2bfee3e76f1392453b0e9dab89e4e6d3b
---
M src/config_samples/proxy.conf
1 file changed, 17 insertions(+), 18 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/70/443670/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/443670
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: I8b1affc2bfee3e76f1392453b0e9dab89e4e6d3b
Gerrit-Change-Number: 443670
Gerrit-PatchSet: 1
Gerrit-Owner: leffler(a)google.com
Gerrit-MessageType: newchange
Crash in mdc_lookup_uncached with a bad name pointer
by Rungta, Vandana
Hello,
We are seeing the following crash with NFS-Ganesha 2.7.1; the crash surfaces further down in our FSAL module's lookup method, which tries to use the "name" argument, an invalid pointer.
This build has the following three patches applied on top of NFS-Ganesha 2.7.1:
https://github.com/nfs-ganesha/nfs-ganesha/commit/654dd706d22663c6ae6029e...
https://github.com/nfs-ganesha/nfs-ganesha/commit/5dc6a70ed42275a4f6772b9...
The most recent patch to not return dead hash entries:
https://github.com/nfs-ganesha/nfs-ganesha/commit/25320e6544f6c5a045f20c5...
The workload:
Concurrent access from multiple threads: one thread continuously (in a loop) runs python os.walk (i.e., readdir) over the entire filesystem, roughly 5M files total. Five more threads write a few thousand files each. When the writes complete, a single thread verifies the written content, then deletes it. Then the writes repeat.
This is the same workload that causes our OOM issue.
https://lists.nfs-ganesha.org/archives/list/devel@lists.nfs-ganesha.org/t...
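The workload can be sketched as below (a hypothetical reproduction to run against a mounted export; the root path, thread counts, and file counts are placeholders, and for brevity each writer verifies its own files rather than a single verifier thread):

```python
import os
import threading


def walker(root, stop):
    """Continuously walk the tree (the readdir-heavy thread in the workload)."""
    while not stop.is_set():
        for _dirpath, _dirs, _files in os.walk(root):
            if stop.is_set():
                break


def write_verify_delete(root, idx, nfiles, payload=b"x" * 1024):
    """One writer: create files, verify their content, then delete them."""
    d = os.path.join(root, "writer-%d" % idx)
    os.makedirs(d, exist_ok=True)
    paths = [os.path.join(d, "f%05d" % i) for i in range(nfiles)]
    for p in paths:
        with open(p, "wb") as f:
            f.write(payload)
    for p in paths:
        with open(p, "rb") as f:
            assert f.read() == payload
    for p in paths:
        os.unlink(p)


def run_once(root, writers=5, nfiles=100):
    """One iteration: a walker looping over the tree while writers churn files."""
    stop = threading.Event()
    t = threading.Thread(target=walker, args=(root, stop))
    t.start()
    ws = [threading.Thread(target=write_verify_delete, args=(root, i, nfiles))
          for i in range(writers)]
    for w in ws:
        w.start()
    for w in ws:
        w.join()
    stop.set()
    t.join()
```

Looping run_once over an NFS mount exercises the same readdir-while-create/unlink churn described above.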
#5 0x00007f6acd6a1dca in foo_lookup (parent=0x6fe2e420, name=0xa0 <Address 0xa0 out of bounds>,
handle=0x7f6abf3255a8, attrs_out=0x7f6abf3254a0) at /opt/src/src/handle.c:364
#6 0x000000000053a7a6 in mdc_lookup_uncached (mdc_parent=0x291a4ba0, name=0xa0 <Address 0xa0 out of bounds>,
new_entry=0x7f6abf325728, attrs_out=0x0) at /src/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:1293
#7 0x0000000000541344 in mdcache_readdir_chunked (directory=0x291a4ba0, whence=0, dir_state=0x7f6abf325900,
cb=0x43217c <populate_dirent>, attrmask=122830, eod_met=0x7f6abf325ffb)
at /src/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_helpers.c:3065
#8 0x000000000052e8c3 in mdcache_readdir (dir_hdl=0x291a4bd8, whence=0x7f6abf3258e0, dir_state=0x7f6abf325900,
cb=0x43217c <populate_dirent>, attrmask=122830, eod_met=0x7f6abf325ffb)
at /src/src/FSAL/Stackable_FSALs/FSAL_MDCACHE/mdcache_handle.c:559
#9 0x0000000000432a76 in fsal_readdir (directory=0x291a4bd8, cookie=0, nbfound=0x7f6abf325ffc,
eod_met=0x7f6abf325ffb, attrmask=122830, cb=0x492018 <nfs3_readdirplus_callback>, opaque=0x7f6abf325fb0)
at /src/src/FSAL/fsal_helper.c:1158
#10 0x0000000000491e71 in nfs3_readdirplus (arg=0x10704818, req=0x10704110, res=0x3103f090)
at /src/src/Protocols/NFS/nfs3_readdirplus.c:310
#11 0x00000000004574d1 in nfs_rpc_process_request (reqdata=0x10704110) at /src/src/MainNFSD/nfs_worker_thread.c:1329
#12 0x0000000000457c90 in nfs_rpc_valid_NFS (req=0x10704110) at /src/src/MainNFSD/nfs_worker_thread.c:1549
#13 0x00007f6ad115ce75 in svc_vc_decode (req=0x10704110) at /src/src/libntirpc/src/svc_vc.c:825
#14 0x000000000044a688 in nfs_rpc_decode_request (xprt=0x18390200, xdrs=0x174943c0)
at /src/src/MainNFSD/nfs_rpc_dispatcher_thread.c:1341
#15 0x00007f6ad115cd86 in svc_vc_recv (xprt=0x18390200) at /src/src/libntirpc/src/svc_vc.c:798
#16 0x00007f6ad11594d3 in svc_rqst_xprt_task (wpe=0x18390418) at /src/src/libntirpc/src/svc_rqst.c:767
#17 0x00007f6ad115994d in svc_rqst_epoll_events (sr_rec=0x2779260, n_events=1)
at /src/src/libntirpc/src/svc_rqst.c:939
#18 0x00007f6ad1159be2 in svc_rqst_epoll_loop (sr_rec=0x2779260) at /src/src/libntirpc/src/svc_rqst.c:1012
#19 0x00007f6ad1159c95 in svc_rqst_run_task (wpe=0x2779260) at /src/src/libntirpc/src/svc_rqst.c:1048
#20 0x00007f6ad11625f6 in work_pool_thread (arg=0x3b5d170) at /src/src/libntirpc/src/work_pool.c:181
#21 0x00007f6ad0169de5 in start_thread () from /lib64/libpthread.so.0
#22 0x00007f6acfa71bad in clone () from /lib64/libc.so.6
(gdb) select-frame 7
(gdb) info locals
status = {major = ERR_FSAL_INVAL, minor = 0}
cb_result = DIR_CONTINUE
entry = 0x0
attrs = {request_mask = 122830, valid_mask = 1433550, supported = 1433582, type = REGULAR_FILE, filesize = 1024,
fsid = {major = 0, minor = 0}, acl = 0x0, fileid = 47680710, mode = 438, numlinks = 1, owner = 65534,
group = 65534, rawdev = {major = 0, minor = 0}, atime = {tv_sec = 1548784955, tv_nsec = 582000000}, creation = {
tv_sec = 0, tv_nsec = 0}, ctime = {tv_sec = 1548784955, tv_nsec = 582000000}, mtime = {tv_sec = 1548784955,
tv_nsec = 582000000}, chgtime = {tv_sec = 1548784955, tv_nsec = 582000000}, spaceused = 1024,
change = 1548784955582, generation = 0, expire_time_attr = 60, fs_locations = 0x0}
dirent = 0x5296230
has_write = true
set_first_ck = false
next_ck = 2419507
look_ck = 2419507
chunk = 0x156d7240
first_pass = true
eod = false
reload_chunk = false
__func__ = "mdcache_readdir_chunked"
__PRETTY_FUNCTION__ = "mdcache_readdir_chunked"
(gdb) print *dirent
$1 = {chunk_list = {next = 0x0, prev = 0xa1}, chunk = 0x3018770, node_name = {
left = 0x7f6acfd39848 <main_arena+232>, right = 0x0, parent = 0}, node_ck = {left = 0x5296258, right = 0x0,
parent = 0}, node_sorted = {left = 0xffffffff, right = 0x0, parent = 0}, ck = 0, eod = false, namehash = 8192,
ckey = {hk = 1056768, fsal = 0x0, kv = {addr = 0x0, len = 134650068}}, flags = 0,
name = 0xa0 <Address 0xa0 out of bounds>, name_buffer = 0x52962d8 " "}
(gdb) print dirent.name
$3 = 0xa0 <Address 0xa0 out of bounds>
(gdb) print *0x52962d8
$4 = 32
(gdb) print dirent.name_buffer
$5 = 0x52962d8 " "
(gdb) print *dirent.name_buffer
$6 = 32 ' '
(gdb) print *chunk
$7 = {chunks = {next = 0x291a4e28, prev = 0x291a4e28}, dirents = {next = 0x2ed8c060, prev = 0x48ff2870},
parent = 0x291a4ba0, chunk_lru = {q = {next = 0x7e1c00 <CHUNK_LRU+1792>, prev = 0x4d6f5c88}, qid = LRU_ENTRY_L1,
refcnt = 0, flags = 0, lane = 8, cf = 0}, reload_ck = 0, next_ck = 0, num_entries = 2500}
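The dump reads as if the chunk backing this dirent was recycled while mdcache_readdir_chunked still held a pointer into it (note chunk_lru.refcnt = 0 and the main_arena pointer in node_name). A toy model of that lifetime race follows; all names here are hypothetical, and nothing in it is Ganesha's actual API:

```python
class Dirent:
    """Toy dirent: 'name' stands in for the pointer that reads 0xa0 above."""
    def __init__(self, name):
        self.name = name


class Chunk:
    """Toy directory chunk; reaping it invalidates every dirent it owns."""
    def __init__(self, names):
        self.dirents = [Dirent(n) for n in names]

    def reap(self):
        # Models the chunk LRU recycling the chunk's memory while a
        # reader still holds one of its dirents.
        for d in self.dirents:
            d.name = None
        self.dirents = []


def lookup(parent, dirent):
    """Models the uncached lookup: it dereferences dirent.name."""
    if dirent.name is None:
        raise RuntimeError("dangling dirent name (use after chunk reap)")
    return (parent, dirent.name)
```

In the real crash the dangling pointer is read as garbage rather than caught; the toy model only makes the ordering of the race visible.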
Change in ...nfs-ganesha[next]: Allow special characters in filenames
by Malahal (GerritHub)
Malahal has uploaded this change for review. ( https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/443537 )
Change subject: Allow special characters in filenames
......................................................................
Allow special characters in filenames
Don't allow DOT or DOTDOT file names, also don't allow 'slash' in the
file name.
Change-Id: Ibd708f8ecd905e200601ab5d519e4413c80cc595
Signed-off-by: Malahal Naineni <malahal(a)us.ibm.com>
---
M src/Protocols/NFS/nfs4_op_create.c
M src/Protocols/NFS/nfs4_op_link.c
M src/Protocols/NFS/nfs4_op_lookup.c
M src/Protocols/NFS/nfs4_op_open.c
M src/Protocols/NFS/nfs4_op_remove.c
M src/Protocols/NFS/nfs4_op_rename.c
M src/Protocols/NFS/nfs4_op_secinfo.c
7 files changed, 11 insertions(+), 10 deletions(-)
git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha refs/changes/37/443537/1
--
To view, visit https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/443537
To unsubscribe, or for help writing mail filters, visit https://review.gerrithub.io/settings
Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-Change-Id: Ibd708f8ecd905e200601ab5d519e4413c80cc595
Gerrit-Change-Number: 443537
Gerrit-PatchSet: 1
Gerrit-Owner: Malahal <malahal(a)gmail.com>
Gerrit-MessageType: newchange
Re: [Nfs-ganesha-devel] Memory growth in mdcache with entries_used >> entries_hiwat
by Daniel Gryniewicz
Moving to the new list...
I'm not seeing how that patch can have caused this, as an entry should
have already been removed from the LRU before hitting the changes in the
patch. I'll run some tests, though, and see what I can work out.
Daniel
On 1/28/19 1:49 PM, Kropelin, Adam via Nfs-ganesha-devel wrote:
> This list has been deprecated. Please subscribe to the new devel list at lists.nfs-ganesha.org.
>
>
> After applying https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/441566
> our long-running test is showing continually increasing memory usage.
> Eventually ganesha.nfsd consumes all memory in the box and we OOM.
> Looking at a core, it appears that the mdcache lru contains far more
> entries than the high water mark would normally allow.
>
> We have…
>
> CacheInode {
>
> Dir_Chunk = 500000;
>
> Entries_HWMark = 500000;
>
> }
>
> …and after we run for a while, we observe from a core (obtained at
> runtime using gcore)...
>
> (gdb) print lru_state
>
> $8 = {entries_hiwat = 500000, entries_used = 2437134, chunks_hiwat =
> 100000, chunks_used = 2002, fds_system_imposed = 400000, fds_hard_limit
> = 396000, fds_hiwat = 360000, fds_lowat = 200000, futility = 0,
> per_lane_work = 50, biggest_window = 160000, prev_fd_count = 160,
> prev_time = 1548692973, fd_state = 0}
>
> So we have 2.4M entries with a high water mark of 500K. The difference
> appears to account for the unexpected memory usage.
>
> This seems to be new behavior after applying the above patch, although
> it’s hard to be certain because earlier we hit the core with entries
> being freed twice before running into the high memory usage.
>
> Some more info from the core:
>
> (gdb) print LRU
>
> $2 = {{L1 = {q = {next = 0xe4788210, prev = 0x23a701f0}, id =
> LRU_ENTRY_L1, size = 147689}, L2 = {q = {
>
> next = 0x23a8b7b0, prev = 0xe478ef80}, id = LRU_ENTRY_L2, size
> = 150}, cleanup = {q = {next = 0xcca1ff70,
>
> prev = 0x139e2bf0}, id = LRU_ENTRY_CLEANUP, size = 748}, mtx =
> {__data = {__lock = 0, __count = 0,
>
> __owner = 0, __nusers = 0, __kind = 0, __spins = 0, __list =
> {__prev = 0x0, __next = 0x0}},
>
> __size = '\000' <repeats 39 times>, __align = 0}, iter = {active
> = false, glist = 0x0, glistn = 0x0},
>
> __pad0 = '\000' <repeats 63 times>}, {L1 = {q = {next = 0x1dd98360,
> prev = 0x1100cc30}, id = LRU_ENTRY_L1,
>
> size = 143935}, L2 = {q = {next = 0xfa204e0, prev = 0x1e26a0a0},
> id = LRU_ENTRY_L2, size = 145}, cleanup = {
>
> q = {next = 0xb26f8720, prev = 0x18f1810}, id =
> LRU_ENTRY_CLEANUP, size = 811}, mtx = {__data = {__lock = 0,
>
> __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins =
> 0, __list = {__prev = 0x0, __next = 0x0}},
>
> __size = '\000' <repeats 39 times>, __align = 0}, iter = {active
> = false, glist = 0x0, glistn = 0x0},
>
> __pad0 = '\000' <repeats 63 times>}, {L1 = {q = {next = 0x4903dc0,
> prev = 0x96cb4320}, id = LRU_ENTRY_L1,
>
> size = 141362}, L2 = {q = {next = 0x2eea11f0, prev = 0x48fd050},
> id = LRU_ENTRY_L2, size = 147}, cleanup = {
>
> q = {next = 0xdc74e5b0, prev = 0xca6f990}, id =
> LRU_ENTRY_CLEANUP, size = 817}, mtx = {__data = {__lock = 0,
>
> __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins =
> 0, __list = {__prev = 0x0, __next = 0x0}},
>
> __size = '\000' <repeats 39 times>, __align = 0}, iter = {active
> = false, glist = 0x0, glistn = 0x0},
>
> __pad0 = '\000' <repeats 63 times>}, {L1 = {q = {next = 0x79dcf850,
> prev = 0x7d9445d0}, id = LRU_ENTRY_L1,
>
> size = 122171}, L2 = {q = {next = 0x7e823b30, prev = 0x79dcfc90},
> id = LRU_ENTRY_L2, size = 146}, cleanup = {
>
> q = {next = 0x3245ddb0, prev = 0x1121e510}, id =
> LRU_ENTRY_CLEANUP, size = 761}, mtx = {__data = {__lock = 0,
>
> __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins =
> 0, __list = {__prev = 0x0, __next = 0x0}},
>
> __size = '\000' <repeats 39 times>, __align = 0}, iter = {active
> = false, glist = 0x0, glistn = 0x0},
>
> __pad0 = '\000' <repeats 63 times>}, {L1 = {q = {next = 0x1a94af70,
> prev = 0xec32de0}, id = LRU_ENTRY_L1,
>
> size = 142951}, L2 = {q = {next = 0xec25300, prev = 0x1a944200},
> id = LRU_ENTRY_L2, size = 150}, cleanup = {
>
> q = {next = 0xcf8eada0, prev = 0xba9afd0}, id =
> LRU_ENTRY_CLEANUP, size = 759}, mtx = {__data = {__lock = 0,
>
> __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins =
> 0, __list = {__prev = 0x0, __next = 0x0}},
>
> __size = '\000' <repeats 39 times>, __align = 0}, iter = {active
> = false, glist = 0x0, glistn = 0x0},
>
> __pad0 = '\000' <repeats 63 times>}, {L1 = {q = {next = 0x49b54c90,
> prev = 0x1896b90}, id = LRU_ENTRY_L1,
>
> size = 133691}, L2 = {q = {next = 0x413dbca0, prev = 0x66d15ce0},
> id = LRU_ENTRY_L2, size = 142}, cleanup = {
>
> q = {next = 0xe249dcb0, prev = 0x135e9ae0}, id =
> LRU_ENTRY_CLEANUP, size = 804}, mtx = {__data = {__lock = 0,
>
> __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins =
> 0, __list = {__prev = 0x0, __next = 0x0}},
>
> __size = '\000' <repeats 39 times>, __align = 0}, iter = {active
> = false, glist = 0x0, glistn = 0x0},
>
> __pad0 = '\000' <repeats 63 times>}, {L1 = {q = {next = 0x25eba620,
> prev = 0x96783780}, id = LRU_ENTRY_L1,
>
>
> size = 146051}, L2 = {q = {next = 0x1d179420, prev = 0x25eb38b0},
> id = LRU_ENTRY_L2, size = 150}, cleanup = {
>
> q = {next = 0xab572d30, prev = 0x11a0d550}, id =
> LRU_ENTRY_CLEANUP, size = 765}, mtx = {__data = {__lock = 0,
>
> __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins =
> 0, __list = {__prev = 0x0, __next = 0x0}},
>
> __size = '\000' <repeats 39 times>, __align = 0}, iter = {active
> = false, glist = 0x0, glistn = 0x0},
>
> __pad0 = '\000' <repeats 63 times>}, {L1 = {q = {next = 0x15bc6d50,
> prev = 0x3c292070}, id = LRU_ENTRY_L1,
>
> size = 135767}, L2 = {q = {next = 0x5d1fad10, prev = 0x15bbffe0},
> id = LRU_ENTRY_L2, size = 148}, cleanup = {
>
> q = {next = 0x5b5c1490, prev = 0x88016f0}, id =
> LRU_ENTRY_CLEANUP, size = 720}, mtx = {__data = {__lock = 0,
>
> __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins =
> 0, __list = {__prev = 0x0, __next = 0x0}},
>
> __size = '\000' <repeats 39 times>, __align = 0}, iter = {active
> = false, glist = 0x0, glistn = 0x0},
>
> __pad0 = '\000' <repeats 63 times>}, {L1 = {q = {next = 0x67772f0,
> prev = 0x964eb1b0}, id = LRU_ENTRY_L1,
>
> size = 140729}, L2 = {q = {next = 0x30d68c80, prev = 0x6770580},
> id = LRU_ENTRY_L2, size = 147}, cleanup = {
>
> q = {next = 0xc4710dd0, prev = 0x874dc10}, id =
> LRU_ENTRY_CLEANUP, size = 791}, mtx = {__data = {__lock = 0,
>
> __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins =
> 0, __list = {__prev = 0x0, __next = 0x0}},
>
> __size = '\000' <repeats 39 times>, __align = 0}, iter = {active
> = false, glist = 0x0, glistn = 0x0},
>
> __pad0 = '\000' <repeats 63 times>}, {L1 = {q = {next = 0x5d1c210,
> prev = 0x302682d0}, id = LRU_ENTRY_L1,
>
> size = 140670}, L2 = {q = {next = 0x3026f040, prev = 0x5d154a0},
> id = LRU_ENTRY_L2, size = 147}, cleanup = {
>
> q = {next = 0xcb5487f0, prev = 0xc679490}, id =
> LRU_ENTRY_CLEANUP, size = 794}, mtx = {__data = {__lock = 0,
>
> __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins =
> 0, __list = {__prev = 0x0, __next = 0x0}},
>
> __size = '\000' <repeats 39 times>, __align = 0}, iter = {active
> = false, glist = 0x0, glistn = 0x0},
>
> __pad0 = '\000' <repeats 63 times>}, {L1 = {q = {next = 0xe2791600,
> prev = 0x96252be0}, id = LRU_ENTRY_L1,
>
> size = 148189}, L2 = {q = {next = 0xec06a870, prev = 0xe2798370},
> id = LRU_ENTRY_L2, size = 145}, cleanup = {
>
> q = {next = 0xa4dec380, prev = 0xb29dc50}, id =
> LRU_ENTRY_CLEANUP, size = 746}, mtx = {__data = {__lock = 0,
>
> __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins =
> 0, __list = {__prev = 0x0, __next = 0x0}},
>
> __size = '\000' <repeats 39 times>, __align = 0}, iter = {active
> = false, glist = 0x0, glistn = 0x0},
>
> __pad0 = '\000' <repeats 63 times>}, {L1 = {q = {next = 0xe5dff590,
> prev = 0x23d43b90}, id = LRU_ENTRY_L1,
>
> size = 147297}, L2 = {q = {next = 0x23de7e10, prev = 0xe5e06300},
> id = LRU_ENTRY_L2, size = 141}, cleanup = {
>
> q = {next = 0xa6b8d390, prev = 0x10229cb0}, id =
> LRU_ENTRY_CLEANUP, size = 744}, mtx = {__data = {__lock = 0,
>
> __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins =
> 0, __list = {__prev = 0x0, __next = 0x0}},
>
> __size = '\000' <repeats 39 times>, __align = 0}, iter = {active
> = false, glist = 0x0, glistn = 0x0},
>
> __pad0 = '\000' <repeats 63 times>}, {L1 = {q = {next = 0x1cc790a0,
> prev = 0x4ba049f0}, id = LRU_ENTRY_L1,
>
> size = 143750}, L2 = {q = {next = 0xf39d9d0, prev = 0x1cc72330},
> id = LRU_ENTRY_L2, size = 147}, cleanup = {
>
>
> q = {next = 0xd37ff620, prev = 0x13b47010}, id =
> LRU_ENTRY_CLEANUP, size = 780}, mtx = {__data = {__lock = 0,
>
> __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins =
> 0, __list = {__prev = 0x0, __next = 0x0}},
>
> __size = '\000' <repeats 39 times>, __align = 0}, iter = {active
> = false, glist = 0x0, glistn = 0x0},
>
> __pad0 = '\000' <repeats 63 times>}, {L1 = {q = {next = 0xeaf4fd00,
> prev = 0x25ab91d0}, id = LRU_ENTRY_L1,
>
> size = 146505}, L2 = {q = {next = 0x25aa4980, prev = 0xeaf56a70},
> id = LRU_ENTRY_L2, size = 150}, cleanup = {
>
> q = {next = 0x23c6c110, prev = 0x11e02670}, id =
> LRU_ENTRY_CLEANUP, size = 788}, mtx = {__data = {__lock = 0,
>
> __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins =
> 0, __list = {__prev = 0x0, __next = 0x0}},
>
> __size = '\000' <repeats 39 times>, __align = 0}, iter = {active
> = false, glist = 0x0, glistn = 0x0},
>
> __pad0 = '\000' <repeats 63 times>}, {L1 = {q = {next = 0xe3dea010,
> prev = 0x2831da50}, id = LRU_ENTRY_L1,
>
> size = 148717}, L2 = {q = {next = 0x283247c0, prev = 0xe3df0d80},
> id = LRU_ENTRY_L2, size = 149}, cleanup = {
>
> q = {next = 0xb84809e0, prev = 0x87cee00}, id =
> LRU_ENTRY_CLEANUP, size = 850}, mtx = {__data = {__lock = 0,
>
> __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins =
> 0, __list = {__prev = 0x0, __next = 0x0}},
>
> __size = '\000' <repeats 39 times>, __align = 0}, iter = {active
> = false, glist = 0x0, glistn = 0x0},
>
> __pad0 = '\000' <repeats 63 times>}, {L1 = {q = {next = 0xce3ced10,
> prev = 0xd5f2db20}, id = LRU_ENTRY_L1,
>
> size = 153934}, L2 = {q = {next = 0xd5f12560, prev = 0xce3d5a80},
> id = LRU_ENTRY_L2, size = 149}, cleanup = {
>
> q = {next = 0xca840ac0, prev = 0xda6b170}, id =
> LRU_ENTRY_CLEANUP, size = 779}, mtx = {__data = {__lock = 0,
>
> __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins =
> 0, __list = {__prev = 0x0, __next = 0x0}},
>
> __size = '\000' <repeats 39 times>, __align = 0}, iter = {active
> = false, glist = 0x0, glistn = 0x0},
>
> __pad0 = '\000' <repeats 63 times>}, {L1 = {q = {next = 0x2d292f90,
> prev = 0x17851300}, id = LRU_ENTRY_L1,
>
> size = 138008}, L2 = {q = {next = 0x1786c8c0, prev = 0x2d2854b0},
> id = LRU_ENTRY_L2, size = 147}, cleanup = {
>
> q = {next = 0xc7c1e980, prev = 0xb2a3820}, id =
> LRU_ENTRY_CLEANUP, size = 761}, mtx = {__data = {__lock = 0,
>
> __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins =
> 0, __list = {__prev = 0x0, __next = 0x0}},
>
> __size = '\000' <repeats 39 times>, __align = 0}, iter = {active
> = false, glist = 0x0, glistn = 0x0},
>
> __pad0 = '\000' <repeats 63 times>}}
>
> (gdb)
>
> (gdb) print *(struct mdcache_lru__ *)0xe4788210
>
> $3 = {q = {next = 0xe47814a0, prev = 0x7e0620 <LRU>}, qid =
> LRU_ENTRY_L1, refcnt = 1, flags = 0, lane = 0, cf = 0}
>
> (gdb) print *(struct mdcache_lru__ *)0xe47814a0
>
> $5 = {q = {next = 0xe477a730, prev = 0xe4788210}, qid = LRU_ENTRY_L1,
> refcnt = 1, flags = 0, lane = 0, cf = 0}
>
> (gdb) print *(struct mdcache_lru__ *)0xe477a730
>
> $6 = {q = {next = 0xe47739c0, prev = 0xe47814a0}, qid = LRU_ENTRY_L1,
> refcnt = 1, flags = 0, lane = 0, cf = 0}
>
> (gdb) print *(struct mdcache_lru__ *)0xe47739c0
>
> $7 = {q = {next = 0xe476cc50, prev = 0xe477a730}, qid = LRU_ENTRY_L1,
> refcnt = 1, flags = 0, lane = 0, cf = 0}
>
> (gdb) print *(struct mdcache_lru__ *)0xb84809e0
>
> $8 = {q = {next = 0xaa559720, prev = 0x7e12a0 <LRU+3200>}, qid =
> LRU_ENTRY_CLEANUP, refcnt = 3, flags = 3,
>
> lane = 14, cf = 0}
>
> (gdb) print *(struct mdcache_lru__ *)0xaa559720
>
> $9 = {q = {next = 0xd9754930, prev = 0xb84809e0}, qid =
> LRU_ENTRY_CLEANUP, refcnt = 3, flags = 3, lane = 14, cf = 0}
>
> (gdb) print *(struct mdcache_lru__ *)0xd9754930
>
> $10 = {q = {next = 0xa94e7f40, prev = 0xaa559720}, qid =
> LRU_ENTRY_CLEANUP, refcnt = 2, flags = 3, lane = 14, cf = 0}
>
> (gdb) print *(struct mdcache_lru__ *)0xa94e7f40
>
> $11 = {q = {next = 0x185aa7d0, prev = 0xd9754930}, qid =
> LRU_ENTRY_CLEANUP, refcnt = 3, flags = 3, lane = 14, cf = 0}
>
> (gdb) print *(struct mdcache_lru__ *)0x185aa7d0
>
> $12 = {q = {next = 0xcb912a30, prev = 0xa94e7f40}, qid =
> LRU_ENTRY_CLEANUP, refcnt = 1, flags = 3, lane = 14, cf = 0}
>
> (gdb) print *(struct mdcache_lru__ *)0xcb912a30
>
> $13 = {q = {next = 0xc7727370, prev = 0x185aa7d0}, qid =
> LRU_ENTRY_CLEANUP, refcnt = 2, flags = 3, lane = 14, cf = 0}
>
> (gdb)
>
> $14 = {q = {next = 0xc7727370, prev = 0x185aa7d0}, qid =
> LRU_ENTRY_CLEANUP, refcnt = 2, flags = 3, lane = 14, cf = 0}
>
> (gdb) print *(struct mdcache_lru__ *)0xf39d9d0
>
> $15 = {q = {next = 0xf3a4740, prev = 0x7e10c0 <LRU+2720>}, qid =
> LRU_ENTRY_L2, refcnt = 1, flags = 0, lane = 12,
>
> cf = 0}
>
> (gdb) print *(struct mdcache_lru__ *)0xf3a4740
>
> $16 = {q = {next = 0xf3ab4b0, prev = 0xf39d9d0}, qid = LRU_ENTRY_L2,
> refcnt = 1, flags = 0, lane = 12, cf = 0}
>
> (gdb) print *(struct mdcache_lru__ *)0xf3ab4b0
>
> $17 = {q = {next = 0xf3b2220, prev = 0xf3a4740}, qid = LRU_ENTRY_L2,
> refcnt = 1, flags = 0, lane = 12, cf = 0}
>
>
>
> _______________________________________________
> Nfs-ganesha-devel mailing list
> Nfs-ganesha-devel(a)lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/nfs-ganesha-devel
>
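One way entries_used can run away past entries_hiwat is if reclaim only ever finds entries that still hold a reference (as the runs of refcnt = 1 L1 entries in the dump above suggest). A toy model of that failure mode, not Ganesha's actual LRU code:

```python
from collections import OrderedDict


class HiwatLRU:
    """Toy LRU with a high-water mark: inserting past the mark tries to
    reclaim the oldest entry whose refcount is zero. If every candidate
    is still referenced, nothing is reclaimed and the entry count grows
    past the mark -- the entries_used >> entries_hiwat symptom."""

    def __init__(self, entries_hiwat):
        self.entries_hiwat = entries_hiwat
        self.entries = OrderedDict()  # key -> refcnt, oldest first

    def insert(self, key, refcnt=0):
        if len(self.entries) >= self.entries_hiwat:
            self._try_reclaim()
        self.entries[key] = refcnt

    def _try_reclaim(self):
        # Scan from the LRU end for an unreferenced entry.
        victim = next((k for k, rc in self.entries.items() if rc == 0), None)
        if victim is None:
            return False  # everything pinned: cache exceeds the mark
        del self.entries[victim]
        return True
```

If the patch under suspicion left entries holding an extra reference, this is exactly the shape of growth it would produce.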
Re: [NFS-Ganesha-Support] Re: Some errors starting ganesha - ceph
by Frank Filz
From: Oscar Segarra [mailto:oscar.segarra@gmail.com]
Sent: Tuesday, February 5, 2019 1:30 PM
To: Frank Filz <ffilzlnx(a)mindspring.com>
Cc: Jeff Layton <jlayton(a)poochiereds.net>; dang(a)redhat.com; support(a)lists.nfs-ganesha.org; devel(a)nfs-ganesha.org
Subject: Re: [NFS-Ganesha-Support] Re: Some errors starting ganesha - ceph
Hi Frank,
Thanks a lot for your quick answer. I have tried your suggested change:
[root@vdicube_pub_ceph_nfs /]# cat /etc/ganesha/ganesha.conf
NFSV4
{
Allow_Numeric_Owners = false;
}
NFS_KRB5
{
PrincipalName = nfs;
KeytabPath = /etc/krb5.keytab;
Active_krb5 = false;
}
NFS_CORE_PARAM
{
# Enable NLM (network lock manager protocol)
Enable_NLM = false;
}
EXPORT
{
# Export Id (mandatory, each EXPORT must have a unique Export_Id)
Export_Id = 2046;
# Exported path (mandatory)
Path = /;
# Pseudo Path (for NFS v4)
Pseudo = /;
# Access control options
Access_Type = NONE;
Squash = No_Root_Squash;
Anonymous_Uid = -2;
Anonymous_Gid = -2;
# NFS protocol options
Transports = "TCP";
Protocols = "4";
SecType = "sys";
Manage_Gids = true;
CLIENT {
Clients = 192.168.100.104,192.168.100.105;
Access_Type = RO;
}
# Exporting FSAL
FSAL {
Name = CEPH;
User_Id = "admin";
}
}
LOG {
Default_Log_Level = WARN;
Components {
# ALL = DEBUG;
# SESSIONS = INFO;
}
}
[root@vdicube_pub_ceph_nfs /]#
> 28/01/2019 23:15:18 : epoch 5c4f7ef5 : vdicube_pub_ceph_nfs : ganesha.nfsd-25[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs,
This one can be made to not happen by setting:
NFS_KRB5
{
Active_krb5 = false;
}
You are right, the previous message has disappeared!
> 28/01/2019 23:15:18 : epoch 5c4f7ef5 : vdicube_pub_ceph_nfs : ganesha.nfsd-25[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized,
> 28/01/2019 23:15:18 : epoch 5c4f7ef5 : vdicube_pub_ceph_nfs : ganesha.nfsd-25[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists,
This one perhaps should not be an EVENT; maybe just an INFO, or maybe silent (with perhaps an INFO if it IS created).
I have deleted the folder /var/run/ganesha before running ganesha, and the messages have disappeared too.
> 28/01/2019 23:15:20 : epoch 5c4f7ef5 : vdicube_pub_ceph_nfs : ganesha.nfsd-25[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0),
Hmm, the code that generates this perhaps should not be executed if Active_krb5 = false.
Unfortunately not. Messages are still in the log:
Yea, we will have to make code changes for that one.
05/02/2019 22:14:08 : epoch 5c59fc9f : vdicube_pub_ceph_nfs : ganesha.nfsd-25[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
The statd daemon is not necessary for NFS v4 only, and Enable_NLM = false is just fine in that case. NLM is Network Lock Manager, the lock protocol for NFS v3.
Let me ask another question regarding related services: are sssd and rpcbind necessary for NFSv4 too?
sssd is not necessary for NFS though I guess maybe it could be used (I really don’t know anything about it… other than knowing it doesn’t need to be part of a minimal NFS implementation…)
rpcbind is optional with NFS v4. NFS v4 clients are well prepared to communicate to the server over the default port 2049 or optionally a port specified on the mount command.
You can also set Protocols = 4 in NFS_CORE_PARAM to completely disable NFS v4 (and 9P).
I don't want to disable nfsv4... :S. Please can you clarify?
Sorry, I mistyped, Protocols = 4 will leave NFS v4 active and disable NFS v3 and 9P.
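For reference, the v4-only settings discussed in this thread collected into one fragment (a sketch, using the option names as they appear above):

```
NFS_CORE_PARAM
{
	# NLM is the lock protocol for NFS v3; not needed for v4-only.
	Enable_NLM = false;
	# Leave only NFS v4 active (disables NFS v3 and 9P).
	Protocols = 4;
}
```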
Thanks a lot.
V2.8 release planned date
by Denis Kondratenko
Hi guys,
what is the planned date for the V2.8 release?
Thanks,
--
Denis Kondratenko
Engineering Manager SUSE Linux Enterprise Storage
SUSE LINUX GmbH
Maxfeldstraße 5
90409 Nürnberg
Germany
GF: SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Dilip
Upmanyu, Graham Norton, HRB 21284 (AG Nürnberg)
Proposed 2.7.2
by Daniel Gryniewicz
Here's my proposed 2.7.2 patchset. I've only included bug fixes, except
for the requested export cleanup. If you have other changes you want
in, please propose them in this thread.
2f6d8b7d6 (HEAD -> V2.7-stable) Remove -Wabi from maintainer mode
949de68f8 Fix READDIR duplicate entries
1249a2723 Fix attribute comparison in NFS4_OP_VERIFY
20ed57717 Fix GTest build
ba756d3ce nfs4_op_open: NULL pointer bug - replace goto out3 with return
ebcde738d Don't call nfs_req_creds if we don't have export
7b5c6c833 Handle race while adding host to IP-name cache
9edc1fbab exports: prune off old exports after reloading config
8ea299930 exports: copy the config tree generation to the export when adding or updating
914469434 exports: add generation counter to config_root and helpers to fetch it
d0d4d0201 exports: don't allow dbus remove to leave subexports disconnected in pseudoroot
5d20939bf ganesha_status.py - fix missing parens
9fa5bb085 rpm/selinux: fix %pre install and %postun uninstall of selinux policy
1d5e66792 Fix to set ATTR_RDATTR_ERR correctly as valid_mask in open2.
1d8f0d242 CEPH: do a getattr after creating a dir and applying extra attributes
b62c60b49 Read/Write - Don't leak owners/states
7e7c01f23 SAL: fix multiple state reference leaks
3be24ba32 Set op_ctx for lock_avail and lock_grant
ad2b9a30a FSAL_VFS, FreeBSD: upstream support for d_off
c4119a3c5 Add checks for client access in NFS user access checks.
a1b0888f3 FSAL_UP: add missing locks in async delegation recall
0eb7faf67 Handle NLM share FREE_ALL for windows clients
6cdc3fb22 NLM share reservation access check with owner_skip
770db8de3 Fixed dereferencing null ctx_export on nfs_read
789b56d35 fix the bug of import config form rados url
01f6e20d3 Remove idmap entry only if it is present in uid_tree
218d7f8ec MDCACHE: fix invalid assert in mdcache_avl_lookup_ck()
0fc6e3a2a Restore op_ctx->ctx_export and fsal_export in nfs4_op_readdir
adce80154 Fix incorrect parent file handle update on rename.
7a22a86fc Fix NFSv3 EOF handling.
738550a12 FSAL_GLUSTER/FSAL_GPFS: Do not error out if setattr ATTR_ACL with empty acl
6fce461ad FSAL_GLUSTER: fix memory leak of acl_t
1b5ba9978 [GPFS] Handle failures with NFS readdir operation
ae73a8f32 FSAL_GLUSTER: Copy user creds and lease_id after reopening fd
3ea561a75 Acquire state's fdlock in close2
9383863eb Remove unexported export on DBUS unexport.
40c7b7d54 rpc_callback : check return value for nfs_rpc_create_chan_v40() properly
c86969ef0 FSAL_PROXY : module options
e23eaf9c9 SAL: Fix dead lock in revoke_owner_delegs()
Daniel