So, this is interesting, since it means that entries are being reaped.
There are dirents for entries that no longer exist, so re-adding the entry
collides with the existing dirent.
If you stop the load and wait a while, does the number of entries go down?
On Tue, Jan 29, 2019 at 4:13 PM <vrungta(a)amazon.com> wrote:
>
> The issue is very reproducible. It just takes a few hours. If there is any
> debug info you would like turned on, I'd be glad to enable it and re-run the test.
>
> The test workload that causes this:
> Concurrent access from multiple threads: 1 thread continuously (in a loop)
> running python os.walk (i.e., readdir) over the entire filesystem, roughly
> ~5M files total. 5 more threads are writing a few thousand files each. When
> the writes complete, a single thread verifies the written content, then
> deletes it. Then the writes repeat.
>
> I did see about 97,000 of the following messages in a few minutes of enabling debug:
>
>     LogDebugAlt(COMPONENT_NFS_READDIR, COMPONENT_CACHE_INODE,
>                 "Already existent when inserting new dirent on entry=%p name=%s",
>                 entry, v->name);
>     LogFullDebugAlt(COMPONENT_NFS_READDIR, COMPONENT_CACHE_INODE,
>                     "Duplicate insert of %s v->chunk=%p v2->chunk=%p",
>                     v->name, v->chunk, v2->chunk);
> _______________________________________________
> Devel mailing list -- devel(a)lists.nfs-ganesha.org
> To unsubscribe send an email to devel-leave(a)lists.nfs-ganesha.org