Hi,
We are running nfs-ganesha on top of Gluster on a two-node Ubuntu cluster, with around
3 to 10 NFS clients connected to nfs-ganesha. When any NFS client starts reading a
large number of files on the mounted storage, memory utilization climbs very quickly
and is never released. Memory keeps growing until the OOM killer kills the process or
the Linux server reboots.
I have tried a lot of options under MDCACHE, but none of them have helped. Any
suggestions or help with fixing this issue would be hugely appreciated. Please let me
know if additional information is needed.
Note: the issue at
https://github.com/nfs-ganesha/nfs-ganesha/issues/116 is the
closest match to my problem I could find, but I am not sure whether its fix is in the
GA released version.
The memory usage of the ganesha process is as follows:
Files read - Memory usage
150K - 923508 kB
200K - 1418032 kB
250K - 2161752 kB
300K - 2926580 kB
400K - 4457200 kB
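In case the measurement method matters: numbers like the above can be sampled from the
resident set size (VmRSS) in /proc. The snippet below is a self-contained sketch that
uses the shell's own PID as a stand-in; for ganesha you would substitute the real PID.

```shell
# Sample the resident set size (VmRSS, in kB) of a process from /proc.
# Stand-in: this shell's own PID; for ganesha use: pid=$(pidof ganesha.nfsd)
pid=$$
awk '/^VmRSS:/ {print $2, $3}' "/proc/$pid/status"
```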
Version info:
NFS-Ganesha version - 3.3-ubuntu1~bionic6
Gluster version - 8.2-ubuntu1~bionic1
Ganesha config file:
NFS_CORE_PARAM {
    Protocols = 3, 4;
    mount_path_pseudo = true;
    MNT_Port = 38465;
    NLM_Port = 38467;
}

MDCACHE {
    FD_HWMark_Percent = 60;
    Reaper_Work_Per_Lane = 5000;
}

EXPORT
{
    # Export Id (mandatory; each EXPORT must have a unique Export_Id)
    Export_Id = 77;

    Transports = TCP, UDP;

    # Exported path (mandatory)
    Path = "/storage";

    # Pseudo path (required for NFSv4)
    Pseudo = "/storage";

    # Required for access (default is None); CLIENT blocks could be used instead
    Access_Type = RW;

    # Allow root access
    Squash = No_Root_Squash;

    # Security flavor supported
    SecType = "sys";

    # Exporting FSAL
    FSAL {
        Name = "GLUSTER";
        Hostname = localhost;
        Volume = "storage";
        Up_poll_usec = 10;  # Upcall poll interval in microseconds
        Transport = tcp;    # tcp or rdma
    }
}
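For completeness, the MDCACHE variants I have been experimenting with are along these
lines (Entries_HWMark is documented in ganesha-cache-config as a cap on the number of
cached inode entries, default 100000; the value below is illustrative, not a verified
fix):

```
MDCACHE {
    # Cap cached inode entries (default 100000); lowering this is an
    # attempt to contain cache growth, not a confirmed fix for the leak.
    Entries_HWMark = 50000;
    FD_HWMark_Percent = 60;
    Reaper_Work_Per_Lane = 5000;
}
```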
Thanks,
Prashanth