Thank you both for the help!
The capabilities needed to run on FSAL_VFS are a bit of a show-stopper for me, but for
posterity, here is my experience trying to get it to work.
This is the capsh invocation I came up with to start the process with the five or so
capabilities it needs (I found them listed in the source):
sudo capsh \
  --caps="cap_dac_read_search,cap_dac_override,cap_fowner,cap_setgid,cap_setuid+eip cap_setpcap+ep" \
  --keep=1 --user=tom \
  --addamb=cap_dac_read_search,cap_dac_override,cap_fowner,cap_setgid,cap_setuid \
  -- -c "/home/tom/tools/ganesha/result/bin/ganesha.nfsd -F -f ganesha.conf -L ./logs.txt -p ./ganesha.pid"
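To sanity-check which capabilities a process actually ended up with, you can read its capability sets out of /proc. Substitute the pid from ./ganesha.pid for $$ below; I use $$ (the current shell) only so the snippet runs standalone:

```shell
# Raw capability bitmasks (CapInh/CapPrm/CapEff/CapBnd/CapAmb) of a process.
grep Cap /proc/$$/status
# Decode the effective-set hex mask into capability names,
# if libcap's capsh is available on the box.
command -v capsh >/dev/null && \
  capsh --decode="$(awk '/CapEff/ {print $2}' /proc/$$/status)" || true
```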
and here's my config:
NFS_KRB5 {
    Active_krb5 = false;
}

NFS_CORE_PARAM {
    Protocols = 4;
    NFS_Port = 7777;
    Rquota_Port = 7778;
    Enable_RQUOTA = false;
}

NFSv4 {
    RecoveryRoot = "/tmp/recovery_root";
}

EXPORT {
    Export_Id = 2;
    Path = /tmp/exported;
    Pseudo = /tmp/exported;
    Access_Type = RW;
    Squash = No_Root_Squash;
    Transports = "TCP";
    Protocols = 4;
    # SecType = none;
    SecType = "sys";

    FSAL {
        Name = VFS;
    }
}
Note that both the custom recovery root and the export directory must already exist, or
the daemon segfaults immediately on startup.
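Creating both directories up front avoids that crash:

```shell
# Both paths referenced in the config above must exist
# before ganesha.nfsd starts.
mkdir -p /tmp/recovery_root /tmp/exported
```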
With this setup, the server *sort of* works. I can `mount` the export on the same
machine, and if I put a file in the exported directory, I can read it through the mount.
Writing through the mount is more problematic: if I echo some data into a file, the
write tends to hang for several minutes and only sometimes succeeds (such that I can
then read the file from the exported directory).
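For reference, the client-side mount against this config looks roughly like the following; the mount point is my own choice, and the port matches NFS_Port from NFS_CORE_PARAM:

```shell
# NFSv4 mount against the non-standard port 7777; the server path is
# the Pseudo path from the EXPORT block. /mnt/ganesha-test is a
# hypothetical mount point.
sudo mkdir -p /mnt/ganesha-test
sudo mount -t nfs4 -o port=7777 localhost:/tmp/exported /mnt/ganesha-test
```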
My dream is to run Ganesha against FSAL_VFS as a normal user inside a Docker container,
in order to share a folder that the user controls. But I can see that, at a minimum,
this will require running the container with extra capabilities.
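For the record, a container run along those lines would presumably need something like the following; the image name and host path are placeholders, and I have not verified that this capability set is sufficient:

```shell
# Hypothetical: grant the container the same capabilities the capsh
# invocation above grants the bare process, and expose the NFS port.
docker run --rm \
  --cap-add=DAC_READ_SEARCH --cap-add=DAC_OVERRIDE --cap-add=FOWNER \
  --cap-add=SETGID --cap-add=SETUID --cap-add=SETPCAP \
  -p 7777:7777 \
  -v /home/tom/shared:/tmp/exported \
  my-ganesha-image
```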