Hi,
Thanks a lot for your comments.
Regarding the first lines:
*28/01/2019 23:15:18 : epoch 5c4f7ef5 : vdicube_pub_ceph_nfs
: ganesha.nfsd-25[main] nfs_Init_svc :DISP :CRIT :Cannot acquire
credentials for principal nfs,*
28/01/2019 23:15:18 : epoch 5c4f7ef5 : vdicube_pub_ceph_nfs :
ganesha.nfsd-25[main]
nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized,
*28/01/2019 23:15:18 : epoch 5c4f7ef5 : vdicube_pub_ceph_nfs
: ganesha.nfsd-25[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT
:Callback creds directory (/var/run/ganesha) already exists,*
*28/01/2019 23:15:20 : epoch 5c4f7ef5 : vdicube_pub_ceph_nfs
: ganesha.nfsd-25[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN
:gssd_refresh_krb5_machine_credential failed (-1765328160:0),*
For the time being, I don't need krb5 authentication; I just want to filter
by client IP address. I have not installed any krb5 package and my
ganesha.conf has no reference to krb5, so I cannot understand why
I'm receiving these messages. Any help on this point will be really welcome.
NFSV4
{
Allow_Numeric_Owners = false;
}
NFS_CORE_PARAM
{
# Enable NLM (network lock manager protocol)
Enable_NLM = false;
}
EXPORT
{
# Export Id (mandatory, each EXPORT must have a unique Export_Id)
Export_Id = 2046;
# Exported path (mandatory)
Path = /;
# Pseudo Path (for NFS v4)
Pseudo = /;
# Access control options
Access_Type = NONE;
Squash = No_Root_Squash;
Anonymous_Uid = -2;
Anonymous_Gid = -2;
# NFS protocol options
Transports = "TCP";
Protocols = "4";
SecType = "sys";
Manage_Gids = true;
CLIENT {
Clients = *;
Access_Type = RO;
}
# Exporting FSAL
FSAL {
Name = CEPH;
User_Id = "admin";
}
}
LOG {
Default_Log_Level = WARN;
Components {
# ALL = DEBUG;
# SESSIONS = INFO;
}
}
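Since my goal is just filtering by client IP, I understand the CLIENT block
above could list addresses or subnets instead of the wildcard. A sketch (the
subnet below is only a placeholder for the real client range):
CLIENT {
        # Placeholder subnet -- replace with the actual client range
        Clients = 192.168.1.0/24;
        Access_Type = RO;
}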
Are those "red" lines normal?
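If krb5 is not wanted at all, perhaps it can be disabled explicitly to
silence these messages. A sketch, assuming the NFS_KRB5 block is supported by
this ganesha build (please correct me if the option name is wrong):
NFS_KRB5
{
        # Disable kerberos entirely (untested here)
        Active_krb5 = false;
}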
And regarding the other lines:
*28/01/2019 23:15:20 : epoch 5c4f7ef5 : vdicube_pub_ceph_nfs
: ganesha.nfsd-25[main] nsm_connect :NLM :CRIT :connect to statd failed:
RPC: Unknown protocol,*
*28/01/2019 23:15:20 : epoch 5c4f7ef5 : vdicube_pub_ceph_nfs
: ganesha.nfsd-25[main] nsm_unmonitor_all :NLM :CRIT :Unmonitor all
nsm_connect failed,*
I'm having some problems starting daemons inside my container. Is statd
mandatory for ganesha NFSv4 to work properly?
I have discovered that setting Enable_NLM = false in the NFS_CORE_PARAM block
makes these messages disappear, but I don't know whether it is needed for
NFSv4.
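For an NFSv4-only server, my understanding is that both NLM and the v3
protocol can be turned off globally. A sketch in NFS_CORE_PARAM (option names
as I understand them; please verify against the sample configs in the ganesha
sources):
NFS_CORE_PARAM
{
        # NFSv4 only: no NLM/statd needed, and v3 can be disabled as well
        Enable_NLM = false;
        NFS_Protocols = 4;
}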
Thanks a lot
On Tue, 29 Jan 2019 at 14:36, Jeff Layton (<jlayton(a)poochiereds.net>)
wrote:
On Mon, 2019-01-28 at 23:20 +0100, Oscar Segarra wrote:
> Hi,
>
> I'm running nfs-ganesha from docker container. It seems work perfectly,
nevertheless when I start the server I get the following messages:
>
> Bootstrapping Ganesha NFS config,
> Bootstrapping CA cert,
> Skipping boostraping idmap config,
> Starting rpcbind,
> Starting rpc.statd,
> Starting sssd,
> Starting dbus,
> Starting Ganesha NFS,
> 28/01/2019 23:15:17 : epoch 5c4f7ef5 : vdicube_pub_ceph_nfs :
ganesha.nfsd-25[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha
Version 2.7.1,
> 28/01/2019 23:15:17 : epoch 5c4f7ef5 : vdicube_pub_ceph_nfs :
ganesha.nfsd-25[main] nfs_set_param_from_conf :NFS STARTUP :EVENT
:Configuration file successfully parsed,
> 28/01/2019 23:15:17 : epoch 5c4f7ef5 : vdicube_pub_ceph_nfs :
ganesha.nfsd-25[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID
Mapper.,
> 28/01/2019 23:15:18 : epoch 5c4f7ef5 : vdicube_pub_ceph_nfs :
ganesha.nfsd-25[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper
successfully initialized.,
> 28/01/2019 23:15:18 : epoch 5c4f7ef5 : vdicube_pub_ceph_nfs :
ganesha.nfsd-25[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN
GRACE, duration 90,
> 28/01/2019 23:15:18 : epoch 5c4f7ef5 : vdicube_pub_ceph_nfs :
ganesha.nfsd-25[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE
was successfully removed for proper quota management in FSAL,
> 28/01/2019 23:15:18 : epoch 5c4f7ef5 : vdicube_pub_ceph_nfs :
ganesha.nfsd-25[main] lower_my_caps :NFS STARTUP :EVENT :currenty set
capabilities are: =
cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_net_raw,cap_sys_chroot,cap_mknod,cap_audit_write,cap_setfcap+eip,
> 28/01/2019 23:15:18 : epoch 5c4f7ef5 : vdicube_pub_ceph_nfs :
ganesha.nfsd-25[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials
for principal nfs,
> 28/01/2019 23:15:18 : epoch 5c4f7ef5 : vdicube_pub_ceph_nfs :
ganesha.nfsd-25[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread
initialized,
> 28/01/2019 23:15:18 : epoch 5c4f7ef5 : vdicube_pub_ceph_nfs :
ganesha.nfsd-25[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback
creds directory (/var/run/ganesha) already exists,
> 28/01/2019 23:15:20 : epoch 5c4f7ef5 : vdicube_pub_ceph_nfs :
ganesha.nfsd-25[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN
:gssd_refresh_krb5_machine_credential failed (-1765328160:0),
This is just saying that it can't find a krb5 machine credential for
this server. Are you using kerberos? If not, then you can just ignore
this. There may be some way to disable krb5 altogether though.
> 28/01/2019 23:15:20 : epoch 5c4f7ef5 : vdicube_pub_ceph_nfs :
ganesha.nfsd-25[main] nfs_Start_threads :THREAD :EVENT :Starting delayed
executor.,
> 28/01/2019 23:15:20 : epoch 5c4f7ef5 : vdicube_pub_ceph_nfs :
ganesha.nfsd-25[main] nfs_Start_threads :THREAD :EVENT :9P/TCP dispatcher
thread was started successfully,
> 28/01/2019 23:15:20 : epoch 5c4f7ef5 : vdicube_pub_ceph_nfs :
ganesha.nfsd-25[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was
started successfully,
> 28/01/2019 23:15:20 : epoch 5c4f7ef5 : vdicube_pub_ceph_nfs :
ganesha.nfsd-25[main] nfs_Start_threads :THREAD :EVENT :admin thread was
started successfully,
> 28/01/2019 23:15:20 : epoch 5c4f7ef5 : vdicube_pub_ceph_nfs :
ganesha.nfsd-25[main] nfs_Start_threads :THREAD :EVENT :reaper thread was
started successfully,
> 28/01/2019 23:15:20 : epoch 5c4f7ef5 : vdicube_pub_ceph_nfs :
ganesha.nfsd-25[main] nfs_Start_threads :THREAD :EVENT :General fridge was
started successfully,
> 28/01/2019 23:15:20 : epoch 5c4f7ef5 : vdicube_pub_ceph_nfs :
ganesha.nfsd-25[_9p_disp] _9p_dispatcher_thread :9P DISP :EVENT :9P
dispatcher started,
> 28/01/2019 23:15:20 : epoch 5c4f7ef5 : vdicube_pub_ceph_nfs :
ganesha.nfsd-25[main] rpc :TIRPC :EVENT :svc_rqst_hook_events: 0x4b7cb80 fd
1024 xp_refcnt 1 sr_rec 0x4af76d0 evchan 2 ev_refcnt 3 epoll_fd 37 control
fd pair (35:36) hook failed (9),
> 28/01/2019 23:15:20 : epoch 5c4f7ef5 : vdicube_pub_ceph_nfs :
ganesha.nfsd-25[main] nsm_connect :NLM :CRIT :connect to statd failed: RPC:
Unknown protocol,
> 28/01/2019 23:15:20 : epoch 5c4f7ef5 : vdicube_pub_ceph_nfs :
ganesha.nfsd-25[main] nsm_unmonitor_all :NLM :CRIT :Unmonitor all
nsm_connect failed,
Looks like you're exporting via NFSv3? If so, then you probably want to
set up NLM so file locking will work. If not, then you may want to just
disable NFSv3/NLM altogether, as that will allow the server to lift the
grace period early (see the ceph.conf sample config in the ganesha sources).
> 28/01/2019 23:15:20 : epoch 5c4f7ef5 : vdicube_pub_ceph_nfs :
ganesha.nfsd-25[main] nfs_start :NFS STARTUP :EVENT
:-------------------------------------------------,
> 28/01/2019 23:15:20 : epoch 5c4f7ef5 : vdicube_pub_ceph_nfs :
ganesha.nfsd-25[main] nfs_start :NFS STARTUP :EVENT : NFS
SERVER INITIALIZED,
> 28/01/2019 23:15:20 : epoch 5c4f7ef5 : vdicube_pub_ceph_nfs :
ganesha.nfsd-25[main] nfs_start :NFS STARTUP :EVENT
:-------------------------------------------------,
> 28/01/2019 23:16:50 : epoch 5c4f7ef5 : vdicube_pub_ceph_nfs :
ganesha.nfsd-25[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now
NOT IN GRACE
Not a warning -- just a log message saying that the grace period has
ended. The timing of this can be important to know if you have problems
with recovery so we log it at a fairly high level.
>
> I'd like to fix them in order to have a clean execution. Is this
possible?
>
> Any help will be welcome,
>
> Thanks a lot.
> Ó
>
--
Jeff Layton <jlayton(a)poochiereds.net>