ESXi 6.7 client creating thick eager zeroed vmdk files using Ceph FSAL
by Robert Toole
Hi,
I have a 3 node Ceph Octopus 15.2.7 cluster running on fully up-to-date
CentOS 7 with nfs-ganesha 3.5.
After following the Ceph install guide
https://docs.ceph.com/en/octopus/cephadm/install/#deploying-nfs-ganesha
I am able to create an NFS 4.1 datastore in VMware using the IP addresses
of all three nodes. Everything appears to work OK.
The issue, however, is that for some reason ESXi is creating thick
provisioned eager zeroed disks instead of thin provisioned disks on this
datastore, whether I am migrating, cloning, or creating new VMs. Even
running vmkfstools -i disk.vmdk -d thin thin_disk.vmdk still results in
a thick eager zeroed vmdk file.
This should not be possible on an NFS datastore: VMware requires a VAAI
NAS plugin before it can thick provision disks over NFS.
Linux clients on the same datastore can create thin qcow2 images, yet
looking from the Linux hosts at the images created by ESXi you can see
that the vmdks are indeed thick:
ls -lsh
total 81G
512 -rw-r--r--. 1 root root 230 Mar 25 15:17 test_vm-2221e939.hlog
40G -rw-------. 1 root root 40G Mar 25 15:17 test_vm-flat.vmdk
40G -rw-------. 1 root root 40G Mar 25 15:56 test_vm_thin-flat.vmdk
512 -rw-------. 1 root root 501 Mar 25 15:57 test_vm_thin.vmdk
512 -rw-------. 1 root root 473 Mar 25 15:17 test_vm.vmdk
0 -rw-r--r--. 1 root root 0 Jan 6 1970 test_vm.vmsd
2.0K -rwxr-xr-x. 1 root root 2.0K Mar 25 15:17 test_vm.vmx
but the qcow2 files from the Linux hosts are thin, as one would expect:
qemu-img create -f qcow2 big_disk_2.img 500G
ls -lsh
total 401K
200K -rw-r--r--. 1 root root 200K Mar 25 15:47 big_disk_2.img
200K -rw-r--r--. 1 root root 200K Mar 25 15:44 big_disk.img
512 drwxr-xr-x. 2 root root 81G Mar 25 15:57 test_vm
These ls -lsh results are the same from ESXi, Linux NFS clients, and the
CephFS kernel client.
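For completeness, the apparent-size vs. allocated-blocks comparison behind
those listings can be made explicit with plain coreutils (nothing
Ganesha-specific; same result on any of the clients):

    # apparent size, i.e. what readers of the file see
    du -h --apparent-size test_vm-flat.vmdk big_disk.img
    # blocks actually allocated on the backing store
    du -h test_vm-flat.vmdk big_disk.img
    # both numbers at once; a thin/sparse file has blocks << size
    stat -c '%n: size=%s bytes, blocks=%b' test_vm-flat.vmdk big_disk.img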
What is happening here? Are there undocumented VAAI features in
nfs-ganesha with the CephFS FSAL? If so, how do I turn them off? I want
thin provisioned disks.
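For what it's worth, the ~500-byte *.vmdk files in the listing are
plain-text descriptors, and if I'm reading the format right the
provisioning type is recorded there as well (the exact ddb key is from
memory, so treat this as a hint rather than gospel):

    grep -i 'createType\|thinProvisioned' test_vm.vmdk test_vm_thin.vmdk
    # a thin disk normally carries ddb.thinProvisioned = "1"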
ceph nfs export ls dev-nfs-cluster --detailed
[
  {
    "export_id": 1,
    "path": "/Development-Datastore",
    "cluster_id": "dev-nfs-cluster",
    "pseudo": "/Development-Datastore",
    "access_type": "RW",
    "squash": "no_root_squash",
    "security_label": true,
    "protocols": [
      4
    ],
    "transports": [
      "TCP"
    ],
    "fsal": {
      "name": "CEPH",
      "user_id": "dev-nfs-cluster1",
      "fs_name": "dev_cephfs_vol",
      "sec_label_xattr": ""
    },
    "clients": []
  }
]
rpm -qa | grep ganesha
nfs-ganesha-ceph-3.5-1.el7.x86_64
nfs-ganesha-rados-grace-3.5-1.el7.x86_64
nfs-ganesha-rados-urls-3.5-1.el7.x86_64
nfs-ganesha-3.5-1.el7.x86_64
centos-release-nfs-ganesha30-1.0-2.el7.centos.noarch
rpm -qa | grep ceph
python3-cephfs-15.2.7-0.el7.x86_64
nfs-ganesha-ceph-3.5-1.el7.x86_64
python3-ceph-argparse-15.2.7-0.el7.x86_64
python3-ceph-common-15.2.7-0.el7.x86_64
cephadm-15.2.7-0.el7.x86_64
libcephfs2-15.2.7-0.el7.x86_64
ceph-common-15.2.7-0.el7.x86_64
ceph -v
ceph version 15.2.7 (<ceph_uuid>) octopus (stable)
The Ceph cluster is healthy, using BlueStore on raw 3.84 TB SATA 7200 rpm
disks.
--
Robert Toole
rtoole(a)tooleweb.ca
403 368 5680
Possible to run NFSv4 server as non-root user?
by Tom McLaughlin
Hi,
I'm wondering if it's possible to run Ganesha as a normal user, starting an NFSv4 server on a port the user is allowed to bind and serving the user's files. I've made some progress, but I'm currently stuck on the error message "ganesha.nfsd-1003865[main] __Register_program :DISP :MAJ :Cannot register RQUOTA V1 on UDP". I thought NFSv4 didn't need to run on UDP, and I've tried to configure it to run only on TCP, but no luck so far.
I'm also slightly confused about the rpcbind dependency. Is it possible to disable or avoid it for a simple use case like this?
Here's my config so far:
NFS_KRB5 {
    Active_krb5 = false;
}

NFS_CORE_PARAM {
    Protocols = 4;
    NFS_Port = 7777;
    Rquota_Port = 7778;
}

EXPORT {
    Export_Id = 2;
    Path = /tmp/exported;
    Pseudo = /tmp/exported;
    Access_Type = RW;
    Squash = No_Root_Squash;
    Transports = "TCP";
    Protocols = 4;
    # SecType = none;
    SecType = "sys";

    FSAL {
        Name = VFS;
    }
}
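One direction I intend to try next is switching off the side protocols
entirely; this sketch assumes my build honours the Enable_NLM,
Enable_RQUOTA and Enable_UDP switches described in ganesha-core-config(8),
which should leave nothing that needs rpcbind registration or a
privileged port:

NFS_CORE_PARAM {
    Protocols = 4;
    NFS_Port = 7777;
    Enable_NLM = false;     # NLM is NFSv3-only, not needed for a v4-only server
    Enable_RQUOTA = false;  # stop the RQUOTA V1 registration that fails above
    Enable_UDP = false;     # TCP only (if the build has this switch)
}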
Proxy FSAL - NFS4 -> 3
by Nick Couchman
I'm using Ganesha NFS with the Proxy FSAL. It seems to work fine as long as
I'm going between NFS clients and servers of the same version (NFSv4.1
client -> NFSv4.1 server). However, I have a situation where I'm trying to
proxy an NFSv4.1 service (Azure File Share NFS) for clients that do not
support NFSv4. When I do this, the "mount" command on the NFSv3 clients
works fine, but as soon as I try any data operations, I get I/O errors:
# mount -t nfs -o vers=3 10.11.12.13:/azure-nfs-file /mnt/azure
# ls -l /mnt/azure
ls: reading directory '/mnt/azure': Remote I/O error
total 0
I've posted my (sanitized) configuration below. My questions are:
1) Is it possible to use Ganesha to proxy this way, allowing clients to
access an NFS server that differs in version, or is this not supported?
2) Am I missing something in my Ganesha config to enable this?
I can also provide the debug logging, if that's helpful.
Thanks - Nick
==ganesha.conf==
NFS_CORE_PARAM {
    mount_path_pseudo = true;
    Protocols = 3,4;
    MNT_Port = 20048;
}

LOG {
    Default_Log_Level = INFO;
    Components {
        FSAL = FULL_DEBUG;
    }
}

EXPORT_DEFAULTS {
    Access_Type = RW;
}

## Azure NFS Share
EXPORT
{
    Export_id = 902;
    Path = "/stgaccount/azure-file";
    Pseudo = "/azure-file";
    Access_Type = RW;
    Squash = no_root_squash;
    Sectype = sys;

    FSAL {
        Name = proxy;
        Srv_Addr = 10.1.2.3;
        Enable_Handle_Mapping = TRUE;
        HandleMap_DB_Dir = "/var/ganesha/handledb/902";
        HandleMap_Tmp_Dir = "/run/ganesha/tmp/902";
        HandleMap_DB_Count = 8;
    }
}
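One thing I still need to rule out myself (an assumption on my part, not
something I've confirmed): Enable_Handle_Mapping persists the NFSv3
handles it fabricates in the directories named above, so those have to
exist and be writable by the ganesha user before the first v3 request
comes in:

    mkdir -p /var/ganesha/handledb/902 /run/ganesha/tmp/902
    # /run is typically tmpfs, so the tmp dir has to be recreated on every boot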
Re: Proxy FSAL - NFS4 -> 3
by Nick Couchman
On Sat, Jun 26, 2021 at 12:34 PM Todd Pfaff <pfaff(a)rhpcs.mcmaster.ca> wrote:
> Nick,
>
> I've been using the Proxy FSAL for the past two years and have had much
> better results since moving to the V4.dev branch about a year ago,
> particularly after Solomon Boulos' improvements to the Proxy FSAL. I
> don't claim to be using it exactly as you've described but I believe it
> may be worth your while to try the V4.dev branch.
>
>
Todd,
Thanks for the hints - I'll give it a shot - either see if packages are
available or just compile it myself.
-Nick
fsal_common_is_referral :FSAL :EVENT :Failed to get attrs for referral, handle: 0x7f370400b660, valid_mask: 0, request_mask: 11dfce, supported: 0, error: Permission denied
by Singiali, Bom Bahadur
Hi Team,
Need your advice on this issue. We have a similar setup/issue to
https://github.com/nfs-ganesha/nfs-ganesha/issues/575
NFS client mounts work at first, but after a few hours some mounts can no
longer be mounted (/nfs-test/ictadmin: Permission denied: Permission denied)
===
NFS-Ganesha Release = V3.3
===
=== Config Snippet ===
EXPORT
{
    Export_Id = 1;
    Path = /ictstr01/ictadmin;
    Pseudo = /ictstr01/ictadmin;
    Protocols = 4;
    Access_Type = RW;
    Squash = root_squash;

    FSAL {
        Name = VFS;
    }
}

EXPORT
{
    Export_Id = 8;
    Path = /ictstr01/home;
    Pseudo = /ictstr01/home;
    Protocols = 4;

    FSAL {
        Name = VFS;
    }

    CLIENT
    {
        Clients = 146.107.1.225/24;
        Access_Type = RW;
        Squash = no_root_squash;
    }

    CLIENT
    {
        Clients = 146.107.0.0/24, 10.216.0.0/16;
        Access_Type = RW;
        Squash = root_squash;
    }
}
===
Error details:
===
2021-06-24 23:55:07 : epoch 60d51b5b : nsds01 : ganesha.nfsd-116575[main]
nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
2021-06-24 23:55:07 : epoch 60d51b5b : nsds01 : ganesha.nfsd-116575[main]
nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
2021-06-24 23:55:07 : epoch 60d51b5b : nsds01 : ganesha.nfsd-116575[main]
nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
2021-06-24 23:55:07 : epoch 60d51b5b : nsds01 : ganesha.nfsd-116575[main]
nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
2021-06-24 23:55:07 : epoch 60d51b5b : nsds01 : ganesha.nfsd-116575[main]
nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
2021-06-24 23:55:07 : epoch 60d51b5b : nsds01 : ganesha.nfsd-116575[main]
nfs_start :NFS STARTUP :EVENT
:-------------------------------------------------
2021-06-24 23:55:07 : epoch 60d51b5b : nsds01 : ganesha.nfsd-116575[main]
nfs_start :NFS STARTUP :EVENT : NFS SERVER INITIALIZED
2021-06-24 23:55:07 : epoch 60d51b5b : nsds01 : ganesha.nfsd-116575[main]
nfs_start :NFS STARTUP :EVENT
:-------------------------------------------------
2021-06-24 23:55:09 : epoch 60d51b5b : nsds01 : ganesha.nfsd-116575[dbus]
nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
2021-06-24 23:55:09 : epoch 60d51b5b : nsds01 : ganesha.nfsd-116575[dbus]
nfs_start_grace :STATE :EVENT :NFS Server recovery event 5 nodeid -1 ip
10.11.4.33
2021-06-24 23:55:09 : epoch 60d51b5b : nsds01 : ganesha.nfsd-116575[dbus]
fs_read_recov_clids_takeover :CLIENT ID :EVENT :Recovery for nodeid -1 dir
(/var/lib/nfs/ganesha/10.11.4.33/v4recov)
2021-06-24 23:55:09 : epoch 60d51b5b : nsds01 : ganesha.nfsd-116575[dbus]
fs_read_recov_clids_impl :CLIENT ID :EVENT :Failed to open v4 recovery dir
(/var/lib/nfs/ganesha/10.11.4.33/v4recov), errno: No such file or
directory (2)
2021-06-24 23:55:09 : epoch 60d51b5b : nsds01 : ganesha.nfsd-116575[dbus]
fs_read_recov_clids_takeover :CLIENT ID :EVENT :Failed to read v4 recovery
dir (/var/lib/nfs/ganesha/10.11.4.33/v4recov)
2021-06-24 23:55:09 : epoch 60d51b5b : nsds01 : ganesha.nfsd-116575[dbus]
nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
2021-06-24 23:55:09 : epoch 60d51b5b : nsds01 : ganesha.nfsd-116575[dbus]
nfs_start_grace :STATE :EVENT :NFS Server recovery event 5 nodeid -1 ip
10.11.4.32
2021-06-24 23:55:09 : epoch 60d51b5b : nsds01 : ganesha.nfsd-116575[dbus]
fs_read_recov_clids_takeover :CLIENT ID :EVENT :Recovery for nodeid -1 dir
(/var/lib/nfs/ganesha/10.11.4.32/v4recov)
2021-06-24 23:55:09 : epoch 60d51b5b : nsds01 : ganesha.nfsd-116575[dbus]
fs_read_recov_clids_impl :CLIENT ID :EVENT :Failed to open v4 recovery dir
(/var/lib/nfs/ganesha/10.11.4.32/v4recov), errno: No such file or
directory (2)
2021-06-24 23:55:09 : epoch 60d51b5b : nsds01 : ganesha.nfsd-116575[dbus]
fs_read_recov_clids_takeover :CLIENT ID :EVENT :Failed to read v4 recovery
dir (/var/lib/nfs/ganesha/10.11.4.32/v4recov)
2021-06-24 23:55:10 : epoch 60d51b5b : nsds01 : ganesha.nfsd-116575[dbus]
nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
2021-06-24 23:55:10 : epoch 60d51b5b : nsds01 : ganesha.nfsd-116575[dbus]
nfs_start_grace :STATE :EVENT :NFS Server recovery event 2 nodeid -1 ip
10.11.4.33
2021-06-24 23:55:10 : epoch 60d51b5b : nsds01 : ganesha.nfsd-116575[dbus]
nfs_release_v4_clients :STATE :EVENT :NFS Server V4 recovery release ip
10.11.4.33
2021-06-24 23:56:47 : epoch 60d51b5b : nsds01 :
ganesha.nfsd-116575[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS
Server Now NOT IN GRACE
2021-06-25 00:18:01 : epoch 60d51b5b : nsds01 :
ganesha.nfsd-116575[svc_45] fsal_common_is_referral :FSAL :EVENT :Failed
to get attrs for referral, handle: 0x7f370400b660, valid_mask: 0,
request_mask: 11dfce, supported: 0, error: Permission denied
2021-06-25 00:18:01 : epoch 60d51b5b : nsds01 :
ganesha.nfsd-116575[svc_23] fsal_common_is_referral :FSAL :EVENT :Failed
to get attrs for referral, handle: 0x7f370400b660, valid_mask: 0,
request_mask: 11dfce, supported: 0, error: Permission denied
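To capture which attribute fetch is being denied, we plan to raise the
FSAL component to full debug before the next occurrence (standard LOG
block syntax, see ganesha-log-config(8)):

LOG {
    Default_Log_Level = INFO;
    Components {
        FSAL = FULL_DEBUG;
    }
}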
Please advise, thanks.
Many thanks
---
Best Regards
Bom Bahadur Singiali
ICT Infrastructure, Helmholtz Zentrum München
<https://nip.helmholtz-muenchen.de/ict>
ntirpc v3.5
by Daniel Gryniewicz
Announcing ntirpc v3.5.
This stable release contains 3 commits, the most important of which
fixes READDIR for UDP connections. Commits are:
1c6127b8 Add xdr_putbufs to xdrmem
fef3758c Fix short UDP handling
e8711ece Don't release on resume
Daniel