ESXi 6.7 client creating thick eager-zeroed vmdk files using Ceph FSAL
by Robert Toole
Hi,
I have a 3-node Ceph Octopus 15.2.7 cluster running on fully up-to-date
CentOS 7 with nfs-ganesha 3.5.
After following the Ceph install guide
https://docs.ceph.com/en/octopus/cephadm/install/#deploying-nfs-ganesha
I am able to create an NFS 4.1 datastore in VMware using the IP
addresses of all three nodes. Everything appears to work OK.
The issue, however, is that for some reason ESXi is creating
thick-provisioned, eager-zeroed disks instead of thin-provisioned disks
on this datastore, whether I am migrating, cloning, or creating new
VMs. Even running vmkfstools -i disk.vmdk -d thin thin_disk.vmdk still
results in a thick eager-zeroed vmdk file.
This should not be possible on an NFS datastore: VMware requires a VAAI
NAS plugin before it can thick-provision disks over NFS.
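For reference, one way to check whether any VAAI NAS vendor plugin is
present on the host is to list the installed VIBs (standard esxcli
usage; the grep filter here is only illustrative):
esxcli software vib list | grep -i vaai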
Linux clients mounting the same datastore can create thin qcow2 images,
and when looking from the Linux hosts at the images created by ESXi you
can see that the vmdks are indeed thick:
ls -lsh
total 81G
512 -rw-r--r--. 1 root root 230 Mar 25 15:17 test_vm-2221e939.hlog
40G -rw-------. 1 root root 40G Mar 25 15:17 test_vm-flat.vmdk
40G -rw-------. 1 root root 40G Mar 25 15:56 test_vm_thin-flat.vmdk
512 -rw-------. 1 root root 501 Mar 25 15:57 test_vm_thin.vmdk
512 -rw-------. 1 root root 473 Mar 25 15:17 test_vm.vmdk
0 -rw-r--r--. 1 root root 0 Jan 6 1970 test_vm.vmsd
2.0K -rwxr-xr-x. 1 root root 2.0K Mar 25 15:17 test_vm.vmx
but the qcow2 files from the Linux hosts are thin, as one would expect:
qemu-img create -f qcow2 big_disk_2.img 500G
ls -lsh
total 401K
200K -rw-r--r--. 1 root root 200K Mar 25 15:47 big_disk_2.img
200K -rw-r--r--. 1 root root 200K Mar 25 15:44 big_disk.img
512 drwxr-xr-x. 2 root root 81G Mar 25 15:57 test_vm
These ls -lsh results are the same from ESXi, from Linux NFS clients,
and from the CephFS kernel client.
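For reference, apparent size versus actual allocation can also be
compared directly with GNU stat or du (test_vm-flat.vmdk is the file
from the listing above); a sparse file would report far fewer allocated
blocks than its apparent size suggests:
stat -c 'size=%s blocks=%b' test_vm-flat.vmdk
du -h --apparent-size test_vm-flat.vmdk
du -h test_vm-flat.vmdk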
What is happening here? Are there undocumented VAAI features in
nfs-ganesha with the CephFS FSAL? If so, how do I turn them off? I
want thin-provisioned disks.
ceph nfs export ls dev-nfs-cluster --detailed
[
  {
    "export_id": 1,
    "path": "/Development-Datastore",
    "cluster_id": "dev-nfs-cluster",
    "pseudo": "/Development-Datastore",
    "access_type": "RW",
    "squash": "no_root_squash",
    "security_label": true,
    "protocols": [
      4
    ],
    "transports": [
      "TCP"
    ],
    "fsal": {
      "name": "CEPH",
      "user_id": "dev-nfs-cluster1",
      "fs_name": "dev_cephfs_vol",
      "sec_label_xattr": ""
    },
    "clients": []
  }
]
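For reference, the Linux clients mount this export with plain NFS v4.1
options along these lines (the server address here is a placeholder):
mount -t nfs -o vers=4.1 192.0.2.10:/Development-Datastore /mnt/dev-datastore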
rpm -qa | grep ganesha
nfs-ganesha-ceph-3.5-1.el7.x86_64
nfs-ganesha-rados-grace-3.5-1.el7.x86_64
nfs-ganesha-rados-urls-3.5-1.el7.x86_64
nfs-ganesha-3.5-1.el7.x86_64
centos-release-nfs-ganesha30-1.0-2.el7.centos.noarch
rpm -qa | grep ceph
python3-cephfs-15.2.7-0.el7.x86_64
nfs-ganesha-ceph-3.5-1.el7.x86_64
python3-ceph-argparse-15.2.7-0.el7.x86_64
python3-ceph-common-15.2.7-0.el7.x86_64
cephadm-15.2.7-0.el7.x86_64
libcephfs2-15.2.7-0.el7.x86_64
ceph-common-15.2.7-0.el7.x86_64
ceph -v
ceph version 15.2.7 (<ceph_uuid>) octopus (stable)
The Ceph cluster is healthy, using BlueStore on raw 3.84 TB SATA
7200 RPM disks.
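Cluster health here can be confirmed with the standard status commands:
ceph -s
ceph health detail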
--
Robert Toole
rtoole(a)tooleweb.ca
403 368 5680
Announce Push of V6.3
by Frank Filz
Branch next
Tag: V6.3
Note: This release includes an ntirpc pullup; please make sure to update
your submodule.
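The usual sequence for that, assuming a standard git checkout of the
ganesha tree:
git fetch --tags
git checkout V6.3
git submodule update --init --recursive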
Merge Highlights
* some minor log fixups
* some PROXY_V3 fixes
* fridgether.c: After cancel we need to join before freeing thread
resources
* src/include/sal_functions.h: Avoid using export reserved keyword so this
file can be included in C++.
* gsh_types.h: Add missing include
* src/idmapper/idmapper.c: Clear negative cache when idmapping status changes
* src/monitoring: Clang-format cc files.
* src/monitoring/exposer.cc: Added scraping latencies histogram.
* src/test/run_test_mode.sh: Added script to run Ganesha for testing.
* fix some spelling mistakes
* Add limit check in wait_to_start_io(). If exceeded, wakeup reaper & return
BUSY
* FSAL_MEM: Don't destroy global FD for valid handle
* some Coverity fixes
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
Contents:
f503dc406 Frank S. Filz V6.3
a511dd4ca Frank S. Filz ntirpc V6.3 pullup
b37bc668f Shivam Singh Proposed fix for coverity issues: 502177 & 502095
f69068c3c Sachin Punadikar STATE : Fix double unlock (CID 502067)
485610fa1 Sachin Punadikar FSAL_MEM:Don't destroy global FD for valid handle
5d0de2ccf Rojin George Add limit check in wait_to_start_io(). If exceeded,
wakeup reaper & return BUSY
1f528f8e3 izxl007 export_mgr.c: Minor log fixup
70ae04439 izxl007 fix some spelling mistakes
3cc9ece26 Yoni Couriel src/test/run_test_mode.sh: Added script to run
Ganesha for testing.
10260ba2e Yoni Couriel src/monitoring/exposer.cc: Added scraping latencies
histogram.
3d7583de0 Yoni Couriel src/monitoring: Clang-format cc files.
c6a8e9e5b Lior Suliman src/idmapper/idmapper.c: Clear negative cache when
idmapping status changes
3963f930b Ofir Vainshtein gsh_types.h: Add missing include
bac9246ef Yoni Couriel src/include/sal_functions.h: Avoid using export
reserved keyword so this file can be included in C++.
5abe1bc6a Ofir Vainshtein fridgether.c: After cancel we need to join, before
freeing thread resources
ca6ee9266 Lior Suliman Update .gitignore
7e8b05beb Lior Suliman Change NFSv3 proxy FSAL to return delay instead of
invalid val or server fault to the client
8ff262aea Lior Suliman FSAL_PROXY_V3: Fixed a bug that the FSAL Proxy V3
used the original creds and not the resolved creds
e0e7c8a83 Frank S. Filz NFS3: Minor log fixup