Daniel,
Thank you for your help. I have replied off-list with the packet
captures. If there is any other information you need, please let me know.
On 2021-03-26 7:55 a.m., Daniel Gryniewicz wrote:
There are no undocumented APIs in Ganesha. We support only RFC-specified
NFS, nothing else.
I don't know what's happening here; I've never heard of anything like
this before. The best next step would be a packet dump of the creation
process, so we can see what NFS traffic ESXi is actually sending.
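If it helps, something along these lines on the Ganesha node should
capture the relevant traffic while a small test VM is created or cloned
(the interface name is just an example; adjust for your setup):

tcpdump -i eth0 -s 0 -w nfs-create.pcap port 2049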
I don't think Ganesha could be doing this on its own; it just takes NFS
commands and translates them into CephFS commands, so if it works with a
Linux client, it should work here as well.
Daniel
On 3/25/21 6:59 PM, Robert Toole via Support wrote:
> Hi,
>
> I have a 3-node Ceph Octopus 15.2.7 cluster running on fully up-to-date
> CentOS 7 with nfs-ganesha 3.5.
>
> After following the Ceph install guide
> https://docs.ceph.com/en/octopus/cephadm/install/#deploying-nfs-ganesha
> I am able to create an NFS 4.1 datastore in VMware using the IP
> addresses of all three nodes. Everything appears to work OK.
>
> The issue, however, is that for some reason ESXi is creating
> thick-provisioned, eager-zeroed disks instead of thin-provisioned disks
> on this datastore, whether I am migrating, cloning, or creating new VMs.
> Even running vmkfstools -i disk.vmdk -d thin thin_disk.vmdk still
> results in a thick, eager-zeroed VMDK file.
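>
> (A quick way to see whether a VMDK is actually sparse, from any client
> that mounts the same export, is to compare its apparent size with the
> blocks actually allocated; the file name here is just an example:)
>
> ls -lh test_vm-flat.vmdk   # apparent size
> du -h test_vm-flat.vmdk    # space actually allocated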
>
> This should not be possible on an NFS datastore, because VMware
> requires a vendor VAAI NAS plugin before it can thick-provision disks
> over NFS.
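>
> (On a host without a vendor NAS VAAI plugin installed, a check along
> these lines should come back empty; this assumes the standard esxcli
> tooling on the ESXi host:)
>
> esxcli software vib list | grep -i vaai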
>
> Linux clients on the same datastore can create thin qcow2 images, and
> when looking from the Linux hosts at the images created by ESXi, you
> can see that the VMDKs are indeed thick:
>
> ls -lsh
> total 81G
> 512 -rw-r--r--. 1 root root 230 Mar 25 15:17 test_vm-2221e939.hlog
> 40G -rw-------. 1 root root 40G Mar 25 15:17 test_vm-flat.vmdk
> 40G -rw-------. 1 root root 40G Mar 25 15:56 test_vm_thin-flat.vmdk
> 512 -rw-------. 1 root root 501 Mar 25 15:57 test_vm_thin.vmdk
> 512 -rw-------. 1 root root 473 Mar 25 15:17 test_vm.vmdk
> 0 -rw-r--r--. 1 root root 0 Jan 6 1970 test_vm.vmsd
> 2.0K -rwxr-xr-x. 1 root root 2.0K Mar 25 15:17 test_vm.vmx
>
> but the qcow2 files from the Linux hosts are thin, as one would expect:
>
> qemu-img create -f qcow2 big_disk_2.img 500G
>
> ls -lsh
>
> total 401K
> 200K -rw-r--r--. 1 root root 200K Mar 25 15:47 big_disk_2.img
> 200K -rw-r--r--. 1 root root 200K Mar 25 15:44 big_disk.img
> 512 drwxr-xr-x. 2 root root 81G Mar 25 15:57 test_vm
>
> These ls -lsh results are the same from ESXi, from Linux NFS clients,
> and from the CephFS kernel client.
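>
> (qemu-img info also reports virtual size versus the space actually
> allocated for the qcow2 files, e.g.:)
>
> qemu-img info big_disk_2.img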
>
> What is happening here? Are there undocumented VAAI features in
> nfs-ganesha with the CephFS FSAL? If so, how do I turn them off? I
> want thin-provisioned disks.
>
> ceph nfs export ls dev-nfs-cluster --detailed
>
> [
>   {
>     "export_id": 1,
>     "path": "/Development-Datastore",
>     "cluster_id": "dev-nfs-cluster",
>     "pseudo": "/Development-Datastore",
>     "access_type": "RW",
>     "squash": "no_root_squash",
>     "security_label": true,
>     "protocols": [
>       4
>     ],
>     "transports": [
>       "TCP"
>     ],
>     "fsal": {
>       "name": "CEPH",
>       "user_id": "dev-nfs-cluster1",
>       "fs_name": "dev_cephfs_vol",
>       "sec_label_xattr": ""
>     },
>     "clients": []
>   }
> ]
>
> rpm -qa | grep ganesha
>
> nfs-ganesha-ceph-3.5-1.el7.x86_64
> nfs-ganesha-rados-grace-3.5-1.el7.x86_64
> nfs-ganesha-rados-urls-3.5-1.el7.x86_64
> nfs-ganesha-3.5-1.el7.x86_64
> centos-release-nfs-ganesha30-1.0-2.el7.centos.noarch
>
> rpm -qa | grep ceph
>
> python3-cephfs-15.2.7-0.el7.x86_64
> nfs-ganesha-ceph-3.5-1.el7.x86_64
> python3-ceph-argparse-15.2.7-0.el7.x86_64
> python3-ceph-common-15.2.7-0.el7.x86_64
> cephadm-15.2.7-0.el7.x86_64
> libcephfs2-15.2.7-0.el7.x86_64
> ceph-common-15.2.7-0.el7.x86_64
>
> ceph -v
>
> ceph version 15.2.7 (<ceph_uuid>) octopus (stable)
>
> The Ceph cluster is healthy, using BlueStore on raw 3.84 TB SATA
> 7200 RPM disks.
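>
> (As a cross-check, ceph df would show whether the eager-zeroed VMDKs
> are actually consuming raw space in the CephFS data pool:)
>
> ceph df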
>
> --
> Robert Toole
> rtoole(a)tooleweb.ca
> 403 368 5680
>
_______________________________________________
Support mailing list -- support(a)lists.nfs-ganesha.org
To unsubscribe send an email to support-leave(a)lists.nfs-ganesha.org