V6.5 released...
by Frank Filz
Oops, wrong subject.
From: Frank Filz [mailto:ffilzlnx@mindspring.com]
Sent: Wednesday, January 8, 2025 4:31 PM
To: devel(a)lists.nfs-ganesha.org; support(a)lists.nfs-ganesha.org
Subject: [NFS-Ganesha-Devel] Branch next AND V6-stable
Tag: V6.5
Note: V6-stable branch is also created with this push.
Merge Highlights
This merge does provide some small "features" with new config options; however,
the defaults result in no change in behavior. As before, these patches are
accepted as part of work with Google to more closely match their internal repos.
There is a new option to enable/disable sticky grace. It defaults to disabled,
returning Ganesha to its earlier grace period behavior. Without support from
the recovery backend, sticky grace could cause problems, which warrants
disabling it by default.
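As an illustration only (the announcement does not give the actual parameter
name or config block, so both are assumptions here; check the merged patch for
the real names), opting back in to sticky grace might look something like this
in ganesha.conf:

```
# ganesha.conf sketch -- block and option names below are assumptions,
# not confirmed by this announcement. Per the release notes the default
# leaves sticky grace disabled, so omitting this changes nothing.
NFSv4
{
    # Hypothetical option name for the new sticky-grace behavior.
    Sticky_Grace = true;
}
```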
Some new Prometheus metrics are added.
* SAL/sal_metrics.c: Added SAL static metrics.
* nfs4_op_readdir: Always return NFS4ERR_TOOSMALL if we didn't have enough space for a single entry
* FSAL/FSAL_CEPH/handle.c: Moved tracepoint to reduce indentation
* Resolve compilation warnings
* common_utils: Added additional documentation to doc strings
* do not crash when ref count is greater than 1
* Implement log rotation/compress for Ganesha
* Fix the issue where the nfs4_op_write function always returns 0.
* Sticky Grace Configurable option:
* nfs_metrics.c: Fix code style issues
* nfs_main.c: Use dup2 to simplify function calls
* add root_kerberos_principal param
* add read_access_check_policy param
Signed-off-by: Frank S. Filz <ffilzlnx(a)mindspring.com>
Contents:
952fb9337 Frank S. Filz V6.5
649d62698 Roy Babayov add read_access_check_policy param
41b1e7e97 Roy Babayov add root_kerberos_principal param
79cc98e80 bjfhdhhaha nfs_main.c: Use dup2 to simplify function calls
e8a3da149 izxl007 nfs_metrics.c: Fix code style issues
a410572cc VidyaThumukunta Sticky Grace Configurable option:
ae99e2eb2 yinlei Fix the issue where the nfs4_op_write function always returns 0.
b6e5fb110 Prabhu Murugesan Implement log rotation/compress for Ganesha
5c2b80569 Ofir Vainshtein do not crash when ref count is greater than 1
0cff86b0b Lior Suliman common_utils: Added additional documentation to doc strings
d48c2d258 Lior Suliman Resolve compilation warnings
18ddf224e Lior Suliman FSAL/FSAL_CEPH/handle.c: Moved tracepoint to reduce indentation
723dcac5b Shahar Hochma nfs4_op_readdir: Always return NFS4ERR_TOOSMALL if we didn't have enough space for a single entry
86e6b29c3 Yoni Couriel SAL/sal_metrics.c: Added SAL static metrics.
ESXI 6.7 client creating Thick Eager zeroed vmdk files using ceph fsal
by Robert Toole
Hi,
I have a 3-node Ceph Octopus 15.2.7 cluster running on fully up to date
CentOS 7 with nfs-ganesha 3.5.
After following the Ceph install guide
https://docs.ceph.com/en/octopus/cephadm/install/#deploying-nfs-ganesha
I am able to create an NFS 4.1 datastore in VMware using the IP addresses
of all three nodes. Everything appears to work OK.
The issue, however, is that for some reason ESXi is creating thick
provisioned, eager zeroed disks instead of thin provisioned disks on this
datastore, whether I am migrating, cloning, or creating new VMs. Even
running vmkfstools -i disk.vmdk -d thin thin_disk.vmdk still results in
a thick eager zeroed vmdk file.
This should not be possible on an NFS datastore, because VMware requires
a VAAI NAS plugin before it can thick provision disks over NFS.
Linux clients on the same datastore can create thin qcow2 images, and
when looking at the images created by ESXi from the Linux hosts you can
see that the vmdks are indeed thick:
ls -lsh
total 81G
512 -rw-r--r--. 1 root root 230 Mar 25 15:17 test_vm-2221e939.hlog
40G -rw-------. 1 root root 40G Mar 25 15:17 test_vm-flat.vmdk
40G -rw-------. 1 root root 40G Mar 25 15:56 test_vm_thin-flat.vmdk
512 -rw-------. 1 root root 501 Mar 25 15:57 test_vm_thin.vmdk
512 -rw-------. 1 root root 473 Mar 25 15:17 test_vm.vmdk
0 -rw-r--r--. 1 root root 0 Jan 6 1970 test_vm.vmsd
2.0K -rwxr-xr-x. 1 root root 2.0K Mar 25 15:17 test_vm.vmx
but the qcow2 files from the Linux hosts are thin, as one would expect:
qemu-img create -f qcow2 big_disk_2.img 500G
ls -lsh
total 401K
200K -rw-r--r--. 1 root root 200K Mar 25 15:47 big_disk_2.img
200K -rw-r--r--. 1 root root 200K Mar 25 15:44 big_disk.img
512 drwxr-xr-x. 2 root root 81G Mar 25 15:57 test_vm
These ls -lsh results are the same from ESXi, from Linux NFS clients, and
from the CephFS kernel client.
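For what it's worth, apparent size alone (the ls -l column) cannot distinguish
thin from thick; comparing allocated blocks against apparent size can. A
minimal sketch (the file name is just an example from the listing above):

```shell
# Rough sanity check: a thin/sparse file has fewer allocated blocks than
# its apparent size; a thick eager-zeroed file has roughly equal amounts.
f=test_vm-flat.vmdk
apparent=$(stat -c %s "$f")                               # apparent size in bytes
allocated=$(( $(stat -c %b "$f") * $(stat -c %B "$f") ))  # bytes actually allocated
if [ "$allocated" -lt "$apparent" ]; then
    echo "$f looks thin (sparse)"
else
    echo "$f looks thick (fully allocated)"
fi
```

This is the same comparison the first column of ls -lsh (or du -h versus
ls -lh) shows at a glance.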
What is happening here? Are there undocumented VAAI features in
nfs-ganesha with the CephFS FSAL? If so, how do I turn them off? I
want thin provisioned disks.
ceph nfs export ls dev-nfs-cluster --detailed
[
  {
    "export_id": 1,
    "path": "/Development-Datastore",
    "cluster_id": "dev-nfs-cluster",
    "pseudo": "/Development-Datastore",
    "access_type": "RW",
    "squash": "no_root_squash",
    "security_label": true,
    "protocols": [
      4
    ],
    "transports": [
      "TCP"
    ],
    "fsal": {
      "name": "CEPH",
      "user_id": "dev-nfs-cluster1",
      "fs_name": "dev_cephfs_vol",
      "sec_label_xattr": ""
    },
    "clients": []
  }
]
rpm -qa | grep ganesha
nfs-ganesha-ceph-3.5-1.el7.x86_64
nfs-ganesha-rados-grace-3.5-1.el7.x86_64
nfs-ganesha-rados-urls-3.5-1.el7.x86_64
nfs-ganesha-3.5-1.el7.x86_64
centos-release-nfs-ganesha30-1.0-2.el7.centos.noarch
rpm -qa | grep ceph
python3-cephfs-15.2.7-0.el7.x86_64
nfs-ganesha-ceph-3.5-1.el7.x86_64
python3-ceph-argparse-15.2.7-0.el7.x86_64
python3-ceph-common-15.2.7-0.el7.x86_64
cephadm-15.2.7-0.el7.x86_64
libcephfs2-15.2.7-0.el7.x86_64
ceph-common-15.2.7-0.el7.x86_64
ceph -v
ceph version 15.2.7 (<ceph_uuid>) octopus (stable)
The Ceph cluster is healthy, using BlueStore on raw 3.84 TB SATA 7200 rpm
disks.
--
Robert Toole
rtoole(a)tooleweb.ca
403 368 5680