Dynamic provisioning using FSx for OpenZFS
With the Amazon FSx for OpenZFS file system StorageClass defined, we can now dynamically provision the file system. Once the file system has deployed successfully, we'll use Kustomize to update the FSx for OpenZFS volume StorageClass and then dynamically provision a PersistentVolume and mount it.
First, let's examine the fsxz-fs-pvc.yaml file, which defines a PersistentVolumeClaim that creates the 128GiB Amazon FSx for OpenZFS file system from the fsxz-fs-sc StorageClass we created earlier:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fsxz-fs-pvc
  namespace: assets
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: fsxz-fs-sc
  resources:
    requests:
      storage: 128Gi
Run the following to create the file system PVC and deploy the Amazon FSx for OpenZFS file system based on the StorageClass:
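A representative invocation is shown below; the kustomization path is illustrative and will depend on where the workshop manifests live in your environment:

```bash
kubectl apply -k ~/environment/eks-workshop/modules/fundamentals/storage/fsxz/pvc
```

You should see output similar to the following: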
namespace/assets unchanged
serviceaccount/assets unchanged
configmap/assets unchanged
service/assets unchanged
persistentvolumeclaim/fsxz-fs-pvc created
deployment.apps/assets configured
Run the following to view the progress of the file system PVC deployment and creation of the FSx for OpenZFS file system. This will typically take 10-15 minutes; when complete, the deployment will show as successfully rolled out:
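For example, using the deployment and namespace names from the manifests above:

```bash
kubectl rollout status deployment/assets -n assets --timeout=900s
```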
Waiting for deployment "assets" rollout to finish: 1 out of 2 new replicas have been updated...
Waiting for deployment "assets" rollout to finish: 1 out of 2 new replicas have been updated...
Waiting for deployment "assets" rollout to finish: 1 out of 2 new replicas have been updated...
Waiting for deployment "assets" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "assets" rollout to finish: 1 old replicas are pending termination...
deployment "assets" successfully rolled out
When the FSx for OpenZFS file system was created, a root volume for the file system was created as well. It is best practice not to store data in the root volume, but instead create separate child volumes of the root and store data in them. Now that the root volume has been created, you can obtain its volume ID and create a child volume below it within the file system.
Run the following to obtain the root volume ID and set it to an environment variable we'll inject into the volume StorageClass using Kustomize:
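A minimal sketch using the AWS CLI is shown below; FSXZ_FS_ID is an assumed variable holding the ID of the FSx for OpenZFS file system created above:

```bash
# FSXZ_FS_ID is assumed to already contain the file system ID (fs-...)
export ROOT_VOL_ID=$(aws fsx describe-file-systems \
  --file-system-ids $FSXZ_FS_ID \
  --query 'FileSystems[0].OpenZFSConfiguration.RootVolumeId' \
  --output text)
echo $ROOT_VOL_ID
```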
Using Kustomize, we'll create the volume StorageClass and inject the ROOT_VOL_ID, VPC_CIDR, and EKS_CLUSTER_NAME environment variables into the ParentVolumeId parameter, the NfsExports parameter, and the Name tag, respectively:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fsxz-vol-sc
provisioner: fsx.openzfs.csi.aws.com
parameters:
  ResourceType: "volume"
  ParentVolumeId: '"$ROOT_VOL_ID"'
  CopyTagsToSnapshots: 'false'
  DataCompressionType: '"LZ4"'
  NfsExports: '[{"ClientConfigurations": [{"Clients": "$VPC_CIDR", "Options": ["rw","crossmnt","no_root_squash"]}]}]'
  ReadOnly: 'false'
  RecordSizeKiB: '128'
  Tags: '[{"Key": "Name", "Value": "$EKS_CLUSTER_NAME-data"}]'
  OptionsOnDeletion: '["DELETE_CHILD_VOLUMES_AND_SNAPSHOTS"]'
reclaimPolicy: Delete
allowVolumeExpansion: false
mountOptions:
  - nfsvers=4.1
  - rsize=1048576
  - wsize=1048576
  - timeo=600
  - nconnect=16
Apply the kustomization:
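Because the StorageClass parameters contain shell variables, one way to substitute and apply them (a sketch, assuming envsubst is available and an illustrative kustomization path) is:

```bash
kubectl kustomize ~/environment/eks-workshop/modules/fundamentals/storage/fsxz/storageclass \
  | envsubst | kubectl apply -f -
```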
Let's examine the volume StorageClass. Note that it uses the FSx for OpenZFS CSI driver as the provisioner and has been updated with the root volume ID and VPC CIDR we exported earlier:
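The details below come from describing the StorageClass:

```bash
kubectl describe storageclass fsxz-vol-sc
```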
Name: fsxz-vol-sc
IsDefaultClass: No
Annotations: kubectl.kubernetes.io/last-applied-configuration={"allowVolumeExpansion":false,"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"fsxz-vol-sc"},"mountOptions":["nfsvers=4.1","rsize=1048576","wsize=1048576","timeo=600","nconnect=16"],"parameters":{"CopyTagsToSnapshots":"false","DataCompressionType":"\"LZ4\"","NfsExports":"[{\"ClientConfigurations\": [{\"Clients\": \"10.42.0.0/16\", \"Options\": [\"rw\",\"crossmnt\",\"no_root_squash\"]}]}]","OptionsOnDeletion":"[\"DELETE_CHILD_VOLUMES_AND_SNAPSHOTS\"]","ParentVolumeId":"\"fsvol-0efa720c2c77956a4\"","ReadOnly":"false","RecordSizeKiB":"128","ResourceType":"volume","Tags":"[{\"Key\": \"Name\", \"Value\": \"eks-workshop-data\"}]"},"provisioner":"fsx.openzfs.csi.aws.com","reclaimPolicy":"Delete"}
Provisioner: fsx.openzfs.csi.aws.com
Parameters: CopyTagsToSnapshots=false,DataCompressionType="LZ4",NfsExports=[{"ClientConfigurations": [{"Clients": "10.42.0.0/16", "Options": ["rw","crossmnt","no_root_squash"]}]}],OptionsOnDeletion=["DELETE_CHILD_VOLUMES_AND_SNAPSHOTS"],ParentVolumeId="fsvol-0efa720c2c77956a4",ReadOnly=false,RecordSizeKiB=128,ResourceType=volume,Tags=[{"Key": "Name", "Value": "eks-workshop-data"}]
AllowVolumeExpansion: False
MountOptions:
nfsvers=4.1
rsize=1048576
wsize=1048576
timeo=600
nconnect=16
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
Run the following to create the volume PVC and deploy the volume based on the StorageClass:
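Again, the kustomization path below is illustrative:

```bash
kubectl apply -k ~/environment/eks-workshop/modules/fundamentals/storage/fsxz/deployment
```

You should see output similar to the following: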
namespace/assets unchanged
serviceaccount/assets unchanged
configmap/assets unchanged
service/assets unchanged
persistentvolumeclaim/fsxz-vol-pvc created
deployment.apps/assets configured
Run the following to view the progress of the volume PVC deployment and creation of the volume on the FSx for OpenZFS file system. This will typically take less than 5 minutes; when complete, the deployment will show as successfully rolled out:
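As before:

```bash
kubectl rollout status deployment/assets -n assets --timeout=300s
```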
Waiting for deployment "assets" rollout to finish: 1 out of 2 new replicas have been updated...
Waiting for deployment "assets" rollout to finish: 1 out of 2 new replicas have been updated...
Waiting for deployment "assets" rollout to finish: 1 out of 2 new replicas have been updated...
Waiting for deployment "assets" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "assets" rollout to finish: 1 old replicas are pending termination...
deployment "assets" successfully rolled out
Let's examine the volumeMounts in the deployment. Notice that our new volume named fsxz-vol is mounted at /usr/share/nginx/html/assets:
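One way to view them (assuming the yq utility is installed):

```bash
kubectl get deployment -n assets assets -o yaml \
  | yq '.spec.template.spec.containers[].volumeMounts'
```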
- mountPath: /usr/share/nginx/html/assets
name: fsxz-vol
- mountPath: /tmp
name: tmp-volume
A PersistentVolume (PV) has been automatically created to fulfill our PersistentVolumeClaim (PVC):
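List the PVs to confirm:

```bash
kubectl get pv
```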
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-904e8698-c9dd-426d-9d4e-a2bf35e1c46d   128Gi      RWX            Delete           Bound    assets/fsxz-fs-pvc    fsxz-fs-sc     <unset>                          5m29s
pvc-de67d22d-040d-4898-b0ce-0b3139a227c1   1Gi        RWX            Delete           Bound    assets/fsxz-vol-pvc   fsxz-vol-sc    <unset>                          31s
Let's examine the details of our PersistentVolumeClaim (PVC):
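Describing the claims in the assets namespace shows both the file system claim and the volume claim:

```bash
kubectl describe pvc -n assets
```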
Name: fsxz-fs-pvc
Namespace: assets
StorageClass: fsxz-fs-sc
Status: Bound
Volume: pvc-904e8698-c9dd-426d-9d4e-a2bf35e1c46d
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: fsx.openzfs.csi.aws.com
volume.kubernetes.io/storage-provisioner: fsx.openzfs.csi.aws.com
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 128Gi
Access Modes: RWX
VolumeMode: Filesystem
Used By: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Provisioning 7m30s (x5 over 17m) fsx.openzfs.csi.aws.com_fsx-openzfs-csi-controller-6b9cdcddf6-kwx7p_35a063fc-5d91-4ba1-9bce-4d71de597b14 External provisioner is provisioning volume for claim "assets/fsxz-fs-pvc"
Normal ExternalProvisioning 7m24s (x42 over 17m) persistentvolume-controller Waiting for a volume to be created either by the external provisioner 'fsx.openzfs.csi.aws.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
Normal ProvisioningSucceeded 5m59s fsx.openzfs.csi.aws.com_fsx-openzfs-csi-controller-6b9cdcddf6-kwx7p_35a063fc-5d91-4ba1-9bce-4d71de597b14 Successfully provisioned volume pvc-904e8698-c9dd-426d-9d4e-a2bf35e1c46d
Name: fsxz-vol-pvc
Namespace: assets
StorageClass: fsxz-vol-sc
Status: Bound
Volume: pvc-de67d22d-040d-4898-b0ce-0b3139a227c1
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: fsx.openzfs.csi.aws.com
volume.kubernetes.io/storage-provisioner: fsx.openzfs.csi.aws.com
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 1Gi
Access Modes: RWX
VolumeMode: Filesystem
Used By: assets-8bf5b5bfd-2gcc6
assets-8bf5b5bfd-lw9qp
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Provisioning 2m13s fsx.openzfs.csi.aws.com_fsx-openzfs-csi-controller-6b9cdcddf6-kwx7p_35a063fc-5d91-4ba1-9bce-4d71de597b14 External provisioner is provisioning volume for claim "assets/fsxz-vol-pvc"
Normal ExternalProvisioning 69s (x7 over 2m13s) persistentvolume-controller Waiting for a volume to be created either by the external provisioner 'fsx.openzfs.csi.aws.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
Normal ProvisioningSucceeded 57s fsx.openzfs.csi.aws.com_fsx-openzfs-csi-controller-6b9cdcddf6-kwx7p_35a063fc-5d91-4ba1-9bce-4d71de597b14 Successfully provisioned volume pvc-de67d22d-040d-4898-b0ce-0b3139a227c1
To demonstrate the shared storage functionality, let's create a new file new_gmt_watch.png in the assets directory of the first Pod:
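A sketch of the commands (the Pod name is looked up dynamically; the jsonpath index assumes the two replicas of the assets deployment are the only Pods in the namespace):

```bash
POD_1=$(kubectl get pods -n assets -o jsonpath='{.items[0].metadata.name}')
kubectl exec -n assets $POD_1 -- touch /usr/share/nginx/html/assets/new_gmt_watch.png
kubectl exec -n assets $POD_1 -- ls /usr/share/nginx/html/assets
```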
chrono_classic.jpg
gentleman.jpg
new_gmt_watch.png <-----------
pocket_watch.jpg
smart_1.jpg
smart_2.jpg
wood_watch.jpg
Now verify that this file exists in the second Pod:
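Using the same lookup approach for the second Pod:

```bash
POD_2=$(kubectl get pods -n assets -o jsonpath='{.items[1].metadata.name}')
kubectl exec -n assets $POD_2 -- ls /usr/share/nginx/html/assets
```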
chrono_classic.jpg
gentleman.jpg
new_gmt_watch.png <-----------
pocket_watch.jpg
smart_1.jpg
smart_2.jpg
test.txt
wood_watch.jpg
As you can see, even though we created the file through the first Pod, the second Pod has immediate access to it because they're both using the same Amazon FSx for OpenZFS file system.