Use external gluster volume as PV in kadalu

You can still use kadalu even when your storage nodes are outside the kubernetes (k8s) cluster. We describe this kind of setup as 'External', meaning the storage server processes are not managed by the kadalu operator.

In both of the approaches described below, please note that metadata.name (say, e-vol) in kind: KadaluStorage will create a StorageClass named kadalu.<volname> (with volname replaced by e-vol, i.e. kadalu.e-vol).

Cleanup for re-using external gluster volume

When you delete a previously used volume (pool) on external gluster from Kadalu, some directories and files are left behind, because Kadalu does not run any commands on the external gluster except quota commands.

Please delete the following files/directories from the root of the gluster bricks after deleting the pool in Kadalu, if they exist:

info/ stat.db virtblock/ subvol/
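
For example, a minimal cleanup sketch to run on each storage node (the brick path /bricks/brick1 here is hypothetical; adjust it to your brick layout):

# Run on every storage node, once per brick of the deleted pool
cd /bricks/brick1    # hypothetical brick root; use your actual brick path
rm -rf info/ stat.db virtblock/ subvol/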

Using external gluster in kadalu native way

In this mode, the gluster volume is expected to be already created and in the 'Started' state. The kadalu storage config takes one node IP/hostname and the gluster volume name, and uses that volume as the storage for PVs. PVs are carved out as subdirectories, similar to how a PV is created in the kadalu native way.

For resources.requests.storage requests to be honored, please refer to the server documentation.

The best example of this is what we use in our CI. Check out this sample yaml file:

# file: external-config.yaml
---
apiVersion: kadalu-operator.storage/v1alpha1
kind: KadaluStorage
metadata:
  name: ext-config
spec:
  type: External
  # Omitting 'single_pv_per_pool' or using 'false' as a value will create
  # 1 PV : 1 Subdir in external gluster volume
  single_pv_per_pool: false
  details:
    # gluster_hosts: [ gluster1.kadalu.io, gluster2.kadalu.io ]
    gluster_host: gluster1.kadalu.io
    gluster_volname: kadalu
    gluster_options: log-level=DEBUG

# file: pvc-from-external-gluster.yaml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pv-ext-kadalu
spec:
  # Add 'kadalu.' to name from KadaluStorage kind
  storageClassName: kadalu.ext-config
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      # This needs to be set using 'kadalu-quotad'
      storage: 500Mi

Please edit the 'gluster_host' and 'gluster_volname' parameters in the config section.
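
Then apply both files to create the storage config and the claim:

kubectl apply -f external-config.yaml
kubectl apply -f pvc-from-external-gluster.yaml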

You can also set up the external storage through our kubectl kadalu CLI:

$ kubectl kadalu storage-add store-name --external gluster.kadalu.io:/kadalu

After this, just use kadalu.store-name (note that 'store-name' is the name used in storage-add) in the PVC yaml, like below:

...
spec:
  storageClassName: kadalu.store-name
  ...
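
You can verify that the StorageClass was created before claiming a PV:

kubectl get storageclass kadalu.store-name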

Note that if you have an existing gluster setup inside the k8s cluster, you can treat that as 'External' too and use kadalu to manage it. In this case, you will need to provide the pre-created gluster volume name in the config.

Using GlusterFS directory quota to set capacity limitation for external gluster volumes

The kadalu native way does not support distributed volumes on external gluster. You can execute the following commands before installation to enable kadalu to enforce PV capacity limits using the GlusterFS directory quota.

Please make sure you have enabled quota on the external gluster volume before deploying kadalu:

gluster volume quota <volname> enable
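
To confirm quota is enabled (and later to inspect the limits kadalu sets), you can list them on the gluster node:

gluster volume quota <volname> list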

Create the namespace in advance, as it is needed for the Secret:

kubectl create namespace kadalu

Specify the username and SSH private key of the external Gluster node in a Secret for the quota setting:

kubectl create secret generic glusterquota-ssh-secret --from-literal=glusterquota-ssh-username=<username> --from-file=ssh-privatekey=<ssh_privatekey_path> -n kadalu
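
A quick check that the Secret landed in the kadalu namespace:

kubectl get secret glusterquota-ssh-secret -n kadalu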

Then install as usual:

 kubectl kadalu install

kadalu sets a GlusterFS directory quota on the subdirectories assigned to PVs on the external gluster. To do so, kadalu connects to the server via SSH, using the username and private key stored in the k8s Secret.
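
As an illustration, the effect is equivalent to running a directory quota command like the one below on the gluster node (the subdirectory path here is hypothetical; kadalu derives it from the PV):

gluster volume quota <volname> limit-usage /subvol/<pv-directory> 500MB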

With this configuration, you do not need to install kadalu-quotad on the external gluster server.

Using external storage directly

In this mode, one Gluster volume maps to one PV, very similar to how the heketi project allocates PVs. We neither encourage this mode nor recommend it for a new setup. It is provided only as an option for those who already consume gluster volumes this way, so that PVs used in other deployments can still be consumed.

You will need something similar to the below config to use a whole gluster volume as a single PV:

# file: external-gluster-as-pv.yaml
---
apiVersion: kadalu-operator.storage/v1alpha1
kind: KadaluStorage
metadata:
  name: pv-ext # Keep changing the name for different volumes
spec:
  type: External
  # Setting 'single_pv_per_pool' as 'true' is important
  single_pv_per_pool: true            # Will map 1 Gluster Volume : 1 PV
  details:
    gluster_host: gluster1.kadalu.io   # Change to your gluster host
    gluster_volname: test              # Change to existing gluster volume
    gluster_options: log-level=DEBUG   # Comma separated options

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pv-ext
spec:
  storageClassName: kadalu.pv-ext
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi # This gets ignored in this case.

Please edit the gluster_host and gluster_volname parameters in the storage config.

Then run kubectl apply -f ./external-gluster-as-pv.yaml
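
Once applied, verify that the claim is bound to the external volume:

kubectl get pvc pv-ext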

Note that in this mode there will be as many StorageClass objects as PVs. That is unavoidable at present, because we cannot pass user-driven input through the PV claim yaml.
