Introduction
NetBackup 10 introduces important changes to Kubernetes protection, simplifying configuration and improving protection by using native tools to deploy, communicate with, and manage a Kubernetes cluster.
From the web console we can perform the following tasks:
- Configure permissions
- Automatically discover namespaces
- Run backups efficiently
- Configure limits to optimize the network and the infrastructure
- Restore namespaces and persistent volumes
Backups take snapshots of the persistent volumes (using the CSI plugin) and can then duplicate them to a storage unit. The components of a Kubernetes backup are the following; as we will see, a Kubernetes Master is deployed in its own namespace and coordinates all backup and restore processes:
Configuration
The required infrastructure is deployed using helm, so the first step is to install it:
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod +x get_helm.sh
./get_helm.sh

root@ks805-control-plane-node-pool-xm8z7:~/netbackup# ./get_helm.sh
Downloading https://get.helm.sh/helm-v3.11.2-linux-amd64.tar.gz
Verifying checksum... Done.
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm
root@ks805-control-plane-node-pool-xm8z7:~/netbackup#
Now we create a namespace for the deployment:
# kubectl create namespace netbackup
namespace/netbackup created
Next, we need to unpack the NetBackup Operator installation package (netbackupkops-10.1.1.tar.gz), which contains the directory required for the helm deployment and a .tar file with the image:
veritas_license.txt
netbackupkops.tar
netbackupkops-helm-chart/
netbackupkops-helm-chart/Chart.yaml
netbackupkops-helm-chart/values.yaml
netbackupkops-helm-chart/.helmignore
netbackupkops-helm-chart/templates/
netbackupkops-helm-chart/templates/deployment.yaml
netbackupkops-helm-chart/charts/
We have to place the image in an accessible registry:
# docker load -i netbackupkops.tar
63c0270243d0: Loading layer [==================================================>]  216.2MB/216.2MB
cbd5a225fe2d: Loading layer [==================================================>]  20.48kB/20.48kB
51683e01254f: Loading layer [==================================================>]  20.48kB/20.48kB
2a589d7d3541: Loading layer [==================================================>]  64.81MB/64.81MB
81b06e1dadb8: Loading layer [==================================================>]  2.207MB/2.207MB
7d35628916f4: Loading layer [==================================================>]  41.33MB/41.33MB
d6d39ea9857c: Loading layer [==================================================>]  8.704kB/8.704kB
9da8afe5f5a8: Loading layer [==================================================>]  46.62MB/46.62MB
61fdc46fee0b: Loading layer [==================================================>]  46.59MB/46.59MB
ab4a177d4db7: Loading layer [==================================================>]  2.048kB/2.048kB
777082da8168: Loading layer [==================================================>]  28.16kB/28.16kB
Loaded image: nbk8splugin.nbartifactory.rsv.ven.veritas.com/10.1.1/nbk8splugin:netbackupkops_10.1.1_0024
We tag it and push it to the registry:
# docker tag nbk8splugin.nbartifactory.rsv.ven.veritas.com/10.1.1/nbk8splugin:netbackupkops_10.1.1_0024 repositorio/nbk8splugin:netbackupkops_10.1.1_0024
# docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: **********
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
# docker push repositorio/nbk8splugin:netbackupkops_10.1.1_0024
The push refers to repository [docker.io/repositorio/nbk8splugin]
777082da8168: Pushed
ab4a177d4db7: Pushed
61fdc46fee0b: Pushed
9da8afe5f5a8: Pushed
d6d39ea9857c: Pushed
7d35628916f4: Pushed
81b06e1dadb8: Pushed
2a589d7d3541: Pushed
51683e01254f: Pushed
cbd5a225fe2d: Pushed
63c0270243d0: Pushed
netbackupkops_10.1.1_0024: digest: sha256:8088861af36c5c99160dc828e2c3df1c459460563af76d21346bb707df862f80 size: 2627
Before deploying with helm, we have to edit the values.yaml file so it points to the correct image:
Change the following line:
image: nbk8splugin.nbartifactory.rsv.ven.veritas.com/10.1.1/nbk8splugin:netbackupkops_10.1.1_0024
to this one:
image: repositorio/nbk8splugin:netbackupkops_10.1.1_0024
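If you prefer to script that edit, a sed one-liner can do the substitution. The sketch below is only illustrative: it recreates just the relevant line of values.yaml so it can run standalone; against the real chart you would keep only the cp and sed commands.

```shell
# Illustration only: recreate the relevant line of values.yaml.
mkdir -p netbackupkops-helm-chart
cat > netbackupkops-helm-chart/values.yaml <<'EOF'
image: nbk8splugin.nbartifactory.rsv.ven.veritas.com/10.1.1/nbk8splugin:netbackupkops_10.1.1_0024
EOF
# Keep a backup of the original file, then rewrite the registry part
# of the image line, preserving the tag:
cp netbackupkops-helm-chart/values.yaml netbackupkops-helm-chart/values.yaml.bak
sed -i 's|image: .*nbk8splugin:|image: repositorio/nbk8splugin:|' netbackupkops-helm-chart/values.yaml
cat netbackupkops-helm-chart/values.yaml
# → image: repositorio/nbk8splugin:netbackupkops_10.1.1_0024
```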
Finally, we run the deployment with the following helm command:
# helm install veritas-netbackupkops /root/netbackup/netbackupkops-helm-chart -n netbackup
NAME: veritas-netbackupkops
LAST DEPLOYED: Wed Mar 22 16:26:42 2023
NAMESPACE: netbackup
STATUS: deployed
REVISION: 1
TEST SUITE: None
If everything went well, we will have a new pod running the operator:
NAME                                           READY   STATUS    RESTARTS   AGE
netbackup-controller-manager-86f547fdb-7k7bs   2/2     Running   0          2m32s
Now we have to label the storage so NetBackup can use it; the corresponding labels must be added to the StorageClass and the VolumeSnapshotClass:
# kubectl label sc freenas-nfs-csi netbackup.veritas.com/default-csi-filesystem-storage-class=true
storageclass.storage.k8s.io/freenas-nfs-csi labeled
# kubectl get sc --show-labels
NAME                                PROVISIONER                                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE   LABELS
default-storage-class-1 (default)   named-disk.csi.cloud-director.vmware.com   Delete          Immediate           false                  14d   <none>
freenas-nfs-csi                     org.democratic-csi.nfs                     Delete          Immediate           true                   41m   app.kubernetes.io/instance=zfs-nfs,app.kubernetes.io/managed-by=Helm,app.kubernetes.io/name=democratic-csi,helm.sh/chart=democratic-csi-0.13.5,netbackup.veritas.com/default-csi-filesystem-storage-class=true
# kubectl label volumesnapshotclass freenas-nfs-csi netbackup.veritas.com/default-csi-volume-snapshot-class=true
volumesnapshotclass.snapshot.storage.k8s.io/freenas-nfs-csi labeled
# kubectl get volumesnapshotclass --show-labels
NAME              DRIVER                   DELETIONPOLICY   AGE     LABELS
freenas-nfs-csi   org.democratic-csi.nfs   Delete           3d17h   app.kubernetes.io/instance=zfs-nfs,app.kubernetes.io/managed-by=Helm,app.kubernetes.io/name=democratic-csi,helm.sh/chart=democratic-csi-0.13.5,netbackup.veritas.com/default-csi-volume-snapshot-class=true
Next, we also have to tag and make available the datamover image, which is what backs up the snapshot:
# docker load -i veritasnetbackup-datamover-10.1.1-0116.tar
911b1a4e3fc8: Loading layer [==================================================>]  217.3MB/217.3MB
ebf2989f1b67: Loading layer [==================================================>]  32.26kB/32.26kB
57857c24168d: Loading layer [==================================================>]  26.11kB/26.11kB
aa49ebef93e0: Loading layer [==================================================>]  876.5kB/876.5kB
728ebba83b05: Loading layer [==================================================>]  422.1MB/422.1MB
8667ab10ecbb: Loading layer [==================================================>]  2.048kB/2.048kB
4b0a9fc041e8: Loading layer [==================================================>]  28.16kB/28.16kB
Loaded image: veritasnetbackup/datamover:10.1.1
# docker tag veritasnetbackup/datamover:10.1.1 repositorio/datamover:10.1.1
# docker push repositorio/datamover:10.1.1
The push refers to repository [docker.io/repositorio/datamover]
4b0a9fc041e8: Pushed
8667ab10ecbb: Pushed
728ebba83b05: Pushed
aa49ebef93e0: Pushed
57857c24168d: Pushed
ebf2989f1b67: Pushed
911b1a4e3fc8: Pushed
10.1.1: digest: sha256:183fa32af8e64885f994487a6d1b4ea08f20283e890fb8624f30b5b01c72d0f4 size: 1784
We then create the configuration files for our infrastructure. The first is configmap.yaml, where we specify the IP address and name of our primary server, as well as the repository from which the datamover image is pulled:
apiVersion: v1
data:
  datamover.hostaliases: |
    172.21.1.11=k8nbu01.k8dom.local
  datamover.properties: |
    image=repositorio/datamover:10.1.1
  version: "1"
kind: ConfigMap
metadata:
  name: k8nbu01.k8dom.local
  namespace: netbackup
We also need secret.yaml with the data used to generate the certificates; fill in a token previously created on the primary server and the SHA-256 fingerprint of the CA (it can be obtained with nbcertcmd -listcacertdetails):
apiVersion: v1
kind: Secret
metadata:
  name: datamover-secret-k8nbu01.k8dom.local
  namespace: netbackup
type: Opaque
stringData:
  token: CREAR_TOKEN_EN_PRIMARY
  fingerprint: FINGERPRINT
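As a hypothetical cross-check, a standard SHA-256 certificate fingerprint can also be computed with openssl from a copy of the CA certificate in PEM format. The sketch below generates a throwaway self-signed certificate only so the command can run standalone; with the real CA you would run just the second command against its .pem file (file names here are made up for illustration):

```shell
# Illustration only: create a throwaway self-signed certificate to have
# something to fingerprint (replace with your exported CA certificate).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.pem -days 1 2>/dev/null
# Print its SHA-256 fingerprint in colon-separated hex:
openssl x509 -in /tmp/demo-ca.pem -noout -fingerprint -sha256
```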
And backupservercert.yaml with the details of our Kubernetes cluster:
apiVersion: netbackup.veritas.com/v1
kind: BackupServerCert
metadata:
  name: backupservercert-k8nbu01.k8dom.local
  namespace: netbackup
spec:
  clusterName: 172.21.1.12:6443
  backupServer: k8nbu01.k8dom.local
  certificateOperation: Create
  certificateType: NBCA
  nbcaAttributes:
    nbcaCreateOptions:
      secretName: datamover-secret-k8nbu01.k8dom.local
We apply them with kubectl create -f <file>.yaml.
With this, the steps required on our Kubernetes cluster are done. The next step is to register the Kubernetes cluster in NetBackup. In the web console, select Workloads -> Kubernetes -> Kubernetes clusters on the left, click "+Add" and fill in the details of our cluster:
After clicking Next, we can choose existing credentials or add new ones:
Then we have to fill in the credentials of our cluster, which we obtain with the following command:
# kubectl get secrets -n netbackup
NAME                                          TYPE                                  DATA   AGE
datamover-secret-k8nbu01.k8dom.local          Opaque                                2      114s
default-token-8mc2c                           kubernetes.io/service-account-token   3      7m6s
netbackup-backup-server-secret                kubernetes.io/service-account-token   3      3m57s
netbackup-backup-server-token-l5gxn           kubernetes.io/service-account-token   3      3m57s
netbackup-operator-secret                     kubernetes.io/service-account-token   3      3m57s
netbackup-operator-token-wch9d                kubernetes.io/service-account-token   3      3m57s
sh.helm.release.v1.veritas-netbackupkops.v1   helm.sh/release.v1                    1      3m57s
# kubectl get secret -n netbackup netbackup-backup-server-token-l5gxn -o yaml
apiVersion: v1
data:
  ca.crt: **** CA CERTIFICATE ****
  namespace: bmV0YmFja3Vw
  token: ***** TOKEN ****
kind: Secret
metadata:
  annotations:
    kubernetes.io/service-account.name: netbackup-backup-server
    kubernetes.io/service-account.uid: *****
  creationTimestamp: "2023-03-24T14:52:18Z"
  name: netbackup-backup-server-token-xn6kg
  namespace: netbackup
  resourceVersion: "6859953"
  uid: *****
type: kubernetes.io/service-account-token
We fill in the CA certificate and the token in the credentials form:
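Note that kubectl shows Secret data base64-encoded, so both fields must be decoded before pasting them into the console. A jsonpath query piped through base64 -d does the job; since the cluster is not available here, the runnable last line decodes the namespace value from the output above as a standalone illustration:

```shell
# With a live cluster, the token would be extracted and decoded like this
# (secret name taken from the listing above):
#   kubectl get secret -n netbackup netbackup-backup-server-token-l5gxn \
#     -o jsonpath='{.data.token}' | base64 -d
# Standalone illustration using the encoded namespace field shown above:
echo 'bmV0YmFja3Vw' | base64 -d
# → netbackup
```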
Once the cluster is registered, its namespaces are discovered and we can back them up by creating a protection plan or by launching a manual backup.
And the job will show a summary of what has been backed up:
Mar 28, 2023 3:38:13 PM - Info nbjm (pid=8432) starting backup job (jobid=126) for client d6ab6465-1437-4a0a-a41b-21489d439627, policy BACKUPNOW+f918c090-620a-4f98-ba5b-a7e1ba9aa917, schedule FULL_0cd08478-e115-4682-a109-5ac1256ecf68
Mar 28, 2023 3:38:13 PM - Info nbjm (pid=8432) requesting NO_STORAGE_UNIT resources from RB for backup job (jobid=126, request id:{4A8EC7DE-90A3-4C81-88A4-97A21B0B42AB})
Mar 28, 2023 3:38:13 PM - requesting resource k8nbu01.k8dom.local.NBU_CLIENT.MAXJOBS.d6ab6465-1437-4a0a-a41b-21489d439627
Mar 28, 2023 3:38:13 PM - requesting resource k8nbu01.k8dom.local.Kubernetes.Backup Jobs per Kubernetes Cluster.172.21.1.12
Mar 28, 2023 3:38:13 PM - granted resource k8nbu01.k8dom.local.NBU_CLIENT.MAXJOBS.d6ab6465-1437-4a0a-a41b-21489d439627
Mar 28, 2023 3:38:13 PM - granted resource k8nbu01.k8dom.local.Kubernetes.Backup Jobs per Kubernetes Cluster.172.21.1.12
Mar 28, 2023 3:38:13 PM - estimated 0 kbytes needed
Mar 28, 2023 3:38:13 PM - begin Child Job
Mar 28, 2023 3:38:13 PM - begin SLP Managed Snapshot: Stream Discovery
Operation Status: 0
Mar 28, 2023 3:38:13 PM - end SLP Managed Snapshot: Stream Discovery; elapsed time 0:00:00
Mar 28, 2023 3:38:13 PM - begin SLP Managed Snapshot: Read File List
Operation Status: 0
Mar 28, 2023 3:38:13 PM - end SLP Managed Snapshot: Read File List; elapsed time 0:00:00
Mar 28, 2023 3:38:13 PM - begin SLP Managed Snapshot: Create Snapshot
Mar 28, 2023 3:38:15 PM - Info nbcs (pid=284) Backup ID: d6ab6465-1437-4a0a-a41b-21489d439627_1680010693
Mar 28, 2023 3:38:15 PM - Info nbcs (pid=284) Created catalog entry
Mar 28, 2023 3:38:17 PM - Info nbcs (pid=284) Initiated Kubernetes backup
Mar 28, 2023 3:38:17 PM - Info nbcs (pid=284) Started monitoring the progress of the submitted job...
Mar 28, 2023 3:38:18 PM - Info nbcs (pid=284) Job status: InProgress, total items: 0, items backed-up: 0, Errors encountered: 0, Warnings encountered: 0
Mar 28, 2023 3:38:49 PM - Info nbcs (pid=284) Job status: InProgress, total items: 0, items backed-up: 0, Errors encountered: 0, Warnings encountered: 0
Mar 28, 2023 3:39:20 PM - Info nbcs (pid=284) Job status: Completed, total items: 43, items backed-up: 43, Persistent Volume snapshots attempted: 1, Persistent Volume snapshots completed: 1, Errors encountered: 0, Warnings encountered: 0
Mar 28, 2023 3:39:50 PM - Info nbcs (pid=284) Final job status: success
Mar 28, 2023 3:39:52 PM - Info nbcs (pid=284) Backup ID: d6ab6465-1437-4a0a-a41b-21489d439627_1680010693
Operation Status: 0
Mar 28, 2023 3:39:55 PM - end SLP Managed Snapshot: Create Snapshot; elapsed time 0:01:42
the requested operation was successfully completed
To perform a restore, we can choose the cluster and namespace to restore to:
And select the objects we want to restore:
And we can also choose the volumes:
References
Veritas NetBackup for Kubernetes Data Protection
Hi Enrique Pereira Calvo
I installed netbackupkops successfully on OpenShift v4.13 and created a protection plan to back up a PVC (keeping the snapshot for 2 hours after the run). Everything works normally, but when the snapshot expires it is only deleted on OpenShift, not on vSphere (I use the vSphere CSI driver). I don't know whether you are facing the same problem or not. Please check it.
Hi,
I have tested it with FreeNAS and Ceph and the snapshots were deleted with no issues, so I am afraid the problem lies with the vSphere CSI driver. Sorry, but I have not tested it with vSphere CSI.
Regards,
Enrique