Preparing NDK
In this section we will use a single Prism Central (PC), Prism Element (PE), and K8s cluster to test the data recovery capabilities of NDK.
Prerequisites
- NDK `v1.2` or later deployed on a Nutanix cluster
- Nutanix Kubernetes Platform (NKP) cluster `v2.15` or later deployed, accessible via `kubectl`. See NKP Deployment for NKP install instructions.
- Internal Harbor container registry. See Harbor Installation
- Nutanix CSI driver installed for storage integration [pre-configured with NKP install]
- Networking configured to allow communication between the Kubernetes cluster, PC, and PE
- Traefik Ingress controller installed for external access [pre-configured with NKP install]
- Kubernetes load balancer installed to facilitate replication workflows [MetalLB pre-configured with NKP install]
- Linux Tools VM or equivalent environment with `kubectl`, `helm`, `curl`, `docker`, and `jq` installed. See Jumphost VM for details.
- PC, PE, and NKP access credentials
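Before proceeding, a quick sanity check from the jumphost confirms the required CLI tooling is in place (reported versions will vary by environment):

```bash
# Verify the jumphost tooling required for this lab is installed.
kubectl version --client
helm version
docker --version
jq --version
curl --version
```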
High-Level Process
Warning
At the time of writing this lab, NKP supports only NDK `v1.2`. We will use NDK `v1.2` with NKP `v2.15` for this lab. This lab will be updated when NKP adds support for NDK `v1.3`.
- Download the NDK `v1.2` binaries, which are available on the Nutanix Support Portal
- Upload the NDK container images to an internal Harbor registry
- Enable NDK to trust the internal Harbor registry. See here
- Install NDK `v1.2`
Setup source NKP in Primary PC/PE/K8s
Make sure to name your NKP cluster appropriately so it is easy to identify.
For the purposes of this lab, we will call the source NKP cluster nkpprimary.
Follow the instructions in NKP Deployment to set up the source/primary NKP K8s cluster.
Setup Harbor Internal Registry
Follow the instructions in Harbor Installation to set up an internal Harbor registry for storing the NDK `v1.2` container images.

- Log in to Harbor
- Create a project called `nkp` in Harbor
Prepare for NDK Installation
Download NDK Binaries
- Open a new `VSCode` window on your jumphost VM
- In the `VSCode` Explorer pane, click on the existing `$HOME` folder
- Click on New Folder and name it: `ndk`
- In the `VSCode` Explorer pane, click the `$HOME/ndk` folder
- On the `VSCode` menu, select `Terminal` > `New Terminal`
- Browse to the `ndk` directory
- In `VSCode`, under the newly created `ndk` folder, click on New File and create a file with the following name: `.env`
- Add (append) the following environment variables and save it (a sample is sketched below)
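As a reference, a minimal `.env` for this lab could contain the variables used by the commands later in this section; the values shown here match the worked examples, so adjust `IMAGE_REGISTRY` to your own Harbor hostname and project:

```bash
# Sample $HOME/ndk/.env - adjust IMAGE_REGISTRY to your Harbor registry/project.
export NDK_VERSION=1.2.0
export KUBE_RBAC_PROXY_VERSION=v0.17.0
export KUBECTL_VERSION=1.30.3
export IMAGE_REGISTRY=harbor.example.com/nkp
```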
- Source the `.env` file to import the environment variables
- Log in to the Nutanix Portal using your credentials
- Go to Downloads > Nutanix Data Services for Kubernetes (NDK)
- Scroll and choose Nutanix Data Services for Kubernetes (Version: 1.2.0)
- Download the NDK binaries bundle from the link you copied earlier
- Extract the NDK binaries (see the sketch after this list)
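A hedged example of the last two steps, assuming the bundle is published as a gzipped tarball named `ndk-1.2.0.tar.gz` (the actual file name and download link come from the portal page):

```bash
# Replace <download-link> with the URL copied from the Nutanix Portal.
# The bundle name below is an assumption; use the file name you actually downloaded.
cd $HOME/ndk
curl -Lo ndk-${NDK_VERSION}.tar.gz "<download-link>"
tar -xzvf ndk-${NDK_VERSION}.tar.gz
ls -l   # the bundle typically contains the NDK image tarball and the Helm chart directory
```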
Upload NDK Binaries to Internal Registry
- Load the NDK container images and upload them to the internal Harbor registry

```bash
docker load -i ndk-${NDK_VERSION}.tar
docker login ${IMAGE_REGISTRY}
for img in ndk/manager:${NDK_VERSION} \
           ndk/infra-manager:${NDK_VERSION} \
           ndk/job-scheduler:${NDK_VERSION} \
           ndk/kube-rbac-proxy:${KUBE_RBAC_PROXY_VERSION} \
           ndk/bitnami-kubectl:${KUBECTL_VERSION}; do
  docker tag $img ${IMAGE_REGISTRY}/${img}
  docker push ${IMAGE_REGISTRY}/${img}
done
```

For example, with the values used in this lab:

```bash
docker load -i ndk-1.2.0.tar
docker login harbor.example.com/nkp
for img in ndk/manager:1.2.0 \
           ndk/infra-manager:1.2.0 \
           ndk/job-scheduler:1.2.0 \
           ndk/kube-rbac-proxy:v0.17.0 \
           ndk/bitnami-kubectl:1.30.3; do
  docker tag $img harbor.example.com/nkp/$img
  docker push harbor.example.com/nkp/$img
done
```
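Optionally, confirm that the retagged images exist locally before relying on the pushes (you can also browse the `nkp` project in the Harbor UI):

```bash
# List the locally tagged copies destined for the internal registry.
docker image ls | grep "${IMAGE_REGISTRY}"
```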
Install NDK on Primary NKP Cluster
- Log in to the `VSCode` Terminal
- Set your NKP cluster `KUBECONFIG` (a hedged example follows)
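For example, assuming the `nkpprimary` kubeconfig was saved to `$HOME/nkpprimary.conf` during the NKP deployment (adjust the path to wherever you stored it):

```bash
# Point kubectl at the primary NKP cluster; the kubeconfig path is an assumption.
export KUBECONFIG=$HOME/nkpprimary.conf
kubectl config current-context
```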
- Test the connection to the `nkpprimary` cluster

```
$ kubectl get nodes
NAME                                STATUS   ROLES           AGE    VERSION
nkpprimary-md-0-vd5kr-ff8r8-hq764   Ready    <none>          3d4h   v1.32.3
nkpprimary-md-0-vd5kr-ff8r8-jjpvx   Ready    <none>          3d4h   v1.32.3
nkpprimary-md-0-vd5kr-ff8r8-md28h   Ready    <none>          3d4h   v1.32.3
nkpprimary-md-0-vd5kr-ff8r8-xvmf6   Ready    <none>          3d4h   v1.32.3
nkpprimary-xnnk5-6pnr8              Ready    control-plane   3d4h   v1.32.3
nkpprimary-xnnk5-87slh              Ready    control-plane   3d4h   v1.32.3
nkpprimary-xnnk5-fjdd4              Ready    control-plane   3d4h   v1.32.3
```
- Install NDK

```bash
helm upgrade -n ntnx-system --install ndk chart/ \
  --set manager.repository="$IMAGE_REGISTRY/ndk/manager" \
  --set manager.tag=${NDK_VERSION} \
  --set infraManager.repository="$IMAGE_REGISTRY/ndk/infra-manager" \
  --set infraManager.tag=${NDK_VERSION} \
  --set kubeRbacProxy.repository="$IMAGE_REGISTRY/ndk/kube-rbac-proxy" \
  --set kubeRbacProxy.tag=${KUBE_RBAC_PROXY_VERSION} \
  --set bitnamiKubectl.repository="$IMAGE_REGISTRY/ndk/bitnami-kubectl" \
  --set bitnamiKubectl.tag=${KUBECTL_VERSION} \
  --set jobScheduler.repository="$IMAGE_REGISTRY/ndk/job-scheduler" \
  --set jobScheduler.tag=${NDK_VERSION} \
  --set config.secret.name=nutanix-csi-credentials \
  --set tls.server.enable=false
```

For example, with the values used in this lab:

```bash
helm upgrade -n ntnx-system --install ndk chart/ \
  --set manager.repository="harbor.example.com/nkp/ndk/manager" \
  --set manager.tag=1.2.0 \
  --set infraManager.repository="harbor.example.com/nkp/ndk/infra-manager" \
  --set infraManager.tag=1.2.0 \
  --set kubeRbacProxy.repository="harbor.example.com/nkp/ndk/kube-rbac-proxy" \
  --set kubeRbacProxy.tag=v0.17.0 \
  --set bitnamiKubectl.repository="harbor.example.com/nkp/ndk/bitnami-kubectl" \
  --set bitnamiKubectl.tag=1.30.3 \
  --set jobScheduler.repository="harbor.example.com/nkp/ndk/job-scheduler" \
  --set jobScheduler.tag=1.2.0 \
  --set config.secret.name=nutanix-csi-credentials \
  --set tls.server.enable=false
```
- Check that all NDK components are running (4 of 4 containers should be running inside the `ndk-controller-manager` pod)

```
Active namespace is "ntnx-system".
$ k get all -l app.kubernetes.io/name=ndk
NAME                                          READY   STATUS    RESTARTS   AGE
pod/ndk-controller-manager-57fd7fc56b-gg5nl   4/4     Running   0          19m

NAME                                             TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)          AGE
service/ndk-controller-manager-metrics-service   ClusterIP      10.109.134.126   <none>         8443/TCP         19m
service/ndk-intercom-service                     LoadBalancer   10.99.216.62     10.122.7.212   2021:30258/TCP   19m
service/ndk-scheduler-webhook-service            ClusterIP      10.96.174.148    <none>         9444/TCP         19m
service/ndk-webhook-service                      ClusterIP      10.107.189.171   <none>         443/TCP          19m

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ndk-controller-manager   1/1     1            1           19m

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/ndk-controller-manager-57fd7fc56b   1         1         1       19m
```
NDK Custom Resources for K8s
To begin protecting applications with NDK, it is good to become familiar with the NDK custom resources and how they are used to manage data protection. The following table provides a brief overview of the NDK custom resources and their purposes.
For more information about the NDK custom resources, see the NDK Custom Resources section of the NDK documentation.
Tip
We will use NDK custom resources throughout the lab to accomplish data protection tasks and to show the relationships between these custom resources.
| Custom Resource | Purpose |
|---|---|
| `StorageCluster` | Defines the Nutanix storage fabric and UUIDs for PE and PC. |
| `Application` | Defines a logical group of K8s resources for data protection. |
| `ApplicationSnapshotContent` | Stores infrastructure-level data of an application snapshot. |
| `ApplicationSnapshot` | Takes a snapshot of an application and its volumes. |
| `ApplicationSnapshotRestore` | Restores an application snapshot. |
| `Remote` | Defines a target Kubernetes cluster for replication. |
| `ReplicationTarget` | Specifies where to replicate an application snapshot. |
| `ApplicationSnapshotReplication` | Triggers snapshot replication to another cluster. |
| `JobScheduler` | Defines schedules for data protection jobs. |
| `ProtectionPlan` | Defines snapshot and replication rules and retention. |
| `AppProtectionPlan` | Applies one or more ProtectionPlans to an application. |
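After NDK is installed, you can confirm that these custom resource definitions are registered on the cluster; a minimal check (the exact CRD group names may differ between NDK releases):

```bash
# List the custom resource definitions installed by NDK.
kubectl get crds | grep -i nutanix
```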
Configure NDK
The first component we will configure in NDK is the `StorageCluster` custom resource. It represents the Nutanix cluster components, including the following:

- Prism Central (PC)
- Prism Element (PE)

By configuring the `StorageCluster` custom resource, we provide NDK with the Nutanix infrastructure information it needs.
- Log on to the Jumphost VM terminal in `VSCode`
- Get the UUIDs of PC and PE (one way to do this is sketched below)
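One way to retrieve both UUIDs is the Prism Central v3 `clusters/list` API with `curl` and `jq`. The `PRISM_CENTRAL`, `PC_USER`, and `PC_PASSWORD` variables below are illustrative placeholders for your environment; in the output, PE clusters report the `AOS` service while the Prism Central entry reports `PRISM_CENTRAL`:

```bash
# Illustrative placeholders - replace with your PC address and credentials.
export PRISM_CENTRAL=pc.example.com   # your PC IP/FQDN
export PC_USER=admin                  # your PC user
export PC_PASSWORD='changeme'         # your PC password

# List the clusters known to Prism Central with their names, UUIDs, and services.
curl -sk -u "${PC_USER}:${PC_PASSWORD}" \
  -H 'Content-Type: application/json' \
  -X POST "https://${PRISM_CENTRAL}:9440/api/nutanix/v3/clusters/list" \
  -d '{"kind":"cluster"}' \
  | jq -r '.entities[] | [.status.name, .metadata.uuid,
           (.status.resources.config.service_list | join(","))] | @tsv'
```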
- Add (append) the following environment variables to `$HOME/ndk/.env` and save it
- Note and export the external IP assigned to the NDK intercom service on the primary cluster (see the example below)
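For example, the external IP can be read from the `ndk-intercom-service` LoadBalancer service shown in the earlier `kubectl get all` output; the `PRIMARY_INTERCOM_IP` variable name here is illustrative:

```bash
# Read the external IP MetalLB assigned to the NDK intercom service and export it.
export PRIMARY_INTERCOM_IP=$(kubectl -n ntnx-system get service ndk-intercom-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo ${PRIMARY_INTERCOM_IP}
```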
- Add (append) the following environment variables to the `$HOME/ndk/.env` file and save it
- Source the `$HOME/ndk/.env` file
- Create the `StorageCluster` custom resource (a sketch follows this list)
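As a sketch only: a `StorageCluster` manifest ties together the PE and PC UUIDs gathered above. The API version and field names below are assumptions, so confirm them against the NDK documentation or the sample manifests shipped with the NDK bundle:

```bash
# Sketch - verify apiVersion, kind, and field names against the NDK documentation.
# PE_UUID and PC_UUID stand for the values appended to $HOME/ndk/.env above;
# the variable names are illustrative.
cat <<EOF | kubectl apply -f -
apiVersion: dataservices.nutanix.com/v1alpha1
kind: StorageCluster
metadata:
  name: nkpprimary-storage-cluster
spec:
  storageServerUuid: ${PE_UUID}       # Prism Element (PE) cluster UUID
  managementServerUuid: ${PC_UUID}    # Prism Central (PC) UUID
EOF
```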
Now we are ready to create local cluster snapshots and snapshot restores using the following NDK custom resources: `ApplicationSnapshot` and `ApplicationSnapshotRestore`.