Preparing NDK
In this section we will use just one Prism Central (PC)/Prism Element (PE)/K8s cluster to test the data recovery capabilities of NDK.
Prerequisites

- NDK `2.0.0` or later deployed on a Nutanix cluster
- Nutanix Kubernetes Platform (NKP) cluster `v2.16.1` or later deployed, accessible via `kubectl`. See NKP Deployment for NKP install instructions.
- Internal Harbor container registry
    - See Harbor Installation
    - Direct download from Docker.io is also possible [see inline notes in the lab]
- Nutanix CSI driver installed for storage integration [pre-configured with NKP install]
- Networking configured to allow communication between the Kubernetes cluster, PC, and PE
- Traefik Ingress controller installed for external access [pre-configured with NKP install]
- K8s Load Balancer installed to facilitate replication workflows [MetalLB pre-configured with NKP install]
- Linux Tools VM or equivalent environment with `kubectl`, `helm`, `curl`, `docker`, and `jq` installed. See Jumphost VM for details.
- PC, PE, and NKP access credentials
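Before starting, you can quickly confirm the required CLI tools are present on the jumphost. This is a small convenience sketch, not part of the official lab steps:

```shell
# Check that each tool required by the lab is on the PATH.
for tool in kubectl helm curl docker jq; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "OK: $tool"
  else
    echo "MISSING: $tool"
  fi
done
```

Any line reporting `MISSING` means that tool still needs to be installed before continuing.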
High-Level Process
Warning
At the time of writing this lab, NKP supports NDK `v2.0.0` with CSI version 3.3.8.
We will use NDK `v2.0.0` with NKP `v2.16.1` for this lab.
CSI version 3.3.8 is required for Nutanix Files replication and protection.
- Download the NDK `v2.0.0` binaries, which are available on the Nutanix Support Portal
- Get NDK container download credentials from the Nutanix Docker Hub private repository
- Install the NDK helm charts
- Install NDK `v2.0.0`
Setup source NKP in Primary PC/PE/K8s
Make sure to name your NKP cluster appropriately so it is easy to identify
For the purposes of this lab, we will call the source NKP cluster `nkpprimary`.
Follow the instructions in NKP Deployment to set up the source/primary NKP K8s cluster.
Prepare for NDK Installation
Are you installing in an air-gap environment?
Follow the instructions in NDK Air-Gap Deployment to set up the source/primary NKP K8s cluster.
Download NDK Binaries
- Open a new `VSCode` window on your jumphost VM
- In the `VSCode` Explorer pane, click on the existing `$HOME` folder
- Click on New Folder and name it: `ndk`
- In the `VSCode` Explorer pane, click the `$HOME/ndk` folder
- Login to the Nutanix Portal using your credentials
- Go to Downloads > Nutanix Data Services for Kubernetes (NDK)
- At the top of the download page, under Manage Access Token, get the Docker registry download credentials for the Nutanix Docker Hub private repository
- Obtain the values of the following:
    - Username
    - Access Token (password)

    We will use these values in the `.env` file
- In `VSC`, under the newly created `ndk` folder, click on New File and create a file with the following name:
- Add (append) the following environment variables and save it
- Source the `.env` file to import the environment variables
- Scroll and choose Nutanix Data Services for Kubernetes (Version: 2.0.0)
- Download the NDK binaries bundle from the link you copied earlier
- Extract the NDK binaries
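The VSCode steps above can also be done entirely from the terminal. The sketch below creates the `ndk` folder and the `.env` file, then sources it; the variable names (`NDK_USERNAME`, `NDK_TOKEN`) and the bundle filename are placeholders/assumptions — substitute the actual credential values and download link from the portal:

```shell
# Create the working folder used throughout this lab.
mkdir -p "$HOME/ndk"

# Create the .env file holding the Docker registry credentials obtained
# from the portal's Manage Access Token page (placeholder values shown).
cat > "$HOME/ndk/.env" <<'EOF'
export NDK_USERNAME="<portal-username>"
export NDK_TOKEN="<portal-access-token>"
EOF

# Import the variables into the current shell.
source "$HOME/ndk/.env"

# Extract the downloaded NDK bundle (the filename is an assumption --
# use the actual bundle name from the portal download):
# tar -xzf "$HOME/ndk/ndk-2.0.0.tar.gz" -C "$HOME/ndk"
```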
Install NDK on Primary NKP Cluster
- Login to the `VSCode` Terminal
- Set your NKP cluster KUBECONFIG
- Test the connection to the `nkpprimary` cluster

    ```
    $ kubectl get nodes
    NAME                                STATUS   ROLES           AGE    VERSION
    nkpprimary-md-0-vd5kr-ff8r8-hq764   Ready    <none>          3d4h   v1.32.3
    nkpprimary-md-0-vd5kr-ff8r8-jjpvx   Ready    <none>          3d4h   v1.32.3
    nkpprimary-md-0-vd5kr-ff8r8-md28h   Ready    <none>          3d4h   v1.32.3
    nkpprimary-md-0-vd5kr-ff8r8-xvmf6   Ready    <none>          3d4h   v1.32.3
    nkpprimary-xnnk5-6pnr8              Ready    control-plane   3d4h   v1.32.3
    nkpprimary-xnnk5-87slh              Ready    control-plane   3d4h   v1.32.3
    nkpprimary-xnnk5-fjdd4              Ready    control-plane   3d4h   v1.32.3
    ```

- Install NDK
- Check that all NDK custom resources are running (4 of 4 containers should be running inside the `ndk-controller-manager` pod)

    ```
    Active namespace is "ntnx-system".
    $ k get all -l app.kubernetes.io/name=ndk
    NAME                                          READY   STATUS    RESTARTS   AGE
    pod/ndk-controller-manager-754bcbf7d4-8wn55   4/4     Running   0          77m

    NAME                                             TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
    service/ndk-controller-manager-metrics-service   ClusterIP      10.96.236.12    <none>        8443/TCP         77m
    service/ndk-intercom-service                     LoadBalancer   10.102.58.136   10.x.x.216    2021:30215/TCP   77m
    service/ndk-scheduler-webhook-service            ClusterIP      10.111.99.86    <none>        9444/TCP         77m
    service/ndk-webhook-service                      ClusterIP      10.106.40.106   <none>        443/TCP          77m

    NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/ndk-controller-manager   1/1     1            1           77m

    NAME                                                DESIRED   CURRENT   READY   AGE
    replicaset.apps/ndk-controller-manager-754bcbf7d4   1         1         1       77m
    ```
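As a rough sketch, the install steps above look like the following from the terminal. The KUBECONFIG path, chart directory name, and release name are assumptions — use the paths from your own NKP download and the chart shipped inside the NDK bundle:

```shell
# Point kubectl at the primary NKP cluster (path is an assumption).
export KUBECONFIG="$HOME/nkpprimary.conf"

# Install the NDK helm chart extracted from the binaries bundle into the
# ntnx-system namespace (chart path and release name are assumptions).
helm install ndk "$HOME/ndk/ndk-helm-chart" \
  --namespace ntnx-system \
  --create-namespace

# Verify the controller pod reaches 4/4 Running.
kubectl -n ntnx-system get pods -l app.kubernetes.io/name=ndk
```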
NDK Custom Resources for K8s
To begin protecting applications with NDK, it is good to become familiar with the NDK custom resources and how they are used to manage data protection. The following table provides a brief overview of the NDK custom resources and their purposes.
For more information about the NDK custom resources, see the NDK Custom Resources section of the NDK documentation.
Tip
We will be using NDK custom resources throughout the lab to accomplish data protection tasks and to show the relationships between these custom resources.
| Custom Resource | Purpose |
|---|---|
| `StorageCluster` | Defines the Nutanix storage fabric and UUIDs for PE and PC. |
| `Application` | Defines a logical group of K8s resources for data protection. |
| `ApplicationSnapshotContent` | Stores infrastructure-level data of an application snapshot. |
| `ApplicationSnapshot` | Takes a snapshot of an application and its volumes. |
| `ApplicationSnapshotRestore` | Restores an application snapshot. |
| `Remote` | Defines a target Kubernetes cluster for replication. |
| `ReplicationTarget` | Specifies where to replicate an application snapshot. |
| `ApplicationSnapshotReplication` | Triggers snapshot replication to another cluster. |
| `JobScheduler` | Defines schedules for data protection jobs. |
| `ProtectionPlan` | Defines snapshot and replication rules and retention. |
| `AppProtectionPlan` | Applies one or more ProtectionPlans to an application. |
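To make the table concrete, here is a minimal sketch of what an `Application` resource might look like, grouping workloads by label. The `apiVersion`, field names, and the `demo-app` name are assumptions for illustration, not the authoritative schema — run `kubectl explain application` or consult the NDK documentation for the exact fields:

```shell
# Hypothetical Application CR grouping resources labeled app=demo.
# apiVersion and spec field names are assumptions -- verify against
# the NDK CRDs installed on your cluster.
cat <<'EOF' | kubectl apply -f -
apiVersion: dataservices.nutanix.com/v1alpha1
kind: Application
metadata:
  name: demo-app
  namespace: default
spec:
  applicationSelector:
    matchLabels:
      app: demo
EOF
```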
Configure NDK
The first component we will configure in NDK is the `StorageCluster`. It represents the Nutanix cluster components, including the following:
- Prism Central (PC)
- Prism Element (PE)
By configuring the `StorageCluster` custom resource, we provide NDK with the Nutanix infrastructure information it needs.
- Logon to the Jumphost VM Terminal in `VSCode`
- Get the UUIDs of PC and PE using the following command
- Add (append) the following environment variables to `$HOME/ndk/.env` and save it
- Note and export the external IP assigned to the NDK intercom service on the Primary Cluster
- Add (append) the following environment variables to the `$HOME/ndk/.env` file and save it
- Source the `$HOME/ndk/.env` file
- Create the StorageCluster custom resource
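As a sketch, the `StorageCluster` resource ties the UUIDs gathered above to NDK. The `apiVersion`, field names, and the `PE_UUID`/`PC_UUID` variable names below are assumptions — use the manifest and variable names from your own `.env` file and the schema shipped with the NDK bundle:

```shell
# Hypothetical StorageCluster manifest using env vars sourced from
# $HOME/ndk/.env (UUID variable names and spec fields are assumptions).
cat <<EOF | kubectl apply -f -
apiVersion: dataservices.nutanix.com/v1alpha1
kind: StorageCluster
metadata:
  name: storage-cluster
spec:
  storageServerUuid: ${PE_UUID}      # Prism Element cluster UUID
  managementServerUuid: ${PC_UUID}   # Prism Central UUID
EOF
```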
Now we are ready to create local cluster snapshots and snapshot restores using the following NDK custom resources: `ApplicationSnapshot` and `ApplicationSnapshotRestore`.
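As a preview of those two resources, here is a hedged sketch of a snapshot and a restore, assuming an `Application` named `demo-app` already exists. The `apiVersion` and field names are assumptions for illustration — verify the exact schema with `kubectl explain` on your cluster before applying:

```shell
# Hypothetical snapshot of an existing Application named demo-app
# (apiVersion and spec field names are assumptions).
cat <<'EOF' | kubectl apply -f -
apiVersion: dataservices.nutanix.com/v1alpha1
kind: ApplicationSnapshot
metadata:
  name: demo-app-snap-1
  namespace: default
spec:
  source:
    applicationRef:
      name: demo-app
EOF

# Later, restore from that snapshot (field names are assumptions).
cat <<'EOF' | kubectl apply -f -
apiVersion: dataservices.nutanix.com/v1alpha1
kind: ApplicationSnapshotRestore
metadata:
  name: demo-app-restore-1
  namespace: default
spec:
  applicationSnapshotName: demo-app-snap-1
EOF
```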