# Preparing Air-gap NDK

## High-Level Process

!!! warning
    NKP supports NDK v2.0.0 at the time of writing this lab, with CSI version 3.3.8. We will use NDK v2.0.0 with NKP v2.16.1 for this lab. CSI version 3.3.8 is required for Nutanix Files replication and protection.
- Download the NDK `v2.0.0` binaries that are available on the Nutanix Support Portal
- Upload the NDK containers to an internal Harbor registry
- Enable NDK to trust the internal Harbor registry. See here
- Install NDK `v2.0.0`
## Setup Harbor Internal Registry

Follow the instructions in Harbor Installation to set up an internal Harbor registry for storing the NDK `v2.0.0` containers.

- Login to Harbor
- Create a project called `nkp` in Harbor
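If you prefer to script the project creation instead of using the web UI, Harbor exposes a REST API for this. A minimal sketch, assuming a Harbor v2.x instance; `HARBOR_URL` and `HARBOR_ADMIN_PASSWORD` are placeholders for your lab environment:

```shell
#!/usr/bin/env bash
# Sketch: create the "nkp" project through Harbor's v2.0 REST API.
# HARBOR_URL and HARBOR_ADMIN_PASSWORD are placeholders, not lab-provided values.
set -euo pipefail

HARBOR_URL=${HARBOR_URL:-https://harbor.example.com}

# Build the JSON body for the project-creation request
project_payload() {
  printf '{"project_name": "%s", "public": false}' "$1"
}

# Only call the API when admin credentials are actually provided
if [ -n "${HARBOR_ADMIN_PASSWORD:-}" ]; then
  curl -sk -u "admin:${HARBOR_ADMIN_PASSWORD}" \
    -H "Content-Type: application/json" \
    -X POST "${HARBOR_URL}/api/v2.0/projects" \
    -d "$(project_payload nkp)"
fi
```

A `201 Created` (or `409 Conflict` if the project already exists) indicates the registry is reachable and the credentials work.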
## Prepare for NDK Installation

### Download NDK Binaries
- Login to the Nutanix Portal using your credentials
- Go to Downloads > Nutanix Data Services for Kubernetes (NDK)
- Scroll and choose Nutanix Data Services for Kubernetes (Version: 2.0.0)
- Open a new VSCode window on your jumphost VM
- In the VSCode Explorer pane, click on the existing `$HOME` folder
- Click on New Folder and name it: `ndk`
- In the VSCode Explorer pane, click the `$HOME/ndk` folder
- On the VSCode menu, select Terminal > New Terminal
- Browse to the `ndk` directory
- Download the NDK binaries bundle from the link you copied earlier
- Extract the NDK binaries
- Source the `.env` file to import environment variables
- Load the NDK container images into the local Docker instance
- In VSCode, under the newly created `ndk` folder, click on New File and create a file with the following name:
- Add (append) the following environment variables and save the file
    ```shell
    export NDK_VERSION=_your_ndk_version # (1)!
    export KUBE_RBAC_PROXY_VERSION=_your_kube_rbac_proxy_version # (2)!
    export KUBECTL_VERSION=_your_kubectl_version # (3)!
    export IMAGE_REGISTRY=_your_harbor_registry_url/nkp
    ```

    1. Get the `NDK` tag version from the `docker load -i ndk-${NDK_VERSION}.tar` command output
    2. Get the `KUBE_RBAC_PROXY_VERSION` tag version from the `docker load -i ndk-${NDK_VERSION}.tar` command output
    3. Get the `KUBECTL_VERSION` tag version from the `docker load -i ndk-${NDK_VERSION}.tar` command output
- Load the NDK container images and upload them to the internal Harbor registry

    ```shell
    docker login ${IMAGE_REGISTRY}

    for img in ndk/manager:${NDK_VERSION} \
               ndk/infra-manager:${NDK_VERSION} \
               ndk/job-scheduler:${NDK_VERSION} \
               ndk/kube-rbac-proxy:${KUBE_RBAC_PROXY_VERSION} \
               ndk/bitnami-kubectl:${KUBECTL_VERSION}; do
      docker tag ${img} ${IMAGE_REGISTRY}/${img}
      docker push ${IMAGE_REGISTRY}/${img}
    done
    ```

    For example, with the values used in this lab:

    ```shell
    docker login harbor.example.com/nkp

    for img in ndk/manager:2.0.0 \
               ndk/infra-manager:2.0.0 \
               ndk/job-scheduler:2.0.0 \
               ndk/kube-rbac-proxy:v0.17.0 \
               ndk/bitnami-kubectl:1.30.3; do
      docker tag ${img} harbor.example.com/nkp/${img}
      docker push harbor.example.com/nkp/${img}
    done
    ```
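Before running the loop against a live registry, it can help to preview exactly which tag and push commands will execute. The sketch below adds a hypothetical `run` wrapper with a `DRY_RUN` flag (not part of the lab's tooling); the default variable values are the lab's example values and would normally come from your `.env` file:

```shell
#!/usr/bin/env bash
# Sketch: dry-run preview of the tag/push loop. With DRY_RUN=1 (the
# default here) each docker command is printed instead of executed.
set -euo pipefail

IMAGE_REGISTRY=${IMAGE_REGISTRY:-harbor.example.com/nkp}
NDK_VERSION=${NDK_VERSION:-2.0.0}
KUBE_RBAC_PROXY_VERSION=${KUBE_RBAC_PROXY_VERSION:-v0.17.0}
KUBECTL_VERSION=${KUBECTL_VERSION:-1.30.3}
DRY_RUN=${DRY_RUN:-1}   # set DRY_RUN=0 to actually tag and push

# Hypothetical helper: echo the command in dry-run mode, else run it
run() { if [ "${DRY_RUN}" = 1 ]; then echo "$*"; else "$@"; fi; }

for img in ndk/manager:${NDK_VERSION} \
           ndk/infra-manager:${NDK_VERSION} \
           ndk/job-scheduler:${NDK_VERSION} \
           ndk/kube-rbac-proxy:${KUBE_RBAC_PROXY_VERSION} \
           ndk/bitnami-kubectl:${KUBECTL_VERSION}; do
  run docker tag "${img}" "${IMAGE_REGISTRY}/${img}"
  run docker push "${IMAGE_REGISTRY}/${img}"
done
```

Running it once with `DRY_RUN=1` lets you confirm every target reference lands under `${IMAGE_REGISTRY}` before any bytes move.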
## Install NDK on Primary NKP Cluster
- Login to the VSCode Terminal
- Set your NKP cluster KUBECONFIG
- Test the connection to the `nkpprimary` cluster

    ```shell
    $ kubectl get nodes
    NAME                                STATUS   ROLES           AGE    VERSION
    nkpprimary-md-0-vd5kr-ff8r8-hq764   Ready    <none>          3d4h   v1.32.3
    nkpprimary-md-0-vd5kr-ff8r8-jjpvx   Ready    <none>          3d4h   v1.32.3
    nkpprimary-md-0-vd5kr-ff8r8-md28h   Ready    <none>          3d4h   v1.32.3
    nkpprimary-md-0-vd5kr-ff8r8-xvmf6   Ready    <none>          3d4h   v1.32.3
    nkpprimary-xnnk5-6pnr8              Ready    control-plane   3d4h   v1.32.3
    nkpprimary-xnnk5-87slh              Ready    control-plane   3d4h   v1.32.3
    nkpprimary-xnnk5-fjdd4              Ready    control-plane   3d4h   v1.32.3
    ```
- Install NDK

    ```shell
    helm upgrade -n ntnx-system --install ndk chart/ \
      --set manager.repository="$IMAGE_REGISTRY/ndk/manager" \
      --set manager.tag=${NDK_VERSION} \
      --set infraManager.repository="$IMAGE_REGISTRY/ndk/infra-manager" \
      --set infraManager.tag=${NDK_VERSION} \
      --set kubeRbacProxy.repository="$IMAGE_REGISTRY/ndk/kube-rbac-proxy" \
      --set kubeRbacProxy.tag=${KUBE_RBAC_PROXY_VERSION} \
      --set bitnamiKubectl.repository="$IMAGE_REGISTRY/ndk/bitnami-kubectl" \
      --set bitnamiKubectl.tag=${KUBECTL_VERSION} \
      --set jobScheduler.repository="$IMAGE_REGISTRY/ndk/job-scheduler" \
      --set jobScheduler.tag=${NDK_VERSION} \
      --set config.secret.name=nutanix-csi-credentials \
      --set tls.server.enable=false
    ```

    For example, with the values used in this lab:

    ```shell
    helm upgrade -n ntnx-system --install ndk chart/ \
      --set manager.repository="harbor.example.com/nkp/ndk/manager" \
      --set manager.tag=2.0.0 \
      --set infraManager.repository="harbor.example.com/nkp/ndk/infra-manager" \
      --set infraManager.tag=2.0.0 \
      --set kubeRbacProxy.repository="harbor.example.com/nkp/ndk/kube-rbac-proxy" \
      --set kubeRbacProxy.tag=v0.17.0 \
      --set bitnamiKubectl.repository="harbor.example.com/nkp/ndk/bitnami-kubectl" \
      --set bitnamiKubectl.tag=1.30.3 \
      --set jobScheduler.repository="harbor.example.com/nkp/ndk/job-scheduler" \
      --set jobScheduler.tag=2.0.0 \
      --set config.secret.name=nutanix-csi-credentials \
      --set tls.server.enable=false
    ```
- Check that all NDK custom resources are running (4 of 4 containers should be running inside the `ndk-controller-manager` pod)

    ```shell
    Active namespace is "ntnx-system".
    $ k get all -l app.kubernetes.io/name=ndk
    NAME                                          READY   STATUS    RESTARTS   AGE
    pod/ndk-controller-manager-57fd7fc56b-gg5nl   4/4     Running   0          19m

    NAME                                             TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)          AGE
    service/ndk-controller-manager-metrics-service   ClusterIP      10.109.134.126   <none>         8443/TCP         19m
    service/ndk-intercom-service                     LoadBalancer   10.99.216.62     10.122.7.212   2021:30258/TCP   19m
    service/ndk-scheduler-webhook-service            ClusterIP      10.96.174.148    <none>         9444/TCP         19m
    service/ndk-webhook-service                      ClusterIP      10.107.189.171   <none>         443/TCP          19m

    NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/ndk-controller-manager   1/1     1            1           19m

    NAME                                                DESIRED   CURRENT   READY   AGE
    replicaset.apps/ndk-controller-manager-57fd7fc56b   1         1         1       19m
    ```
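The readiness check above can also be scripted by comparing the two halves of the pod's READY column. The `all_containers_ready` helper below is hypothetical (not part of NDK), and the kubectl calls are guarded so the snippet only queries a cluster when kubectl is present:

```shell
#!/usr/bin/env bash
# Sketch: scripted readiness check for the ndk-controller-manager pod.
# all_containers_ready is a hypothetical helper comparing the two
# halves of a kubectl READY value such as "4/4".
set -euo pipefail

all_containers_ready() {          # e.g. all_containers_ready "4/4"
  [ "${1%/*}" = "${1#*/}" ]
}

if command -v kubectl >/dev/null 2>&1; then
  # Wait for the deployment rollout to finish, then inspect the pod
  kubectl -n ntnx-system rollout status deployment/ndk-controller-manager --timeout=300s
  ready=$(kubectl -n ntnx-system get pods -l app.kubernetes.io/name=ndk \
            --no-headers | awk '{print $2}')
  all_containers_ready "${ready}" && echo "NDK controller ready (${ready})"
fi
```

This gives a non-zero exit code when any of the four containers is not yet ready, which is convenient in automation.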
NDK in the air-gap environment is now installed.

Proceed to configuring NDK here.