Deploying Nutanix Enterprise AI (NAI) NVD Reference Application
Version 2.5.0
This version of the NAI deployment is based on the Nutanix Enterprise AI (NAI) v2.5.0 release.
```mermaid
stateDiagram-v2
    direction LR

    state DeployNAI {
        [*] --> DeployNAIAdmin
        DeployNAIAdmin --> InstallSSLCert
        InstallSSLCert --> DownloadModel
        DownloadModel --> CreateNAI
        CreateNAI --> [*]
    }

    [*] --> PreRequisites
    PreRequisites --> DeployNAI
    DeployNAI --> TestNAI : next section
    TestNAI --> [*]
```
Prepare for NAI Deployment
Changes in NAI v2.5.0
- Istio Ingress gateway is replaced with Envoy Gateway
- Knative is removed from NAI
- Kserve has been upgraded to 0.15.0
Enable NKP Applications
Enable these NKP applications from the NKP GUI.
Note
In this lab, we will be using the Management Cluster Workspace to deploy our Nutanix Enterprise AI (NAI).
However, in a customer environment, it is recommended to use a separate workload NKP cluster.
Info
The helm charts and the container images for these applications are stored in the internal Harbor registry. These images were uploaded to Harbor during the NKP install in this section.
- In the NKP GUI, Go to Clusters
- Click on Management Cluster Workspace
- Go to Applications
- Search for and enable the following applications; follow this order to install the dependencies for the NAI application:
  - Kube-prometheus-stack: version `71.0.0` or later (pre-installed on the NKP cluster)
  - Cert-manager: `v1.17.2`
Note
The following applications are pre-installed on an NKP cluster with a Pro license:
- Cert Manager `v1.17.2` or higher
Check if Cert Manager is installed (pre-installed on NKP cluster)
If not installed, use the following command to install it
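The check and install commands are elided above. A minimal sketch of the check, assuming kubectl access from the jumphost (the namespace name `cert-manager` is the chart default, an assumption to verify in your cluster):

```shell
# Sketch: detect whether cert-manager is already installed (assumption: it lives
# in the cert-manager namespace). Guarded so the snippet degrades gracefully
# when kubectl or cluster access is unavailable.
if kubectl get ns cert-manager >/dev/null 2>&1; then
  CM_STATUS=present
  kubectl get pods -n cert-manager   # expect controller, cainjector and webhook pods Running
else
  CM_STATUS=absent
fi
echo "cert-manager: ${CM_STATUS}"
```

If the status comes back `absent`, install cert-manager from the NKP Applications page as described above.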
- Log in to VSC on the jumphost VM, append the following environment variables to the `$HOME/airgap-nai/.env` file, and save it.
- In VSC, go to the Terminal and run the following commands to source the environment variables.
- Enable Envoy Gateway `v1.5.0` using the following command:

```text
Pulled: harbor.10.x.x.134.nip.io/nkp/gateway-helm:v1.5.0
Digest: sha256:4e49511296e23e3d1400c92cfb38a5c26030501ec7353883e4ccad9fd7cc4c2c
NAME: eg
LAST DEPLOYED: Thu Feb 26 00:58:39 2026
NAMESPACE: envoy-gateway-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
**************************************************************************
*** PLEASE BE PATIENT: Envoy Gateway may take a few minutes to install ***
**************************************************************************

Envoy Gateway is an open source project for managing Envoy Proxy as a standalone or Kubernetes-based application gateway.

Thank you for installing Envoy Gateway! 🎉

Your release is named: eg. 🎉
Your release is in namespace: envoy-gateway-system. 🎉

To learn more about the release, try:

  $ helm status eg -n envoy-gateway-system
  $ helm get all eg -n envoy-gateway-system

To have a quickstart of Envoy Gateway, please refer to https://gateway.envoyproxy.io/latest/tasks/quickstart.

To get more details, please visit https://gateway.envoyproxy.io and https://github.com/envoyproxy/gateway.
```
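The install command itself is elided above. From the release output (chart `gateway-helm:v1.5.0` pulled from the local Harbor, release `eg` in the `envoy-gateway-system` namespace) it can be reconstructed roughly as follows; treat the chart path and flags as assumptions and verify them against your Harbor project:

```shell
# Reconstructed sketch (not verbatim from the lab): install Envoy Gateway from
# the local registry. Guarded so the snippet is inert without helm/cluster access.
if command -v helm >/dev/null 2>&1 && [ -n "${REGISTRY_HOST:-}" ]; then
  helm install eg "oci://${REGISTRY_HOST}/gateway-helm" \
    --version v1.5.0 \
    -n envoy-gateway-system --create-namespace
  EG_INSTALL=attempted
else
  EG_INSTALL=skipped   # run on the jumphost after sourcing the .env file
fi
echo "envoy gateway install: ${EG_INSTALL}"
```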
Check if Envoy Gateway resources are ready
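The readiness command is elided here. The Envoy Gateway quickstart suggests waiting on the controller deployment, which can be sketched as:

```shell
# From the Envoy Gateway docs: wait for the controller deployment to be Available.
# Guarded so the snippet is inert without kubectl/cluster access.
if kubectl wait --timeout=5m -n envoy-gateway-system deployment/envoy-gateway \
     --for=condition=Available >/dev/null 2>&1; then
  EG_READY=yes
else
  EG_READY=no   # also "no" when kubectl or the cluster is unreachable
fi
echo "envoy-gateway available: ${EG_READY}"
```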
- Create an EnvoyProxy resource for the Envoy Gateway to pull its image from the local private registry:

```shell
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: nai-envoyproxy
  namespace: envoy-gateway-system
spec:
  provider:
    type: Kubernetes
    kubernetes:
      envoyDeployment:
        pod:
          imagePullSecrets:
            - name: registry-image-pull-secret
        container:
          image: "${REGISTRY_HOST}/nutanix/nai-envoy:distroless-v1.35.0"
EOF
```
Run the Kserve CRD installation
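The CRD install command is elided above. Mirroring the kserve chart install that follows, it is likely of this shape; the `kserve-crd` chart name and path are assumptions to verify in your Harbor project:

```shell
# Hypothetical reconstruction -- verify the chart path before running.
KSERVE_CRD_CMD="helm install kserve-crd oci://\${REGISTRY_HOST}/kserve-crd --version v0.15.0 -n kserve --create-namespace"
echo "${KSERVE_CRD_CMD}"   # run this on the jumphost after sourcing the .env file
```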
- Run the Kserve installation:

```shell
helm install kserve \
  oci://${REGISTRY_HOST}/kserve \
  --version v0.15.0 \
  -n kserve \
  --set controller.image.repository=${REGISTRY_HOST}/kserve-controller \
  --set controller.image.tag=v0.15.0 \
  --set kserve.controller.deploymentMode=RawDeployment \
  --set kserve.controller.gateway.disableIngressCreation=true
```

```text
Pulled: harbor.10.x.x.134.nip.io/nkp/kserve:v0.15.0
Digest: sha256:cafd90ab1d91a54a28c1ff2761d976bdda0bb173675ef392a16ac250b044d15f
I0226 01:59:34.229355  555781 warnings.go:110] "Warning: spec.privateKey.rotationPolicy: In cert-manager >= v1.18.0, the default value changed from `Never` to `Always`."
NAME: kserve
LAST DEPLOYED: Thu Feb 26 01:59:33 2026
NAMESPACE: kserve
STATUS: deployed
REVISION: 1
TEST SUITE: None
```
- Run the OpenTelemetry operator installation:

```shell
helm upgrade --install opentelemetry-operator oci://${REGISTRY_HOST}/opentelemetry-operator \
  --version 0.93.0 \
  -n opentelemetry --create-namespace --wait \
  --set manager.image.repository=${REGISTRY_HOST}/nutanix/nai-opentelemetry-operator \
  --set manager.collectorImage.repository=${REGISTRY_HOST}/nutanix/nai-opentelemetry-collector-k8s \
  --set kubeRBACProxy.image.repository=${REGISTRY_HOST}/nutanix/nai-kube-rbac-proxy
```

```text
LAST DEPLOYED: Thu Feb 26 02:07:06 2026
NAMESPACE: opentelemetry
STATUS: deployed
REVISION: 1
NOTES:
[WARNING] No resource limits or requests were set. Consider setter resource requests and limits via the `resources` field.
opentelemetry-operator has been installed. Check its status by running:
kubectl --namespace opentelemetry get pods -l "app.kubernetes.io/instance=opentelemetry-operator"

Visit https://github.com/open-telemetry/opentelemetry-operator for instructions on how to create & configure OpenTelemetryCollector and Instrumentation custom resources by using the Operator.
```
Deploy NAI
- Source the environment variables (if not done so already).
- In the VSCode Explorer pane, browse to the `$HOME/airgap-nai` folder.
- Run the following command to create a helm values file:

```shell
cat << EOF > nai-operators-override-values.yaml
imagePullSecret:
  credentials:
    registry: ${REGISTRY_HOST}
naiRedis:
  naiRedisImage:
    name: ${REGISTRY_HOST}/nutanix/nai-redis
naiJobs:
  naiJobsImage:
    image: ${REGISTRY_HOST}/nutanix/nai-jobs
nai-clickhouse-operator:
  operator:
    image:
      registry: ${REGISTRY_HOST}/nutanix
      repository: nai-clickhouse-operator
  metrics:
    image:
      registry: ${REGISTRY_HOST}/nutanix
      repository: nai-clickhouse-metrics-exporter
ai-gateway-helm:
  extProc:
    image:
      repository: ${REGISTRY_HOST}/nutanix/nai-ai-gateway-extproc
      tag: c4f26a8
  controller:
    image:
      repository: ${REGISTRY_HOST}/nutanix/nai-ai-gateway-controller
      tag: c4f26a8
EOF
```

The generated file will look similar to the following:

```yaml
imagePullSecret:
  credentials:
    registry: harbor.10.x.x.134.nip.io/nkp
naiRedis:
  naiRedisImage:
    name: harbor.10.x.x.134.nip.io/nkp/nutanix/nai-redis
naiJobs:
  naiJobsImage:
    image: harbor.10.x.x.134.nip.io/nkp/nutanix/nai-jobs
nai-clickhouse-operator:
  operator:
    image:
      registry: harbor.10.x.x.134.nip.io/nkp/nutanix
      repository: nai-clickhouse-operator
  metrics:
    image:
      registry: harbor.10.x.x.134.nip.io/nkp/nutanix
      repository: nai-clickhouse-metrics-exporter
ai-gateway-helm:
  extProc:
    image:
      repository: harbor.10.x.x.134.nip.io/nkp/nutanix/nai-ai-gateway-extproc
      tag: c4f26a8
  controller:
    image:
      repository: harbor.10.x.x.134.nip.io/nkp/nutanix/nai-ai-gateway-controller
      tag: c4f26a8
```
- Install the nai-operators helm chart in the `nai-system` namespace:

```shell
helm upgrade --install nai-operators oci://${REGISTRY_HOST}/nai-operators --version=2.5.0 \
  -n nai-system --create-namespace --wait \
  --set imagePullSecret.credentials.username=${REGISTRY_USERNAME} \
  --set imagePullSecret.credentials.email=${REGISTRY_USERNAME} \
  --set imagePullSecret.credentials.password=${REGISTRY_PASSWORD} \
  --insecure-skip-tls-verify -f nai-operators-override-values.yaml
```

For example, with the variables expanded:

```shell
helm upgrade --install nai-operators oci://harbor.10.x.x.134.nip.io/nkp/nai-operators --version=2.5.0 \
  -n nai-system --create-namespace --wait \
  --set imagePullSecret.credentials.username=admin \
  --set imagePullSecret.credentials.email=admin \
  --set imagePullSecret.credentials.password=_XXXXXXX \
  --insecure-skip-tls-verify -f nai-operators-override-values.yaml
```
- Run the following command to create a helm values file:

```shell
cat << EOF > nai-core-override-values.yaml
imagePullSecret:
  credentials:
    registry: ${REGISTRY_HOST}
naiIepOperator:
  iepOperatorImage:
    image: ${REGISTRY_HOST}/nutanix/nai-iep-operator
  modelProcessorImage:
    image: ${REGISTRY_HOST}/nutanix/nai-model-processor
naiInferenceUi:
  naiUiImage:
    image: ${REGISTRY_HOST}/nutanix/nai-inference-ui
naiJobs:
  naiJobsImage:
    image: ${REGISTRY_HOST}/nutanix/nai-jobs
naiApi:
  naiApiImage:
    image: ${REGISTRY_HOST}/nutanix/nai-api
  logger:
    logLevel: debug
  supportedTGIImage: ${REGISTRY_HOST}/nutanix/nai-tgi
  supportedKserveRuntimeImage: ${REGISTRY_HOST}/nutanix/nai-kserve-huggingfaceserver
  supportedVLLMImage: ${REGISTRY_HOST}/nutanix/nai-vllm
  supportedKserveCustomModelServerRuntimeImage: ${REGISTRY_HOST}/nutanix/nai-kserve-custom-model-server
# Details of super admin (first user in the nai system)
superAdmin:
  username: ${NAI_USER}
  password: ${NAI_TEMP_PASS} # At least 8 characters
  # email: admin@nutanix.com
  # firstName: admin
naiIam:
  iamProxy:
    image: ${REGISTRY_HOST}/nutanix/nai-iam-proxy
  iamProxyControlPlane:
    image: ${REGISTRY_HOST}/nutanix/nai-iam-proxy-control-plane
  iamUi:
    image: ${REGISTRY_HOST}/nutanix/nai-iam-ui
  iamUserAuthn:
    image: ${REGISTRY_HOST}/nutanix/nai-iam-user-authn
  iamThemis:
    image: ${REGISTRY_HOST}/nutanix/nai-iam-themis
  iamThemisBootstrap:
    image: ${REGISTRY_HOST}/nutanix/nai-iam-bootstrap
naiLabs:
  labsImage:
    image: ${REGISTRY_HOST}/nutanix/nai-rag-app
nai-clickhouse-keeper:
  clickhouseKeeper:
    image:
      registry: ${REGISTRY_HOST}/nutanix
      repository: nai-clickhouse-keeper
oauth2-proxy:
  image:
    repository: "${REGISTRY_HOST}/nutanix/nai-oauth2-proxy"
nai-clickhouse-server:
  clickhouse:
    image:
      registry: ${REGISTRY_HOST}/nutanix
      repository: nai-clickhouse-server
  initContainers:
    addUdf:
      image:
        registry: ${REGISTRY_HOST}/nutanix
        repository: nai-clickhouse-udf
    waitForKeeper:
      image:
        registry: ${REGISTRY_HOST}/nutanix
        repository: nai-jobs
nai-clickhouse-schemas:
  image:
    registry: ${REGISTRY_HOST}/nutanix
    repository: nai-clickhouse-schemas
naiMonitoring:
  opentelemetry:
    collectorImage: ${REGISTRY_HOST}/nutanix/nai-opentelemetry-collector-contrib:0.136.0
  targetAllocator:
    image:
      repository: ${REGISTRY_HOST}/nutanix/nai-target-allocator
naiDatabase:
  naiDbImage:
    image: ${REGISTRY_HOST}/nutanix/nai-postgres:16.1-alpine
EOF
```

The generated file will look similar to the following:

```yaml
imagePullSecret:
  credentials:
    registry: harbor.10.x.x.134.nip.io/nkp
naiIepOperator:
  iepOperatorImage:
    image: harbor.10.x.x.134.nip.io/nkp/nutanix/nai-iep-operator
  modelProcessorImage:
    image: harbor.10.x.x.134.nip.io/nkp/nutanix/nai-model-processor
naiInferenceUi:
  naiUiImage:
    image: harbor.10.x.x.134.nip.io/nkp/nutanix/nai-inference-ui
naiJobs:
  naiJobsImage:
    image: harbor.10.x.x.134.nip.io/nkp/nutanix/nai-jobs
naiApi:
  naiApiImage:
    image: harbor.10.x.x.134.nip.io/nkp/nutanix/nai-api
  logger:
    logLevel: debug
  supportedTGIImage: harbor.10.x.x.134.nip.io/nkp/nutanix/nai-tgi
  supportedKserveRuntimeImage: harbor.10.x.x.134.nip.io/nkp/nutanix/nai-kserve-huggingfaceserver
  supportedVLLMImage: harbor.10.x.x.134.nip.io/nkp/nutanix/nai-vllm
  supportedKserveCustomModelServerRuntimeImage: harbor.10.x.x.134.nip.io/nkp/nutanix/nai-kserve-custom-model-server
# Details of super admin (first user in the nai system)
superAdmin:
  username: admin
  password: _XXXXXXXXX # At least 8 characters
  # email: admin@nutanix.com
  # firstName: admin
naiIam:
  iamProxy:
    image: harbor.10.x.x.134.nip.io/nkp/nutanix/nai-iam-proxy
  iamProxyControlPlane:
    image: harbor.10.x.x.134.nip.io/nkp/nutanix/nai-iam-proxy-control-plane
  iamUi:
    image: harbor.10.x.x.134.nip.io/nkp/nutanix/nai-iam-ui
  iamUserAuthn:
    image: harbor.10.x.x.134.nip.io/nkp/nutanix/nai-iam-user-authn
  iamThemis:
    image: harbor.10.x.x.134.nip.io/nkp/nutanix/nai-iam-themis
  iamThemisBootstrap:
    image: harbor.10.x.x.134.nip.io/nkp/nutanix/nai-iam-bootstrap
naiLabs:
  labsImage:
    image: harbor.10.x.x.134.nip.io/nkp/nutanix/nai-rag-app
nai-clickhouse-keeper:
  clickhouseKeeper:
    image:
      registry: harbor.10.x.x.134.nip.io/nkp/nutanix
      repository: nai-clickhouse-keeper
oauth2-proxy:
  image:
    repository: "harbor.10.x.x.134.nip.io/nkp/nutanix/nai-oauth2-proxy"
nai-clickhouse-server:
  clickhouse:
    image:
      registry: harbor.10.x.x.134.nip.io/nkp/nutanix
      repository: nai-clickhouse-server
  initContainers:
    addUdf:
      image:
        registry: harbor.10.x.x.134.nip.io/nkp/nutanix
        repository: nai-clickhouse-udf
    waitForKeeper:
      image:
        registry: harbor.10.x.x.134.nip.io/nkp/nutanix
        repository: nai-jobs
nai-clickhouse-schemas:
  image:
    registry: harbor.10.x.x.134.nip.io/nkp/nutanix
    repository: nai-clickhouse-schemas
naiMonitoring:
  opentelemetry:
    collectorImage: harbor.10.x.x.134.nip.io/nkp/nutanix/nai-opentelemetry-collector-contrib:0.136.0
  targetAllocator:
    image:
      repository: harbor.10.x.x.134.nip.io/nkp/nutanix/nai-target-allocator
naiDatabase:
  naiDbImage:
    image: harbor.10.x.x.134.nip.io/nkp/nutanix/nai-postgres:16.1-alpine
```
- Append the following environment variables to the `$HOME/airgap-nai/.env` file and save it.
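For reference, the expanded nai-core helm command shown in this lab implies values like the following. Treat them as illustrative examples inferred from that output, not canonical settings; adjust them for your environment:

```shell
# Example values only -- inferred from the rendered nai-core helm command in
# this lab (nai-nfs-storage, nutanix-volume, kommander). Adjust per environment.
export NAI_API_RWX_STORAGECLASS=nai-nfs-storage      # RWX storage class backed by Nutanix Files
export NAI_DEFAULT_RWO_STORAGECLASS=nutanix-volume   # default RWO storage class
export NKP_WORKSPACE_NAMESPACE=kommander             # workspace namespace on the management cluster
```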
- Install the nai-core helm chart in the `nai-system` namespace:
```shell
helm upgrade --install nai-core oci://${REGISTRY_HOST}/nai-core --version=2.5.0 \
  -n nai-system --create-namespace --wait \
  --set imagePullSecret.credentials.username=${REGISTRY_USERNAME} \
  --set imagePullSecret.credentials.email=${REGISTRY_USERNAME} \
  --set imagePullSecret.credentials.password=${REGISTRY_PASSWORD} \
  --insecure-skip-tls-verify \
  --set naiApi.storageClassName=${NAI_API_RWX_STORAGECLASS} \
  --set defaultStorageClassName=${NAI_DEFAULT_RWO_STORAGECLASS} \
  --set naiMonitoring.nodeExporter.serviceMonitor.namespaceSelector.matchNames[0]=${NKP_WORKSPACE_NAMESPACE} \
  --set naiMonitoring.dcgmExporter.serviceMonitor.namespaceSelector.matchNames[0]=${NKP_WORKSPACE_NAMESPACE} \
  --set naiMonitoring.opentelemetry.common.resources.requests.cpu=0.1 \
  -f nai-core-override-values.yaml \
  --set nai-clickhouse-keeper.clickhouseKeeper.resources.limits.memory=1Gi \
  --set nai-clickhouse-keeper.clickhouseKeeper.resources.requests.memory=1Gi
```

For example, with the variables expanded:

```shell
helm upgrade --install nai-core oci://harbor.apj-cxrules.win/nkp/nai-core --version=2.5.0 \
  -n nai-system --create-namespace --wait \
  --set imagePullSecret.credentials.username=admin \
  --set imagePullSecret.credentials.email=admin \
  --set imagePullSecret.credentials.password=_XXXXXXXXXX \
  --insecure-skip-tls-verify \
  --set naiApi.storageClassName=nai-nfs-storage \
  --set defaultStorageClassName=nutanix-volume \
  --set naiMonitoring.nodeExporter.serviceMonitor.namespaceSelector.matchNames[0]=kommander \
  --set naiMonitoring.dcgmExporter.serviceMonitor.namespaceSelector.matchNames[0]=kommander \
  --set naiMonitoring.opentelemetry.common.resources.requests.cpu=0.1 \
  -f nai-core-override-values.yaml \
  --set nai-clickhouse-keeper.clickhouseKeeper.resources.limits.memory=1Gi \
  --set nai-clickhouse-keeper.clickhouseKeeper.resources.requests.memory=1Gi
```
- Check if all NAI operator pods are running:

```text
Active namespace is "nai-system".
NAME                                                     READY   STATUS      RESTARTS   AGE
chi-nai-clickhouse-server-chcluster1-0-0-0               1/1     Running     0          2m41s
chk-nai-clickhouse-keeper-chkeeper-0-0-0                 1/1     Running     0          2m24s
iam-database-bootstrap-puuxv-2zcgr                       0/1     Completed   0          2m55s
iam-proxy-7cd5489d49-k4hx9                               1/1     Running     0          2m55s
iam-proxy-control-plane-6cc94cbf9c-dzvvt                 1/1     Running     0          2m55s
iam-themis-857f4db466-j4zcb                              1/1     Running     0          2m55s
iam-themis-bootstrap-labqc-pvlc9                         0/1     Completed   0          2m55s
iam-ui-587c6b44bb-sbbvr                                  1/1     Running     0          2m55s
iam-user-authn-64776599c-7jl79                           1/1     Running     0          2m55s
nai-api-79d496bb9b-llknr                                 1/1     Running     0          2m55s
nai-api-db-migrate-diuy5-sxwmk                           0/1     Completed   0          2m55s
nai-clickhouse-schema-job-1772077473-ztd7b               0/1     Completed   0          2m55s
nai-db-0                                                 1/1     Running     0          2m55s
nai-iep-model-controller-664f759dcf-62cvb                1/1     Running     0          2m55s
nai-labs-85c86d45f8-vs2mt                                1/1     Running     0          2m55s
nai-oauth2-proxy-64cb4fcdf5-fksgw                        1/1     Running     0          2m55s
nai-oidc-client-registration-rgmqv-c9zsx                 0/1     Completed   0          2m55s
nai-operators-nai-clickhouse-operator-67bb54cf48-47xdf   2/2     Running     0          64m
nai-otel-collector-collector-bfrkn                       1/1     Running     0          2m53s
nai-otel-collector-collector-ctr9h                       1/1     Running     0          2m53s
nai-otel-collector-collector-dn5kc                       1/1     Running     0          2m53s
nai-otel-collector-collector-f5pxd                       1/1     Running     0          2m53s
nai-otel-collector-collector-gf7t9                       1/1     Running     0          2m53s
nai-otel-collector-collector-lk7fg                       1/1     Running     0          2m53s
nai-otel-collector-collector-s7r4z                       1/1     Running     0          2m53s
nai-otel-collector-targetallocator-6c76477c9c-m4zhq      1/1     Running     0          2m53s
nai-ui-89c96b5ff-s5scb                                   1/1     Running     0          2m55s
redis-standalone-8568f5c645-t2sqm                        2/2     Running     0          64m
```
Install SSL Certificate and Gateway Elements
In this section we will install an SSL certificate to access the NAI UI. This is required because the endpoint only works over HTTPS with a valid certificate.
NAI UI is accessible using the Ingress Gateway.
The following steps show how cert-manager can be used to generate a self-signed certificate using the default selfsigned-issuer present in the cluster.
If you are using Public Certificate Authority (CA) for NAI SSL Certificate
If your organization generates certificates using a different mechanism, obtain the certificate and key and create a Kubernetes secret manually using the following command:
Skip the steps in this section to create a self-signed certificate resource.
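The secret-creation command is elided above. For a certificate and key already on disk, it would look something like the following; `cert.pem` and `key.pem` are placeholder file names for your CA-issued pair:

```shell
# Hypothetical example: create the TLS secret (named nai-cert, matching the
# certificate resource used later) from an existing certificate/key pair.
TLS_SECRET_CMD="kubectl -n nai-system create secret tls nai-cert --cert=cert.pem --key=key.pem"
echo "${TLS_SECRET_CMD}"   # run on the jumphost with the real file paths
```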
- Get the NAI UI ingress gateway host using the following command:

```shell
NAI_UI_ENDPOINT=$(kubectl get svc -n envoy-gateway-system \
  -l "gateway.envoyproxy.io/owning-gateway-name=nai-ingress-gateway,gateway.envoyproxy.io/owning-gateway-namespace=nai-system" \
  -o jsonpath='{.items[0].status.loadBalancer.ingress[0].ip}' | grep -v '^$' || \
kubectl get svc -n envoy-gateway-system \
  -l "gateway.envoyproxy.io/owning-gateway-name=nai-ingress-gateway,gateway.envoyproxy.io/owning-gateway-namespace=nai-system" \
  -o jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}')
```
- Get the value of the `NAI_UI_ENDPOINT` environment variable.
- We will use the command output (e.g. `10.x.x.216`) as the IP address for NAI, as reserved in this section.
- Construct the FQDN of the NAI UI using nip.io; we will use this FQDN as the certificate's Common Name (CN).
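As a concrete illustration of the nip.io construction (the IP is the example value from above):

```shell
# Build the certificate CN from the reserved gateway IP using nip.io wildcard DNS.
NAI_UI_ENDPOINT="10.x.x.216"                    # example value captured earlier
NAI_UI_FQDN="nai.${NAI_UI_ENDPOINT}.nip.io"     # used as the cert Common Name
echo "${NAI_UI_FQDN}"                           # → nai.10.x.x.216.nip.io
```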
- Create the ingress resource certificate using the following command:

```shell
cat << EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: nai-cert
  namespace: nai-system
spec:
  issuerRef:
    name: selfsigned-issuer
    kind: ClusterIssuer
  secretName: nai-cert
  commonName: nai.${NAI_UI_ENDPOINT}.nip.io
  dnsNames:
    - nai.${NAI_UI_ENDPOINT}.nip.io
  ipAddresses:
    - ${NAI_UI_ENDPOINT}
EOF
```
- Patch the Envoy gateway with the `nai-cert` certificate details.
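The patch command is elided above. Assuming a Gateway API HTTPS listener, the payload would be shaped roughly like this; the listener name, port, and layout are assumptions to check against `kubectl get gateway nai-ingress-gateway -n nai-system -o yaml` before applying:

```shell
# Hypothetical patch payload attaching the nai-cert secret to an HTTPS listener.
# Merge-patching replaces the whole listeners array, so mirror your gateway's
# existing listeners when applying this for real.
PATCH='{"spec":{"listeners":[{"name":"https","protocol":"HTTPS","port":443,"tls":{"mode":"Terminate","certificateRefs":[{"kind":"Secret","name":"nai-cert"}]}}]}}'
echo "${PATCH}"
# apply on the jumphost with:
#   kubectl patch gateway nai-ingress-gateway -n nai-system --type merge -p "${PATCH}"
```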
- Create the EnvoyProxy resource.
- Patch the `nai-ingress-gateway` resource with the new `EnvoyProxy` details.
Accessing the UI
- In a browser, open the following URL to connect to the NAI UI.
- Use the `${NAI_USER}` and `${NAI_TEMP_PASS}` values set in the `${ENVIRONMENT}-values.yaml` files during the helm installation of NAI v2.5.0.
- Change the password for the `admin` user.
- Login using the `admin` user and the new password.
Download Model
We will download and use the Llama 3.1 8B model, which we sized for in the previous section.
- In the NAI GUI, go to Models
- Click on Import Model from Hugging Face
- Choose the `meta-llama/Meta-Llama-3.1-8B-Instruct` model.
- Input your Hugging Face token that was created in the previous section and click Import.
- Provide the Model Instance Name as `Meta-Llama-3.1-8B-Instruct` and click Import.
- Go to the VSC Terminal to monitor the download.

Get the jobs in the `nai-admin` namespace:

```shell
kubens nai-admin
kubectl get jobs
```

```text
✔ Active namespace is "nai-admin"
NAME                                       COMPLETIONS   DURATION   AGE
nai-c0d6ca61-1629-43d2-b57a-9f-model-job   0/1           4m56s      4m56s
```

Validate creation of the pod and PVC:

```shell
kubectl get po,pvc
```

```text
NAME                                             READY   STATUS    RESTARTS   AGE
nai-c0d6ca61-1629-43d2-b57a-9f-model-job-9nmff   1/1     Running   0          4m49s

NAME                                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
nai-c0d6ca61-1629-43d2-b57a-9f-pvc-claim   Bound    pvc-a63d27a4-2541-4293-b680-514b8b890fe0   28Gi       RWX            nai-nfs-storage   <unset>                 2d
```

Verify the download of the model using the pod logs:

```shell
kubectl logs -f nai-c0d6ca61-1629-43d2-b57a-9f-model-job-9nmff
```

```text
/venv/lib/python3.9/site-packages/huggingface_hub/file_download.py:983: UserWarning: Not enough free disk space to download the file. The expected file size is: 0.05 MB. The target location /data/model-files only has 0.00 MB free disk space. warnings.warn(
tokenizer_config.json: 100%|██████████| 51.0k/51.0k [00:00<00:00, 3.26MB/s]
tokenizer.json: 100%|██████████| 9.09M/9.09M [00:00<00:00, 35.0MB/s]
model-00004-of-00004.safetensors: 100%|██████████| 1.17G/1.17G [00:12<00:00, 94.1MB/s]
model-00001-of-00004.safetensors: 100%|██████████| 4.98G/4.98G [04:23<00:00, 18.9MB/s]
model-00003-of-00004.safetensors: 100%|██████████| 4.92G/4.92G [04:33<00:00, 18.0MB/s]
model-00002-of-00004.safetensors: 100%|██████████| 5.00G/5.00G [04:47<00:00, 17.4MB/s]
Fetching 16 files: 100%|██████████| 16/16 [05:42<00:00, 21.43s/it]
## Successfully downloaded model_files
Deleting directory : /data/hf_cache
```
- Optional: verify the events in the namespace for the PVC creation:

```shell
kubectl get events | awk '{print $1, $3}'
```

```text
3m43s Scheduled
3m43s SuccessfulAttachVolume
3m36s Pulling
3m29s Pulled
3m29s Created
3m29s Started
3m43s SuccessfulCreate
90s Completed
3m53s Provisioning
3m53s ExternalProvisioning
3m45s ProvisioningSucceeded
3m53s PvcCreateSuccessful
3m48s PvcNotBound
3m43s ModelProcessorJobActive
90s ModelProcessorJobComplete
```
The model is downloaded to the Nutanix Files PVC volume.
After a successful model import, you will see it in Active status in the NAI UI under the Models menu.

Create and Test Inference Endpoint
In this section we will create an inference endpoint using the downloaded model.
- Navigate to Inference Endpoints menu and click on Create Endpoint button
- Fill in the following details:
  - Endpoint Name: `llama-8b`
  - Model Instance Name: `Meta-Llama-3.1-8B-Instruct`
  - Use GPUs for running the models: Checked
  - No of GPUs (per instance):
  - GPU Card: `NVIDIA-L40S` (or other available GPU)
  - No of Instances: `1`
  - API Keys: Create a new API key or use an existing one
- Click on Create.
- Monitor the `nai-admin` namespace to check if the services are coming up.
- Check the events in the `nai-admin` namespace to make sure all resources are created:

```shell
kubectl get events -n nai-admin --sort-by='.lastTimestamp' | awk '{print $1, $3, $5}'
```

```text
110s FinalizerUpdate Updated
110s FinalizerUpdate Updated
110s RevisionReady Revision
110s ConfigurationReady Configuration
110s LatestReadyUpdate LatestReadyRevisionName
110s Created Created
110s Created Created
110s Created Created
110s InferenceServiceReady InferenceService
110s Created Created
```
- Once the services are running, check the status of the inference service.
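A minimal status check can be sketched as follows, assuming the endpoint lands in the `nai-admin` namespace as shown later in this guide (guarded so the snippet is inert without cluster access):

```shell
# Check InferenceService readiness; READY should report True once serving pods are up.
if kubectl get isvc -n nai-admin >/dev/null 2>&1; then
  ISVC_CHECK=ok
  kubectl get isvc -n nai-admin
else
  ISVC_CHECK=skipped   # no kubectl/cluster access in this shell
fi
echo "isvc check: ${ISVC_CHECK}"
```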
Troubleshooting Endpoint ISVC
TGI Image and Self-signed Certificates
Only follow this procedure if the `isvc` is not starting up.
KNative Serving Image Tag Checking
From testing, we have identified that the KServe module ensures there are no container image tag discrepancies by pulling images using their SHA digest. This is done to avoid pulling images that were updated without updating the tag.
We avoid this behavior by patching the config-deployment config map in the knative-serving namespace to skip image tag checking. Check the Prepare for NAI Deployment section for more details.
kubectl patch configmap config-deployment -n knative-serving --type merge -p "{\"data\":{\"registries-skipping-tag-resolving\":\"${REGISTRY_HOST}\"}}"
If this procedure was not followed, then the isvc will not start up.
- If the `isvc` is not coming up, explore the events in the `nai-admin` namespace:

```shell
kubectl get isvc
```

```text
NAME      URL                                          READY   PREV   LATEST   PREVROLLEDOUTREVISION   LATESTREADYREVISION   AGE
llama8b   http://llama8b.nai-admin.svc.cluster.local   False
```

```shell
kubectl get events --sort-by='.lastTimestamp'
```

```text
Warning InternalError revision/llama8b-predictor-00001 Unable to fetch image "harbor.10.x.x.111.nip.io/nkp/nutanix/nai-tgi:2.3.1-825f39d": failed to resolve image to digest: Get "https://harbor.10.x.x.111.nip.io/v2/": tls: failed to verify certificate: x509: certificate signed by unknown authority
```

The temporary workaround is to use the TGI image's SHA digest from the container registry.
This site will be updated with resolutions for the above issues in the future.
- Note the TGI image SHA digest from the container registry:

```shell
docker pull harbor.10.x.x.111.nip.io/nkp/nutanix/nai-tgi:2.3.1-825f39d
```

```text
2.3.1-825f39d: Pulling from nkp/nutanix/nai-tgi
Digest: sha256:2df9fab2cf86ab54c2e42959f23e6cfc5f2822a014d7105369aa6ddd0de33006
Status: Image is up to date for harbor.10.x.x.111.nip.io/nkp/nutanix/nai-tgi:2.3.1-825f39d
harbor.10.x.x.111.nip.io/nkp/nutanix/nai-tgi:2.3.1-825f39d
```
- The SHA digest will look like the following:
- Create a copy of the `isvc` manifest.
- Edit the `isvc`.
- Search for and replace the image tag with the SHA digest from the TGI image.
- After replacing the image's SHA digest, the image value should look as follows:
- Save the `isvc` configuration by writing the changes to the file and exiting the vi editor using the `:wq!` key combination.
- Verify that the `isvc` is running.
This should resolve the issue with the TGI image.
Report Other Issues
If you are facing any other issues, please report them on the NAI LLM GitHub repo Issues page.