# Deploying GPT-in-a-Box NVD Reference Application using GitOps (FluxCD)
```mermaid
stateDiagram-v2
    direction LR
    state TestLLMApp {
        [*] --> CheckInferencingService
        CheckInferencingService --> TestFrontEndApp
        TestFrontEndApp --> TestRAG
        TestRAG --> [*]
    }
    [*] --> PreRequisites
    PreRequisites --> DeployLLMV1
    DeployLLMV1 --> TestLLMApp : previous section
    TestLLMApp --> [*]
```
## Accessing LLM Frontend
Once the bootstrapping is done in the previous section, we can access and test our LLM application.
- In the VSC terminal, check the status of the inferencing service:
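  A minimal sketch of the check (the `llm` namespace is an assumption; adjust it to wherever the service was deployed):

  ```sh
  # List KServe InferenceService objects; READY should show True and a URL should be populated
  kubectl get isvc -n llm
  ```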
- Access the URL reported by the inferencing service to check its status and make sure it is alive and well:
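  For example, from the terminal (`<INFERENCE_URL>` is a placeholder for the URL column in the previous command's output):

  ```sh
  # Any HTTP response (rather than a connection error) confirms the endpoint is reachable
  curl -kI <INFERENCE_URL>
  ```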
- In the VSC terminal, get the LLM Frontend ingress endpoints:
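  For example (`-A` lists ingresses across all namespaces, so no namespace needs to be assumed):

  ```sh
  # Look for the frontend entry and note its HOSTS value
  kubectl get ingress -A
  ```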
- Copy the HOSTS address `frontend.dev-cluster.10.x.x.216.nip.io` from the above output and paste it into your browser. You should see the LLM chat interface. Start asking away.
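  As a quick sanity check before opening the browser, you can probe the host from the terminal (the hostname below is the sample from the output above; plain HTTP is an assumption):

  ```sh
  # A 2xx/3xx status indicates the frontend is being served
  curl -I http://frontend.dev-cluster.10.x.x.216.nip.io
  ```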
## Testing LLM Frontend Chat App
- Type any question in the chat box. For example: `give me a python program to print the fibonacci series?`