How to Run NVIDIA Chat with RTX - Locally Run an LLM AI Chatbot
If you are looking for a way to run an LLM (Large Language Model) AI chatbot locally, NVIDIA has recently released Chat with RTX, an easy-to-use LLM AI chatbot that runs locally on any Windows 10 or 11 computer. The only requirements are at least 16 GB of RAM and an RTX 30-series or 40-series GPU. So follow along as we guide you through the Chat with RTX setup process and how to use Chat with RTX on a Windows PC.
How to Download and Install Chat with RTX on Windows
Chat with RTX by NVIDIA is an innovative AI chatbot tool that runs entirely on your PC, with no cloud connection required. In this guide, we'll walk you through the process of installing and using Chat with RTX.
- Visit the NVIDIA website to download Chat with RTX.
- The download is around 35 GB, which is quite big, so make sure you have sufficient space on your PC; at least 70-80 GB of free space is a good start.
- Once the download is complete, extract the files and run the executable (.exe) file.
- Follow the on-screen instructions, agreeing to the terms and selecting your installation preferences.
Note: Let it install to the default directory. I tried to install it in a different location on a separate drive and it kept failing to install.
- The installation process may take some time, potentially up to an hour, as it downloads additional components. If you see any error messages about "python.exe - Entry Point Not Found", just click OK to dismiss them. You might see quite a few.
- Once the Python errors are out of the way, the command window will process a lot more data and then open the tool. If the tool doesn't open, you will see a URL you can copy and paste into your browser to get access. It will look something like the example below:
http://127.0.0.1:25253?cookie=d362cbc5-5952-41e0-8246-f5cdb720c927&__theme=dark
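If you'd rather confirm the local server is actually running before opening the URL, the short Python sketch below checks it and then opens it in your default browser. This is just a convenience sketch using the standard library; the port and cookie shown are the example values from above, so substitute the URL your own command window prints.

```python
import urllib.request
import webbrowser

# Replace this with the URL shown in your command window -- the port and
# cookie below are only the example values from this guide.
local_url = "http://127.0.0.1:25253?cookie=d362cbc5-5952-41e0-8246-f5cdb720c927&__theme=dark"

try:
    # If the local server answers at all, the UI is up and safe to open.
    with urllib.request.urlopen(local_url, timeout=5) as response:
        print(f"Chat with RTX UI is reachable (HTTP {response.status})")
    webbrowser.open(local_url)  # open the UI in the default browser
except OSError as exc:
    print(f"Could not reach the local UI yet: {exc}")
```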
Setting Up Chat with RTX on Windows
- Launch the Chat with RTX application and accept the first prompt about Python. If it doesn't automatically launch, you can open it by pasting the URL from the command window into your browser.
- You'll be prompted to provide a local folder path. If it doesn't ask, you can manually add a path using the Dataset section. This is where the AI will analyze files, so choose a folder containing relevant documents such as Word, PDF, or text files. I suggest setting up one dedicated folder to use rather than giving it access to an entire drive (see the sketch after this step). Allow the program some time to analyze the selected folder, as this process may take a while depending on the size of the files. If you don't set a folder, it will simply answer from its base AI model without referencing any of your files.
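If you want to follow the dedicated-folder advice, a small script can collect the relevant files for you before you point Chat with RTX at the folder. This is a minimal sketch using only the Python standard library; the folder names are hypothetical placeholders, not anything Chat with RTX requires.

```python
import shutil
from pathlib import Path

# Hypothetical paths: gather Word, PDF, and text files from your Documents
# folder into one dedicated dataset folder for Chat with RTX to index.
source_dir = Path.home() / "Documents"             # where your files live now
dataset_dir = Path.home() / "ChatWithRTX-Dataset"  # the folder you point the app at
dataset_dir.mkdir(exist_ok=True)

# Chat with RTX works with document formats such as .txt, .pdf, and .docx.
for extension in (".txt", ".pdf", ".docx", ".doc"):
    for file in source_dir.rglob(f"*{extension}"):
        shutil.copy2(file, dataset_dir / file.name)  # copy, don't move
        print(f"Added {file.name} to the dataset folder")
```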
How to Use Chat with RTX on Windows
- Once the setup is complete, you'll see a text box where you can interact with the AI.
- Type your queries or commands into the text box, such as searching for specific terms within your documents.
- For example, you can ask whether certain keywords like "Hotmail" appear within your documents, and the AI will provide relevant results (a rough idea of this kind of lookup is sketched after this list).
- Chat with RTX can also be used to analyze videos. Simply click the Dataset section, paste the URL of a YouTube video, and the AI will provide a condensed transcript of its contents.
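To make the keyword example above a little more concrete, here is a rough standard-library sketch of the kind of "which of my documents mention X" lookup you would otherwise do by hand. Chat with RTX's answers come from a local LLM with retrieval over your files rather than from this simple matching, and the folder name is the hypothetical one from the earlier sketch.

```python
from pathlib import Path

# Hypothetical dataset folder from the earlier sketch.
dataset_dir = Path.home() / "ChatWithRTX-Dataset"
keyword = "Hotmail"

# Plain-text files only, for simplicity; the app itself also reads PDF and Word.
for file in dataset_dir.glob("*.txt"):
    text = file.read_text(encoding="utf-8", errors="ignore")
    if keyword.lower() in text.lower():
        print(f"'{keyword}' appears in {file.name}")
```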
Important Information about NVIDIA Chat with RTX
While Chat with RTX is decent, you'll probably get bored with it pretty quickly, but I do believe NVIDIA has some pretty big plans for it in the future. It's the kind of thing I can see them turning into a virtual assistant, something along the lines of what Cortana was supposed to be all those years ago. As we mentioned at the very start of this guide, Chat with RTX is compatible with Windows 10 and 11, though Windows 11 is preferred, and it requires an RTX 30-series or 40-series NVIDIA GPU. You'll also need the latest NVIDIA drivers installed. For the best results, you'll want hardware with at least 8 GB of VRAM and 80 GB of free storage space.
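If you're not sure whether your machine meets those requirements, you can query the driver directly. Below is a minimal Python sketch that calls nvidia-smi (installed alongside the NVIDIA driver, and assumed here to be on your PATH) and flags anything under the 8 GB VRAM guideline mentioned above.

```python
import subprocess

# Ask the driver for the GPU name and total VRAM.
result = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)

for line in result.stdout.strip().splitlines():
    name, memory = (part.strip() for part in line.split(","))
    print(f"GPU: {name}, VRAM: {memory}")
    vram_mib = int(memory.split()[0])  # memory is reported like "8192 MiB"
    if vram_mib < 8192:
        print("Warning: less than 8 GB of VRAM; Chat with RTX may not run well.")
```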
Can't Install Chat with RTX Error: NVIDIA Installer Failed - Mistral 7B INT4 Failed
At this stage, there is no working solution for this problem; we are just waiting for NVIDIA to fix this error message. I did have random success: I tried to install it three times, and on the third attempt it installed fully, so give it a go.
Reference: The Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.1 generative text model using a variety of publicly available conversation datasets.