Nvidia has released a free tool you can use to create a custom chatbot that quickly searches your computer for answers to your questions — all while ensuring your private data stays private.
The challenge: Large language models (LLMs) learn to understand and generate text written in natural language by “reading” tons of data. The LLM behind OpenAI’s ChatGPT, for example, was trained on text pulled from the internet, as well as other sources.
Because an LLM’s “knowledge” is limited to the content included in its training data, the original ChatGPT couldn’t speak with authority on anything that happened after the cutoff date for its training data (September 2021).
A custom chatbot: Nvidia — currently the third most valuable tech company in the world by market cap — has now released a free demo of a tool called Chat with RTX that lets you easily customize an open-source LLM, such as Meta’s Llama, with text files and videos.
You can give your custom chatbot access to a folder of PDFs on your computer, for example, and then ask it questions related to their content. If you feed it a link to a YouTube playlist, it can hunt through the videos’ transcripts for answers to your questions about the clips.
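Under the hood, tools like this typically pair the LLM with a retrieval step: your documents are split into chunks, the chunks most relevant to your question are found, and only those are handed to the model as context. Here’s a toy sketch of that retrieval step — the document contents are made up, and real tools use embedding models rather than the simple word-overlap scoring shown here:

```python
# Toy sketch of retrieval over local documents.
# Real tools (including, presumably, Chat with RTX) use embedding models;
# here we score chunks by simple word overlap for illustration only.

def chunk(text, size=40):
    """Split text into chunks of roughly `size` words."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(question, passage):
    """Count how many question words appear in the passage (case-insensitive)."""
    return len(set(question.lower().split()) & set(passage.lower().split()))

def retrieve(question, documents, top_k=2):
    """Return the top_k chunks most relevant to the question."""
    chunks = [c for doc in documents for c in chunk(doc)]
    return sorted(chunks, key=lambda c: score(question, c), reverse=True)[:top_k]

# Hypothetical contents of two local files:
docs = [
    "The quarterly report shows revenue grew 12 percent year over year.",
    "Meeting notes: the launch date moved to March due to driver issues.",
]

print(retrieve("When is the launch date?", docs, top_k=1))
```

The chatbot would then prepend the retrieved chunks to your question before sending it to the LLM, so the model can answer from your files rather than from its training data alone.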
While you could replicate this to an extent with ChatGPT — by copying and pasting text from a personal file into a chat before asking questions about it, for example — ChatGPT does all of its processing in the cloud, meaning you’d risk someone else gaining access to the information.
Besides that, cloud-based AIs usually have hard limits on how much text you can include in a prompt at any given time, so even a single long PDF might be too much for the model to read.
Chat with RTX is different. It’s free, so you don’t need a subscription, and it runs directly on your Windows PC. That not only protects your privacy, but can potentially lead to faster answers, as you aren’t beholden to busy servers.
The cold water: Chat with RTX can’t run on just any PC. Your system will need to meet Nvidia’s hardware requirements: “In addition to a GeForce RTX 30 Series GPU or higher with a minimum 8GB of VRAM, Chat with RTX requires Windows 10 or 11, and the latest NVIDIA GPU drivers.”
Early reviews suggest the tool is still a bit buggy, too.
When one reviewer fed their custom chatbot a video link, it downloaded a transcript for a different video, and in another review, it answered a question correctly, but cited the wrong source for its answer. Imperfections are to be expected with a free demo, though.
The big picture: Nvidia is already a key player in the AI revolution. As of February 2023, it made an estimated 95% of the graphics cards used to train and deploy chatbots, and more recently, it’s been developing and releasing hardware purpose-built for running generative AIs locally.
While Nvidia has released generative AI software before, it’s been geared toward enterprise customers. If the company keeps developing Chat with RTX, future versions could be hugely appealing to individuals looking for a safer, faster, cheaper AI.
We’d love to hear from you! If you have a comment about this article or if you have a tip for a future Freethink story, please email us at [email protected].