NVIDIA has just updated its ChatRTX AI chatbot with support for new LLMs, new media search abilities, and speech recognition technology. Check it out:
The latest version of ChatRTX supports more LLMs, including Gemma, the latest open, local LLM from Google. Gemma was built from the same research and technology that Google used to create its Gemini models, and is designed for responsible AI development.
ChatRTX now supports ChatGLM3, an open, bilingual (English and Chinese) LLM based on the general language model framework. The updated version of ChatRTX also lets users interact with image data through OpenAI's Contrastive Language-Image Pre-training (CLIP). CLIP is a neural network that, as NVIDIA explains, learns visual concepts from natural language supervision through training and refinement, making it a model that recognizes what the AI is "seeing" in image collections.
- ChatRTX adds to its growing list of supported LLMs, including Gemma, Google’s latest LLM, and ChatGLM3, an open, bilingual (English and Chinese) LLM, providing users with additional flexibility.
- New photo support enables ChatRTX users to easily search and interact locally with their photo data without the need for complex metadata labeling, thanks to OpenAI’s Contrastive Language-Image Pre-training (CLIP).
- ChatRTX users can now speak with their data, thanks to added support for Whisper, an AI-powered automatic speech recognition system that enables ChatRTX to understand spoken requests.
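To make the CLIP idea above concrete, here is a minimal conceptual sketch of how a CLIP-style model matches photos to text without metadata labels: images and captions are mapped into one shared embedding space, and cosine similarity scores how well each caption describes an image. The tiny hand-made vectors below are illustrative stand-ins for the roughly 512-dimensional embeddings a real CLIP model produces; none of this is NVIDIA's or OpenAI's actual code.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_caption(image_embedding, caption_embeddings):
    """Return the caption whose embedding lies closest to the image's."""
    return max(caption_embeddings,
               key=lambda c: cosine_similarity(image_embedding, caption_embeddings[c]))

# Toy embeddings standing in for real CLIP output vectors.
captions = {
    "a photo of a dog":   [0.9, 0.1, 0.0],
    "a photo of a beach": [0.1, 0.8, 0.3],
    "a photo of a city":  [0.0, 0.2, 0.9],
}
image = [0.85, 0.15, 0.05]  # pretend embedding of a dog photo

print(best_caption(image, captions))  # prints "a photo of a dog"
```

This is why no complex metadata labeling is needed: the search query itself is embedded and compared against the photos directly.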
On the speech recognition side of things, ChatRTX now lets you use your voice through support for Whisper, an automatic speech recognition system that uses AI to process spoken language. Users can send voice requests to the application, and ChatRTX replies with text-based responses.
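The voice-request flow described above boils down to two stages: speech recognition turns the audio into text, then the chat model answers that text. A hedged sketch of that pipeline, where `fake_transcribe` and `fake_answer` are hypothetical stand-ins for the real Whisper and LLM calls:

```python
def voice_query(audio, transcribe, answer):
    """Turn a voice recording into a text response:
    speech recognition first, then the chat model."""
    question = transcribe(audio)  # e.g. Whisper: audio -> text
    return answer(question)      # e.g. local LLM: text -> text

# Stub components standing in for Whisper and the chat model:
fake_transcribe = lambda audio: "what is in my vacation photos"
fake_answer = lambda text: f"Searching your files for: {text}"

print(voice_query(b"<wav bytes>", fake_transcribe, fake_answer))
# prints "Searching your files for: what is in my vacation photos"
```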
You can download ChatRTX right here (11.6GB download).