Google has recommended that users of its artificial intelligence chatbot, Gemini, which is positioned as a rival to ChatGPT and Bing, verify the tool’s responses for accuracy.
This is not the first time Google has cautioned about the limitations of its chatbot, Gemini. The company suggests that anyone using generative AI tools, including Gemini, should cross-check the responses against Google Search to ensure their accuracy.
Avoid Using Gemini for Factual Information
Chatbots such as Gemini and ChatGPT are sometimes known to generate incorrect responses, a phenomenon often referred to as “hallucinating.” Recognizing this issue, Google, the developer behind Gemini, encourages users to verify the accuracy of the information provided by Gemini.
Debbie Weinstein, Google’s UK head, told the BBC’s Today programme that Gemini is “not the go-to source for specific information.” Instead, Weinstein recommends using Gemini for tasks like problem-solving or brainstorming.
Indeed, AI tools like these often remind users of their tendency to create “facts” out of thin air. For example, ChatGPT displays a disclaimer on its homepage, warning that it might produce incorrect information about people, places, or events. Similarly, Gemini alerts its users to its limitations and the potential for inaccuracies.
Gemini gave an incorrect response during its debut demonstration in February. Following limited-scope tests, the chatbot has recently become available in English as well. Google’s caution regarding the use of chatbots is not new.
Last month, Alphabet, Google’s parent company, advised its employees to be careful when using tools like Gemini, particularly warning against entering sensitive information into AI tools. The company also directed its engineers not to directly use code generated by these services.