Google’s AI assistant Bard has stirred up controversy after purportedly saying it was trained on customers’ Gmail data. Bard was developed to compete with the increasingly popular ChatGPT, which is built on the GPT-3.5 (for free users) and GPT-4 architectures and offers similar functionality. The data set the AI draws on, however, raises privacy concerns.
Bard AI, a rival to ChatGPT, accidentally suggests that it was trained on Gmail data
Microsoft researcher Kate Crawford sparked the debate around Bard by tweeting a screenshot of her conversation with the chatbot. Crawford inquired about Bard’s dataset, and the chatbot apparently responded with a list of publicly accessible datasets from sites like Wikipedia and GitHub, as well as private information from Google products like Gmail and from other independent businesses.
“Anyone feeling a little uneasy with Bard’s claim that its training dataset contains… Gmail?” Crawford tweeted. “I’m guessing that’s categorically false; otherwise, Google is seriously stepping outside the bounds of the law.”
Google has refuted the allegations, stating on Twitter that Bard is an “early experiment” and was not trained on Gmail data.
The incident has, however, brought to light the limitations of generative AI programs like Bard and ChatGPT. Both companies have issued warnings that the chatbots might not always deliver factually accurate information and might “hallucinate” facts or make logical mistakes.
The GPT-4 language model, recently released by OpenAI, the company behind ChatGPT, is said to have limitations similar to those of older GPT models, though to a lesser degree. The company advised users to apply human review, ground language model outputs with additional context, or avoid high-stakes uses altogether.