Google warns staff members not to give AI chatbots like Bard access to private information


According to reports, Google is cautioning staff members against sharing sensitive material with AI chatbots, including ChatGPT and its own in-house Bard. The caution, according to Reuters, is intended to protect private data that large language models (LLMs) like Bard and ChatGPT may absorb during training and reveal later. Human reviewers who act as moderators can also examine sensitive information. According to the report, Google developers have also been cautioned against directly using code produced by AI chatbots.

A brief on Google's warning to staff members not to give AI chatbots like Bard access to private information

According to Google Bard’s FAQ, whenever a user interacts with the chatbot, the business records usage data, feedback, and conversation history. The statement on the page is, “That data helps us provide, improve, and develop Google products, services, & machine-learning technologies.”

The report claims that Google staff members can still utilise Bard for other tasks. Google's warning differs considerably from its previous stance on Bard: employees were tasked with thoroughly testing the AI chatbot after the company released it earlier this year to compete with ChatGPT.

The caution Google issued to its staff mirrors a security policy that many businesses are adopting. Some companies prohibit the use of publicly accessible AI chatbots altogether. Samsung is reportedly one of the businesses that banned ChatGPT after discovering that certain employees had shared sensitive information with it.

Google said in a statement that it wished to be "transparent" regarding Bard's limitations. According to the company, "Bard can make undesirable code suggestions, but it still aids programmers." The AI chatbot can also quickly create pictures, analyse code, edit lengthy documents, and even write emails.

Cloudflare CEO Matthew Prince compared sharing private information with chatbots to “letting a bunch of PhD students loose in all of your private records” when discussing security issues with free-to-use AI chatbots.

Cloudflare, a company that provides cybersecurity services to organisations, promotes a feature that allows companies to tag and block specific data from flowing externally. Microsoft is also developing a private version of ChatGPT for business clients. Through their agreement, Microsoft and OpenAI are able to market and develop platforms under the ChatGPT brand. Microsoft is said to have built the private ChatGPT chatbot on its own cloud infrastructure. It is as yet unknown whether Microsoft has placed the same limitations on Bing Chat that Google has on Bard.

Yusuf Mehdi, chief marketing officer for Microsoft's consumer division, is quoted in the report as saying that "companies are taking a duly conservative standpoint." Mehdi was referring to the company's work on the private ChatGPT services for corporate clients.
