Google Warns Employees About Chatbot Usage
Google has recently advised its employees against using chatbots such as ChatGPT or its own Bard. The decision stems from concerns that confidential information entered into these AI tools could be exposed to third parties or read by the developers who review conversations.
According to Reuters, Alphabet Inc. (Google) is cautioning its employees about the use of chatbots, including its own Bard, while simultaneously promoting the program worldwide. This move aims to prevent the potential exposure of proprietary information.
Google’s parent company has explicitly advised employees not to enter sensitive material into AI chatbots, as confirmed by both insiders and the company itself. The policy aligns with its long-standing stance on data protection.
The concern arises from the fact that information entered into AI chatbots is stored and used to train the models, which means other users could potentially access it. Researchers have found that human reviewers can read these conversations, and there is a risk that a similar AI could reproduce the absorbed data, resulting in a leak.
Furthermore, Alphabet has cautioned its engineers against directly using computer code generated by chatbots. This additional warning is intended to prevent problems that could arise from incorporating AI-generated code into the company’s software without review.
This is not the first time a company has discouraged the use of chatbots for work. In a notable case at Samsung, some employees used these AI models and inadvertently exposed confidential information to third parties. Notably, Samsung is now developing its own artificial intelligence with similar capabilities.