Microsoft Warns Employees Not to Share Sensitive Data with ChatGPT
Microsoft has warned its employees not to share sensitive data with ChatGPT, the artificially intelligent (AI) chatbot from OpenAI. Employees of the American multinational tech giant had asked in an internal forum whether ChatGPT or other OpenAI tools were appropriate to use at work, Business Insider reported.
In response to that inquiry, a senior engineer from Microsoft's CTO office said employees could use ChatGPT, provided they did not share confidential information with the AI chatbot.
“Please don’t send sensitive data to an OpenAI endpoint, as they may use it for training future models,” the senior engineer wrote in an internal post, per Insider.
ChatGPT, publicly available for only two months, is already raising concerns in the academic sector. Microsoft has become a partner of OpenAI, the company behind ChatGPT, and has confirmed an investment reported at ten billion dollars.
Microsoft is planning to integrate OpenAI’s technology into its products, including the Bing search engine and other software, to enhance their capabilities, as reported previously.
Microsoft's chief worry about "sensitive information" likely concerns employees sharing internal software code with the chatbot and asking it to check or advise on that code.
Amazon Shares the Same Concern
ChatGPT has made headlines continuously since its launch last November, but it has also faced bans, especially in the academic sector, where students have used it to cheat on schoolwork. Recently, tech giants have raised their own concerns over its use.
Amazon warned its employees to beware of ChatGPT last week, as reported by Insider. According to Insider, an Amazon lawyer urged employees not to share code with ChatGPT in an internal communication.
“This is important because your inputs may be used as training data for a further iteration of ChatGPT, and we wouldn’t want its output to include or resemble our confidential information (and I’ve already seen instances where its output closely matches existing material),” the lawyer wrote.
In the Slack message, the lawyer placed particular emphasis on asking employees not to share "any Amazon confidential information" (including Amazon code they are working on) with ChatGPT.
Personal Data Concerns
However, it is nearly impossible for OpenAI to identify and remove all personal information from the data provided to ChatGPT, says Emily Bender, who teaches computational linguistics at the University of Washington.
“OpenAI is far from transparent about how they use the data, but if it’s being folded into training data, I would expect corporations to wonder: After a few months of widespread use of ChatGPT, will it become possible to extract private corporate information with cleverly crafted prompts?” said Bender.
Vincent Conitzer, a computer science professor and director of an AI lab at Carnegie Mellon University, said, “All of us together are going to have to figure out what should be expected of everyone in these situations. Is the responsibility on employees to not share sensitive information, or is the responsibility on OpenAI to use information carefully, or some combination?”
This article originally appeared on MetaNews.