After spotting snippets of text generated by OpenAI's powerful ChatGPT tool that looked a lot like company secrets, Amazon is now warning its employees against leaking anything else to the chatbot.
According to internal Slack messages that were leaked to Insider, an Amazon lawyer told workers that they had “already seen instances” of text generated by ChatGPT that “closely” resembled internal company data.
The issue seems to have come to a head recently because Amazon staffers, like other tech workers across the industry, have begun using ChatGPT as a "coding assistant" of sorts to help them write or improve code, the report notes.
While that isn't necessarily a problem from a proprietary data perspective, it's a different story when employees feed the AI existing internal code to improve — which is already happening, according to the lawyer.
“This is important because your inputs may be used as training data for a further iteration of ChatGPT,” the lawyer wrote in the Slack messages viewed by Insider, “and we wouldn’t want its output to include or resemble our confidential information.”
Read More at Futurism