Less than three weeks after Samsung lifted its ban on the AI chatbot ChatGPT, employees have reportedly leaked sensitive company information to the chatbot.
The ban, which had been in place to protect the company’s data, was lifted on March 11 to improve productivity and let staff keep up with the latest tech tools.
However, Samsung employees have accidentally leaked the company’s own secrets at least three times in the past days, including measurements and other confidential information about an in-development semiconductor.
As per a Korean report, the leaked secrets also include yield data from the conglomerate’s Device Solutions semiconductor business unit.
According to a company employee, in one of the three instances, a worker copied the problematic source code of a semiconductor database download program and entered it into ChatGPT to ask for a solution.
Another team member uploaded program code designed to identify defective equipment for ‘code optimization’, while a third shared a meeting recording with the bot to ‘auto-generate’ the minutes.
As the FAQs for ChatGPT clearly state, “Your conversations may be reviewed by our AI trainers to improve our systems” — meaning these leaked secrets will now be accessible to OpenAI.
Soon after the incident was reported, Samsung introduced ‘emergency measures’, including limiting the upload capacity to 1,024 bytes per question. The giant has reportedly also warned its employees: “If a similar accident occurs even after emergency information protection measures are taken, access to ChatGPT may be blocked on the company network.”
Additionally, reports suggest that Samsung is now considering building an in-house AI service to prevent such incidents in the future.