AI is here, and how we work will never be the same. Some people are ignoring OpenAI’s ChatGPT and pretending it doesn’t exist. Others are diving in headfirst, outsourcing as much of their tedious work to AI as possible. At first glance, the latter seems like the better option; after all, AI isn’t going away any time soon, so what’s the point of ignoring it? But you still need to be careful… as Samsung found out the hard way.
The South Korean manufacturing conglomerate recently experienced not one, not two, but three leaks of confidential data. The culprits? Employees who wanted ChatGPT to make their lives easier. Two employees asked ChatGPT to help them fix buggy code, while another shared internal meeting content with the large language model.
If these leaks had occurred in a pre-AI world, the fix would have been simple: Take down the leaked data. Sure, someone might have copied the data and shared it elsewhere, but at least you could remove the original.
But mid-AI revolution, the fix isn’t so simple. Scratch that: there is no fix. As OpenAI very clearly states on its FAQ page:
“No, we are not able to delete specific prompts from your history. Please don’t share any sensitive information in your conversations.”
Moreover, anything you put into ChatGPT (or GPT-4, for that matter) is fair game for OpenAI to use in training its other AI services. In other words, every time you use ChatGPT, you’re training it to take over your job.
So Samsung, unfortunately, has no way to do damage control on its leaked data. This is especially embarrassing for a company that only recently started letting employees use ChatGPT, after previously banning it over concerns that they would leak confidential information.
Samsung is a big company with a big solution: it plans to build its own in-house AI models for employees to use.
But if you’re running a small business, you probably don’t have the resources to build your own AI model. So instead, get ahead of the problem by creating company policies that spell out how employees are permitted to use emerging AI tools. Employees who know exactly what they’re risking by inputting confidential data into ChatGPT will, hopefully, be less tempted to do so.