Artificial intelligence is being used to complete more and more human tasks. But as of right now, news stories you read online – including all the articles here on American Genius – have been written by real human beings.
Until recently, even the most advanced computers couldn’t be trained to reproduce the complex rules and stylistic subtleties of language. AI-generated text would often wander off topic, mangle syntax, and lack context or analysis.
However, a non-profit called OpenAI says it has developed a text generator that can simulate human writing with remarkable accuracy.
The program is called GPT2. When fed any amount of text, from a few words to a full page, it can continue the story, whether it’s a news report or a work of fiction.
You already know about video deepfakes, but these “deepfakes for text” stay on subject and match the style of the original text. For example, when fed the first line of George Orwell’s 1984, GPT2 created a science-fiction story set in a futuristic China.
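That “complete the story” behavior is autoregressive text generation: predict the next word from everything written so far, append it, and repeat. GPT2 does this with a large neural network trained on web text; the toy bigram sketch below is purely my own illustration of the looping idea, not OpenAI’s code, and is far too crude to fool anyone.

```python
import random

def train_bigrams(corpus):
    """Record which words follow which in the training text."""
    words = corpus.split()
    model = {}
    for prev, nxt in zip(words, words[1:]):
        model.setdefault(prev, []).append(nxt)
    return model

def complete(model, prompt, length=10, seed=0):
    """Extend the prompt one word at a time -- the autoregressive loop."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(length):
        choices = model.get(words[-1])
        if not choices:  # no continuation seen in training data
            break
        words.append(rng.choice(choices))
    return " ".join(words)

# Hypothetical miniature "training set" for illustration only.
corpus = ("the clock struck thirteen and the wind was cold "
          "and the clock kept ticking in the cold wind")
model = train_bigrams(corpus)
print(complete(model, "the clock", length=5))
```

Where this toy model picks the next word by counting pairs in a single sentence, GPT2 learned its next-word predictions from millions of web pages, which is why its continuations can hold a topic and a style for paragraphs at a time.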
This improved text generator is much better at simulating human writing because it learned from a dataset that is “15 times bigger and broader” than its predecessor’s, according to OpenAI research director Dario Amodei.
Usually researchers are eager to share their creations with the world – but in this case, the Elon Musk-backed organization has, at least for the time being, withheld GPT2 from the public out of fear of what criminals and other malicious users might do with it.
Jack Clark, OpenAI’s head of policy, says that the organization needs more time to experiment with GPT2’s capabilities so that they can anticipate malicious uses. “If you can’t anticipate all the abilities of a model, you have to prod it to see what it can do,” he says. “There are many more people than us who are better at thinking what it can do maliciously.”
Potential malicious uses of GPT2 include generating fake positive reviews for products (or fake negative reviews of competitors’ products), churning out spam messages, writing fake news stories that would be indistinguishable from real ones, and spreading conspiracy theories.
Furthermore, because GPT2 learns from the internet, it wouldn’t be hard to prompt it to produce hate speech and other offensive messages.
As a writer, I can’t think of very many good uses for an AI story generator that wouldn’t also put me out of a job. So I appreciate that the researchers at OpenAI are taking time to fully think through the implications before making this Pandora’s box of technology available to the general public.
Says Clark, “We are trying to develop more rigorous thinking here. We’re trying to build the road as we travel across it.”