Tech News

A deepfakes creator for text so realistic it can’t be made public yet

(TECHNOLOGY) You know about video deepfakes, but the technology now exists to create convincing deepfakes for text. It’s so good that its creators aren’t ready to release it to the public yet…

Artificial intelligence is being used to complete more and more human tasks. But as of right now, news stories you read online – including all the articles here on American Genius – have been written by real human beings.

Until recently, even the most intelligent computers couldn’t be trained to recreate the complex rules and stylistic subtleties of language. AI-generated text would often wander off topic or mix up the syntax and lack context or analysis.

However, a non-profit called OpenAI says they have developed a text generator that can simulate human writing with remarkable accuracy.

The program is called GPT2. When fed any amount of text, from a few words to a page, it can complete the story, whether it be a news story or a fictional one.

You already know about video deepfakes, but these “deepfakes for text” stay on subject and match the style of the original text. For example, when fed the first line of George Orwell’s 1984, GPT2 created a science-fiction story set in a futuristic China.
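For a sense of what that fill-in-the-rest workflow looks like in code, here’s a minimal sketch using the publicly available "gpt2" model in the Hugging Face transformers library (a later, smaller public release, not the withheld version described here); the prompt is Orwell’s opening line, and the model choice and generation settings are purely illustrative.

```python
# A minimal sketch of the prompt-completion behavior described above, using
# the publicly available "gpt2" model from the Hugging Face transformers
# library (assumed installed along with PyTorch). Not OpenAI's withheld setup.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled continuation reproducible
generator = pipeline("text-generation", model="gpt2")

# The opening line of Orwell's 1984, as in the example above.
prompt = "It was a bright cold day in April, and the clocks were striking thirteen."

result = generator(prompt, max_length=80, num_return_sequences=1)
print(result[0]["generated_text"])
```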

This improved text generator is much better at simulating human writing because it has learned from a dataset that is “15 times bigger and broader” than its predecessor’s, according to OpenAI research director Dario Amodei.

Usually researchers are eager to share their creations with the world – but in this case, the Elon Musk-backed organization has, at least for the time being, withheld GPT2 from the public out of fear of what criminals and other malicious users might do with it.

Jack Clark, OpenAI’s head of policy, says that the organization needs more time to experiment with GPT2’s capabilities so that they can anticipate malicious uses. “If you can’t anticipate all the abilities of a model, you have to prod it to see what it can do,” he says. “There are many more people than us who are better at thinking what it can do maliciously.”

Some potential malicious uses of GPT2 could include generating fake positive reviews for products (or fake negative reviews of competitors’ products); generating spam messages; writing fake news stories that would be indistinguishable from real news stories; and spreading conspiracy theories.

Furthermore, because GPT2 learns from the internet, it wouldn’t be hard to program GPT2 to produce hate speech and other offensive messages.

As a writer, I can’t think of very many good reasons to use an AI story generator that doesn’t put me out of a job. So I appreciate that the researchers at OpenAI are taking time to fully think through the implications before making this Pandora’s box of technology available to the general public.

Says Clark, “We are trying to develop more rigorous thinking here. We’re trying to build the road as we travel across it.”

Ellen Vessels, a Staff Writer at The American Genius, is respected for their wide range of work, with a focus on generational marketing and business trends. Ellen is also a performance artist when not writing, and has a passion for sustainability, social justice, and the arts.



Tech News

Career consultants help job seekers beat AI robot interviews

(TECH NEWS) With the growth of artificial intelligence conducting the job screening, consultants in South Korea have come up with an innovative response.

When it comes to resume screenings, women and people of color are regularly passed over, even if they have the exact same resume as a man. In order to give everyone a fair try, we need a system that’s less biased. With the cool, calculating depictions of artificial intelligence in modern media, it’s tempting to say that AI could help us solve our resume screening woes. After all, nothing says unbiased like a machine…right?

Wrong.

I mean, if you need an example of what can go wrong with AI, look no further than Microsoft’s Tay, which went from making banal conversation to spouting racist and misogynistic nonsense in less than 24 hours. Not exactly the ideal.

Sure, Tay was learning from Twitter, which is a hotbed of cruelty and conflict, but the thing is, professional software isn’t always much better. Google’s software has been caught offering biased translations (assuming, for example, that if you wrote “engineer” you were referring to a man) and Amazon has been called out for using job screening software that was biased against women.

And that’s just part of what could go wrong with AI scanning your resume. After all, even if gender and race are accounted for (which, again, all bets are off), you’d better bet there are other things – like specific phrases – that these machines are on the lookout for.

So, how do you stand out when it’s a machine, not a human, judging your work? Consultants in South Korea have a solution: teach people how to work around the bots. This includes anything from resume work to learning what facial expressions are ideal for filmed interviews.

It helps that many companies use the same software to do screening. Instead of trying to prepare to impress a wide variety of humans, if someone knew the right tricks for handling an AI system, they could potentially put in much less work. For example, maybe one human interviewer likes big smiles, while the other is put off by them. The AI system, on the other hand, won’t waver from company to company.

Granted, this solution isn’t foolproof either. Not every business uses the same program to scan applicants, for instance. Plus, this tech is still in its relative infancy – a program could easily be in flux as requirements are tweaked. Who knows, maybe someday we’ll actually have application software that can more accurately serve as a judge of applicant quality.

In the meantime, there’s always AI interview classes.


Tech News

Google Chrome: The anti-cookie monster in 2022

(TECH NEWS) If you are tired of third-party cookies trying to grab every bit of data about you, Google has heard and responded with new updates.

Google has announced the end of third-party tracking cookies on its Chrome browser within the next two years, in an effort to grant users better security and privacy. Because advertising and social media networks have long relied on third-party cookies, this move will undoubtedly have ramifications for the digital ad sector.

Google’s announcement was made in a blog post by Chrome engineering director, Justin Schuh. This follows Google’s Privacy Sandbox launch back in August, an initiative meant to brainstorm ideas concerning behavioral advertising online without using third-party cookies.

Chrome is currently the most popular browser, accounting for 64% of the global browser market. Additionally, Google has staked out its role as the world’s largest online ad company, with countless partners and intermediaries. This change, and any others made by Google, will affect this army of partnerships.

This comes in the wake of the rising popularity of anti-tracking features across web browsers. Safari and Firefox have both launched updates (Intelligent Tracking Prevention for Safari and Enhanced Tracking Prevention for Firefox), and Microsoft recently released the new Edge browser, which automatically utilizes tracking prevention. These changes have rocked share prices for ad tech companies since last year.

The two-year grace period before Chrome goes cookie-less has given the ad and media industries time to absorb the shock and develop plans of action. The transition period has softened the blow, demonstrating Google’s willingness to keep positive working relations with its ad partners. Although users can look forward to better privacy protection and choice over how their data is used, Google has made it clear it’s trying to keep balance in the web ecosystem, which will likely mean compromises for everyone involved.

Chrome’s SameSite cookie update will launch in February, requiring publishers and ad tech vendors to label third-party cookies that can be used elsewhere on the web.
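In practical terms, that labeling happens in the Set-Cookie header: a cookie meant to be read in a cross-site (third-party) context has to be marked SameSite=None and Secure, or Chrome will treat it as SameSite=Lax and withhold it. Here’s a minimal sketch of what that looks like server-side, using Flask purely as an example framework; the route and the cookie name and value are hypothetical.

```python
# Minimal sketch of the SameSite labeling requirement: cookies intended for
# third-party (cross-site) use must be explicitly marked SameSite=None and
# Secure, or Chrome will default them to SameSite=Lax and drop them from
# cross-site requests. Flask is used here only as an illustrative framework;
# the route and the cookie name/value are hypothetical.
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/tracking-pixel")
def tracking_pixel():
    resp = make_response("ok")
    resp.set_cookie(
        "ad_id",           # hypothetical cookie name
        "abc123",          # hypothetical value
        samesite="None",   # explicit label required for cross-site use
        secure=True,       # SameSite=None cookies must also be Secure (HTTPS)
    )
    return resp

if __name__ == "__main__":
    app.run()
```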


Tech News

Computer vision helps AI create a recipe from just a photo

(TECH NEWS) It’s so hard to find the right recipe for that beautiful meal you saw on TV or online. Well, computer vision helps AI recreate it from a picture!

Ever seen a photo of a delicious-looking meal on Instagram and wondered how the heck to make it? Now there’s an AI for that, kind of.

Facebook’s AI research lab has been developing a system that can analyze a photo of food and then create a recipe. So, is Facebook trying to take on all the food bloggers of the world now too?

Well, not exactly. The AI is part of an ongoing effort to teach AI how to see and then understand the visual world. Food is just a fun and challenging training exercise. They have been referring to it as “inverse cooking.”

According to Facebook, “The ‘inverse cooking’ system uses computer vision, technology that extracts information from digital images and videos to give computers a high level of understanding of the visual world.”

The concept of computer vision isn’t new. Computer vision is the guiding force behind mobile apps that can identify something just by snapping a picture. If you’ve ever taken a photo of your credit card on an app instead of typing out all the numbers, then you’ve seen computer vision in action.

Facebook researchers insist that this is no ordinary computer vision because their system uses two networks to arrive at the solution, thereby increasing accuracy. According to Facebook research scientist Michal Drozdzal, the system works by dividing the problem into two parts. A neural network works to identify the ingredients that are visible in the image, while a second network pulls a recipe from a kind of database.

These two networks have been the key to the researchers’ success with more complicated dishes where you can’t necessarily see every ingredient. Of course, the tech team hasn’t set foot in the kitchen yet, so the jury is still out.
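To make the two-stage idea concrete, here’s a purely hypothetical sketch of the pipeline shape described above: one stand-in function plays the role of the ingredient-recognition network, and a second step matches those ingredients against a tiny made-up recipe table. None of this is Facebook’s actual code, models, or data.

```python
# Hypothetical sketch of a two-stage "inverse cooking" pipeline: stage one
# predicts ingredients from a photo, stage two matches them to a recipe.
# The functions and the tiny recipe table below are illustrative stand-ins,
# not Facebook's models or data.
from typing import Dict, List

# Made-up recipe "database": recipe name -> ingredients it needs.
RECIPE_DB: Dict[str, List[str]] = {
    "margherita pizza": ["dough", "tomato", "mozzarella", "basil"],
    "pancakes": ["flour", "egg", "milk", "butter"],
}


def predict_ingredients(image_path: str) -> List[str]:
    """Stand-in for the first network: image -> visible ingredients."""
    # A real system would run a trained vision model here; we return a fixed
    # guess so the sketch stays runnable without any model weights.
    return ["dough", "tomato", "mozzarella", "basil"]


def retrieve_recipe(ingredients: List[str]) -> str:
    """Stand-in for the second stage: ingredients -> best-matching recipe."""
    overlap = {
        name: len(set(ingredients) & set(needed))
        for name, needed in RECIPE_DB.items()
    }
    return max(overlap, key=overlap.get)


if __name__ == "__main__":
    ingredients = predict_ingredients("food_photo.jpg")
    print("Predicted ingredients:", ingredients)
    print("Suggested recipe:", retrieve_recipe(ingredients))
```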

This sounds neat and all, but why should you care if the computer is learning how to cook?

Research projects like this one carry AI technology a long way. As the AI gets smarter and expands its limits, researchers are able to conceptualize new ways to put the technology to use in our everyday lives. For now, AI like this is saving you the trouble of typing out your entire credit card number, but someday it could analyze images on a much grander scale.
