Fake News may have been all the rage in 2017, but 2018 has the potential to be all about the Fake Photo.
While artificial intelligence (AI) has been able to procedurally generate false images of people for years, those images have never looked as convincing as they do now. Computer chip manufacturer Nvidia has been hard at work pushing this research along.
“We now have a model that can generate faces that are more diverse and in some ways more realistic than what we could program by hand,” Durk Kingma, a researcher at OpenAI, told The New York Times, citing Nvidia’s work in Finland.
The system takes celebrity photographs and synthesizes them into new, high-resolution faces. The process takes powerful computers approximately 18 days and millions of tiny revisions before the AI judges its final synthesis believable. One driver behind these fake people is video gaming and other new media.
“We think we can push this further, generating not just photos but 3-D images that can be used in computer games and films,” Nvidia researcher Jaakko Lehtinen said.
Increasing believability helps sell the product, whether it be animated films or gaming. For many years, computer-generated images of humans have tended to fall into what is called the “uncanny valley,” a term coined in the 1970s by famed roboticist Masahiro Mori to describe a human’s gut-level revulsion to things that look almost human but retain some unnaturalness in appearance.
However, despite advances in this technology, many researchers are concerned about its ethical implications, especially in the age of “fake news.” AI policy is indeed a matter of public interest, and Tim Hwang, director of the Ethics and Governance of Artificial Intelligence Fund, is deeply concerned about the impact of these images.
“These techniques will rise to the point where it becomes very difficult to discern truth from falsity,” Hwang said. “You might believe that it accelerates problems we already have.”
Considering that last year researchers at the University of Washington developed technology that can fabricate the voices of former President Obama and President Trump and sync them with false video, this can be alarming.
Only time will tell whether these newfound AI powers are used for the betterment of society, or for society’s detriment.
Snap a business card pic, Microsoft app finds ’em on LinkedIn
(TECH NEWS) Microsoft Pix is teaming with LinkedIn in a neat way that will benefit networking, especially if you have any lazy bones in your body.
Have you ever been watching some sort of action-adventure movie where there’s a command center with all sorts of unbelievable technology that kind of blows your mind? Well, every day we come closer and closer to living within that command center.
You may think that I’m talkin’ crazy, but check this out – there is a new technology that can scan a business card, and find the business card’s owner on LinkedIn. (Can I get a “say what????!”)
This app is courtesy of Microsoft and goes by the name Pix (it’s not new, but this function is).
The way it works is simple: Bill Jones hands you his business card, you fire up the Pix app (currently only on the iPhone. Sorry, Droids), you snap a picture of the card and the app takes the details (phone number, company, etc.) and finds Bill on LinkedIn. Bingo.
It will also automatically use that information to create a new profile for Bill Jones within your phone’s contacts. After you scan the business card through Pix, Microsoft will ask if you want to take action.
At this point, Pix will recognize and capture phone numbers, email addresses, and URLs. If your phone is logged into LinkedIn, the apps will work together to find Bill’s profile. Part of me wants to think that this is kind of creepy but a larger part of me thinks that it’s really cool.
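Microsoft hasn’t published Pix’s internals, but the capture step it describes — pulling phone numbers, email addresses, and URLs out of scanned text — can be sketched with simple pattern matching. Everything below (the `ocr_text` sample, the `extract_contact_fields` helper, the regexes) is a hypothetical illustration, not Pix’s actual code:

```python
import re

# Hypothetical OCR output from a scanned business card
ocr_text = """Bill Jones
Acme Widgets, Inc.
bill.jones@acme.example
+1 (555) 123-4567
https://acme.example"""

def extract_contact_fields(text):
    """Pull phone numbers, email addresses, and URLs out of raw OCR text."""
    return {
        "emails": re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", text),
        "phones": re.findall(r"\+?\d[\d\s().-]{7,}\d", text),
        "urls": re.findall(r"https?://\S+", text),
    }

fields = extract_contact_fields(ocr_text)
print(fields["emails"])  # ['bill.jones@acme.example']
```

From here, an app could hand those fields to the phone’s contacts API or a LinkedIn search — the part Pix automates for you.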
According to Microsoft Research’s Principal Program Manager, Josh Weisberg, “Pix is powered by AI to streamline and enhance the experience of taking a picture with a series of intelligent actions: recognizing the subject of a photo, inferring users’ intent and capturing the best quality picture.”
“It’s the combination of both understanding and intelligently acting on a user’s intent that sets Pix apart. Today’s update works with LinkedIn to add yet another intelligent dimension to Pix’s capabilities.”
Pix itself originally launched in 2016 as an AI-powered camera app that automatically adjusts a photo’s exposure, focus, and color. This new LinkedIn integration is a time saver, and is beneficial for those who collect business cards like candy and forget to actually do something with them.
Walmart and the blockchain, sitting in a tree
(TECH NEWS) Say goodbye to #foodwaste with Walmart’s new smart package delivery proposal featuring everyone’s favorite pal, blockchain.
Following the trend of adding “smart” as a prefix to any word to make it futuristic, Walmart now proposes “smart packages.” The retail giant filed for a new patent to improve their shipping and package tracking process using blockchain.
Last week, the U.S. Patent and Trademark Office (USPTO) released the application, which was filed back in August 2017.
Officially, the application notes the smart package will have “a body portion having an inner volume” and “a door coupled to the body portion” that can be open or closed to restrict or allow access to the package contents.
In other words, they’ve patented a box with a door on it that also has lots of monitoring devices.
Various iterations lay claim to versions of said box that combine monitoring devices, modular adapters, autonomous delivery vehicles, and blockchain.
Monitoring devices would regulate location tracking, inner content removal, and environmental conditions of the package like temperature and humidity. This could help reduce loss of products sensitive to environmental changes, like fresh produce.
Modular adapters perform these actions as well, and also ensure the package has access to a power source and the delivery vehicle’s security system to prevent theft.
Blockchain comes into play in a delivery encryption system that monitors, authenticates, and registers packages at each step as they move through the supply chain.
The blockchain would be hashed with private key addresses of sellers, couriers, and buyers to track the chain of custody. Every step of the shipping process would be documented, providing greater accountability and easier record keeping.
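The patent application doesn’t spell out an implementation, but the chain-of-custody idea — each step’s record hashed together with the previous one, so no earlier step can be altered unnoticed — can be sketched minimally. The party names and event labels below are hypothetical placeholders:

```python
import hashlib
import json

def record_hash(record, prev_hash):
    """Hash a custody record together with the previous block's hash,
    so tampering with any earlier step invalidates every later link."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

# Hypothetical chain of custody: seller -> courier -> buyer
chain = []
prev = "0" * 64  # genesis value before the first record
for step in [
    {"holder": "seller", "event": "packed"},
    {"holder": "courier", "event": "in_transit"},
    {"holder": "buyer", "event": "delivered"},
]:
    prev = record_hash(step, prev)
    chain.append({"record": step, "hash": prev})

# Each hash depends on all prior steps; changing one record breaks the chain.
print(chain[-1]["hash"][:12])
```

In the patent’s framing, the records would additionally be signed with the private keys of the seller, courier, and buyer, which this sketch omits.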
This isn’t Walmart’s first foray into the world of blockchain. Last year they teamed up with Nestle, Kroger, and other food companies in a partnership with IBM to improve food traceability with blockchain.
Walmart also took part in a similar food-tracking program in China with JD.com last year.
And let’s not forget Walmart’s May 2017 USPTO application to use blockchain tech for package delivery via unmanned drones. That earlier drone application also proposed tracking packages with blockchain and monitoring product conditions during delivery, an idea the more recent filing builds on.
In their latest application, Walmart notes, “online customers many times seek to purchase items that may require a controlled environment and further seek to have greater security in the shipping packaging that the items are shipped in.”
Implementing blockchain and smart package monitoring as part of the shipping process could greatly reduce product loss and improve shipment tracking.
Experts warn of actual AI risks – we’re about to live in a sci fi movie
(TECH NEWS) A new report on AI indicates that the sci fi dystopias we’ve been dreaming up are actually possible. Within a few short years. Welp.
Long before artificial intelligence (AI) was even a real thing, science fiction novels and films have warned us about the potentially catastrophic dangers of giving machines too much power.
Now that AI actually exists, and in fact, is fairly widespread, it may be time to consider some of the potential drawbacks and dangers of the technology, before we find ourselves in a nightmarish dystopia the likes of which we’ve only begun to imagine.
Experts from the industry as well as academia have done exactly that, in a recently released 100-page report, “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, Mitigation.”
The report was written by 26 experts over the course of a two-day workshop held in the UK last month. The authors broke down the potential negative uses of artificial intelligence into three categories: physical, digital, and political.
The digital category covers the ways hackers and other criminals can use these advancements to hack, phish, and steal information more quickly and easily. AI can be used to create fake emails and websites for stealing information, or to scan software for potential vulnerabilities far faster than a human can. AI systems can even be developed specifically to fool other AI systems.
Physical uses included AI-enhanced weapons to automate military and/or terrorist attacks. Commercial drones can be fitted with artificial intelligence programs, and automated vehicles can be hacked for use as weapons. The report also warns of remote attacks, since AI weapons can be controlled from afar, and, most alarmingly, “robot swarms” – which are, horrifyingly, exactly what they sound like.
Lastly, the report warned that artificial intelligence could be used by governments and other special interest entities to influence politics and generate propaganda.
AI systems are getting creepily good at generating faked images and videos – a skill that would make it all too easy to create propaganda from scratch. Furthermore, AI can be used to find the most important and vulnerable targets for such propaganda – a potential practice the report calls “personalized persuasion.” The technology can also be used to squash dissenting opinions by scanning the internet and removing them.
The overall message of the report is that developments in this technology are “dual use” — meaning that AI can be created that is either helpful to humans, or harmful, depending on the intentions of the people programming it.
That means that for every positive advancement in AI, there could be a villain developing a malicious use of the technology. Experts are already working on solutions, but they won’t know exactly what problems they’ll have to combat until those problems appear.
The report concludes that all of these evil-minded uses for these technologies could easily be achieved within the next five years. Buckle up.