
Tech News

Facial recognition thinks you might be a toaster, really

(TECH NEWS) Facial recognition is still a long way from being perfect. Ceci n'est pas une toaster. Really. Repeat it with me: I am not a toaster.


Using facial recognition seems pretty seamless; just think of your iPhone. Yet a human face has actually been confused with a toaster, according to a facial recognition technology expert.

If a computer, which is thought to be highly reliable, can confuse a human face for a toaster, what might that mean for facial recognition accuracy when seeking out suspects in crimes? Possibly that it's not so reliable.

“Obviously, the technology has immense value in promoting societal interests such as efficiency and security but it also represents a threat to some of our individual interests, particularly privacy,” said Nessa Lynch, associate professor of law at Victoria University of Wellington, New Zealand. Lynch and other experts are part of a research project due to be completed in mid-2020. The researchers presented some of their findings during a recent panel at the university.

Some of the very first images used as training and test data were those of convicted felons in Florida. They had abused meth and had great cheekbones. But that presented problems when using facial recognition on actual real folks without a meth habit.

Those cheekbones are very different from the average person's, which can happen when you eat food. Data from such a source was not useful when training a system to recognize normal people, said Rachel Dixon, Privacy and Data Protection Deputy Commissioner at the Office of the Victorian Information Commissioner in Australia.

Companies that sell these products often claim they are highly reliable, but Dixon said that is often because of the unvarying environments where they are used, and because the systems are tuned for those specific environments.

“…Picking you out walking randomly down the street can be quite challenging. There’s a whole bunch of environmental factors there that go to essentially reducing the confidence level,” Dixon said in a story published on Ideasroom. “None of this is absolute. There is no one-to-one match. And by perturbing an image even a small amount you can make the machine-learning system think the person is a toaster. I’m not joking.”
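What Dixon is describing is known in machine-learning research as an adversarial example: a tiny, deliberate perturbation of an image's pixels that flips a classifier's output. Here is a minimal sketch of the idea in Python, using PyTorch and the classic Fast Gradient Sign Method (the toy model and the "person"/"toaster" labels are purely illustrative stand-ins, not the systems Dixon evaluated):

```python
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, image: torch.Tensor,
                 true_label: int, epsilon: float = 0.01) -> torch.Tensor:
    """Nudge every pixel slightly in the direction that increases the loss."""
    image = image.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), torch.tensor([true_label]))
    loss.backward()
    # The perturbation is tiny per pixel, yet it can flip the prediction
    # from "person" to something absurd like "toaster".
    return (image + epsilon * image.grad.sign()).detach().clamp(0, 1)

# Toy classifier over 64x64 RGB images; class 0 = "person",
# class 1 = "toaster" (labels are illustrative).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))
photo = torch.rand(1, 3, 64, 64)
adversarial = fgsm_perturb(model, photo, true_label=0)
print(model(photo).argmax(1), model(adversarial).argmax(1))
```

With a larger epsilon the change becomes visible, but the unsettling finding in the research literature is that perturbations too small for a human to notice can be enough to swing a model's prediction.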

If a computer identifies a face as, for example, a person of interest in a crime, it is very hard to change that perception, even if it is wrong, because humans have a hard time believing a machine can make a mistake, especially one that has declared a correct match, Dixon explained.

In the United States, a conservative estimate is that roughly a quarter of all 18,000 law enforcement agencies have access to facial recognition systems, particularly for use in investigations. Yet Georgetown Law professor Clare Garvie said there are no laws – at the state or federal level – governing its use.

Garvie, a senior associate at the Center on Privacy & Technology at Georgetown, said, “As a result, this technology has been implemented largely without transparency to the public, without rules around auditing or public reporting, without rules around who can be subject to a search. As a result, it is not just suspects of a criminal investigation that are the subject of searches. In many jurisdictions, witnesses, victims or anybody associated with a criminal investigation can also be the subject of a search.”

Because there is little reporting and auditing of the use of the technology, it's unclear if agencies are checking to determine if it's being misused or if it is actually a helpful and successful tool, Garvie said. Are law enforcement officials “catching the bad guys,” or is the technology a waste of money? Garvie said she suspects it is in some jurisdictions.

Meanwhile, it may come as no surprise to some that those often caught in the crosshairs are from lower socio-economic or marginalized populations.

In one instance, the person police arrested was ranked 319th most likely to be a match by the algorithm. The police also failed to provide that ranking evidence to the defense lawyers.

In the United Kingdom, the technology has been used extensively, and with mixed results, by law enforcement and businesses to search for people on watch lists, according to Dr. Joe Purshouse of the School of Law at the University of East Anglia.

“The human rights implications for privacy, freedom of assembly – those are chilling,” Purshouse said, adding that the marginalized are caught in the middle: “Suspects of crime, people of lower socio-economic status who are forced to use public space and rely more heavily on public space than people who have economic advantages, perhaps.”

Mary Ann Lopez earned her MA in print journalism from the University of Colorado and has worked in print and digital media. After taking a break to give back as a Teach for America corps member and teaching science for a few years, she is back with her first love: writing. When she's not writing stories, reading five books at once, or watching The Great British Bakeoff, she is walking her dog Sadie and hanging with her cats, Bella, Bubba, and Kiki. She is one cat short of full cat lady status and plans to keep it that way.

Tech News

Google Chrome: The anti-cookie monster in 2022

(TECH NEWS) If you are tired of third-party cookies trying to grab every bit of data about you, Google has heard you and responded with new updates.


Google has announced the end of third-party tracking cookies in its Chrome browser within the next two years, in an effort to give users better security and privacy. With advertising and social media networks having long relied on third-party cookies, this move will undoubtedly have ramifications for the digital ad sector.

Google's announcement was made in a blog post by Chrome engineering director Justin Schuh. It follows Google's Privacy Sandbox launch back in August, an initiative to develop approaches to online behavioral advertising that don't rely on third-party cookies.

Chrome is currently the most popular browser, comprising 64% of the global browser market. Additionally, Google has staked out its role as the world's largest online ad company, with countless partners and intermediaries. This change, and any others Google makes, will affect that army of partnerships.

This comes in the wake of rising popularity for anti-tracking features across web browsers. Safari and Firefox have both launched updates (Intelligent Tracking Prevention and Enhanced Tracking Prevention, respectively), and Microsoft recently released the new Edge browser, which utilizes tracking prevention automatically. These changes have rocked share prices for ad tech companies since last year.

The two-year grace period before Chrome goes cookie-less gives the ad and media industries time to absorb the shock and develop plans of action. The transition softens the blow, demonstrating Google's willingness to keep positive working relations with ad partners. Although users can look forward to better privacy protection and choice over how their data is used, Google has made it clear it's trying to keep balance in the web ecosystem, which will likely mean compromises for everyone involved.

Chrome's SameSite cookie update will launch in February, requiring publishers and ad tech vendors to explicitly label third-party cookies that are meant to be used elsewhere on the web.
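For publishers and vendors, that labeling is a cookie attribute rather than a new API. As a rough sketch (using Flask purely for illustration; the route and cookie names here are hypothetical), a cookie intended for cross-site use must now be explicitly marked SameSite=None and sent over HTTPS:

```python
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/ad-pixel")
def ad_pixel():
    resp = make_response("ok")
    # Cookies intended for cross-site (third-party) use must be
    # explicitly labeled SameSite=None, and Secure is mandatory
    # whenever SameSite=None is set.
    resp.set_cookie(
        "tracking_id",    # hypothetical cookie name
        "abc123",
        samesite="None",  # declares legitimate cross-site use
        secure=True,      # only sent over HTTPS
    )
    return resp

if __name__ == "__main__":
    app.run()  # in production this must be served over HTTPS for the cookie to be accepted
```

Cookies that carry no SameSite attribute are treated as SameSite=Lax by the updated Chrome, meaning they are no longer sent on cross-site requests by default.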


Tech News

Computer vision helps AI create a recipe from just a photo

(TECH NEWS) It's so hard to find the right recipe for that beautiful meal you saw on TV or online. Well, computer vision helps AI recreate it from a picture!


Ever seen a photo of a delicious-looking meal on Instagram and wondered how the heck to make it? Now there's an AI for that, kind of.

Facebook’s AI research lab has been developing a system that can analyze a photo of food and then create a recipe. So, is Facebook trying to take on all the food bloggers of the world now too?

Well, not exactly. The AI is part of an ongoing effort to teach AI how to see and then understand the visual world. Food is just a fun and challenging training exercise. The researchers have been referring to it as “inverse cooking.”

According to Facebook, “The ‘inverse cooking’ system uses computer vision, technology that extracts information from digital images and videos to give computers a high level of understanding of the visual world.”

The concept of computer vision isn’t new. Computer vision is the guiding force behind mobile apps that can identify something just by snapping a picture. If you’ve ever taken a photo of your credit card on an app instead of typing out all the numbers, then you’ve seen computer vision in action.

Facebook researchers insist that this is no ordinary computer vision, because their system uses two networks to arrive at the solution, thereby increasing accuracy. According to Facebook research scientist Michal Drozdzal, the system works by dividing the problem into two parts: a neural network identifies the ingredients visible in the image, while a second network pulls a recipe from a kind of database.
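As a very rough sketch of that two-stage split (entirely illustrative; Facebook has not published its system as an API, and every name below is hypothetical), the first network does multi-label ingredient detection and the second stage matches those ingredients against stored recipes:

```python
import torch
import torch.nn as nn

class IngredientNet(nn.Module):
    """First network: predicts which ingredients are visible in a food photo."""
    def __init__(self, num_ingredients: int = 1000):
        super().__init__()
        self.backbone = nn.Sequential(  # stand-in for a real CNN image encoder
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, num_ingredients)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # Multi-label output: each ingredient is independently present/absent.
        return torch.sigmoid(self.head(self.backbone(image)))

def retrieve_recipe(ingredient_probs: torch.Tensor, recipe_db: dict,
                    threshold: float = 0.5) -> str:
    """Second stage: match predicted ingredients against a recipe database."""
    predicted = {i for i, p in enumerate(ingredient_probs.tolist()) if p > threshold}
    # Score each recipe by ingredient overlap and return the best match.
    return max(recipe_db.items(), key=lambda kv: len(predicted & kv[1]))[0]

# Usage with toy data: one 224x224 RGB photo and a tiny "database"
# mapping recipe names to sets of ingredient ids.
photo = torch.rand(1, 3, 224, 224)
db = {"omelette": {0, 3, 7}, "pancakes": {1, 2, 5}}
probs = IngredientNet()(photo)[0]
print(retrieve_recipe(probs, db))
```

The real system is far more sophisticated, but the division of labor is the point: recognizing what is in the image and deciding what dish it adds up to are treated as separate problems.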

These two networks have been the key to the researchers' success with more complicated dishes where you can't necessarily see every ingredient. Of course, the tech team hasn't set foot in the kitchen yet, so the jury is still out.

This sounds neat and all, but why should you care if the computer is learning how to cook?

Research projects like this one carry AI technology a long way. As the AI gets smarter and expands its limits, researchers are able to conceptualize new ways to put the technology to use in our everyday lives. For now, AI like this is saving you the trouble of typing out your entire credit card number, but someday it could analyze images on a much grander scale.


Tech News

Xiaomi accidentally sent security video from one home to another

(TECH NEWS) Xiaomi finds out that while modern smart and security devices have helped us all, there are still plenty of flaws and openings for security breaches.


The reason for setting up security cameras around your home is so the photos can get streamed to your neighbor’s device, right?

Okay, that's obviously not why most (if any) of us get security cameras, but unfortunately, that leaked-footage scenario isn't hypothetical. Xiaomi cameras have been streaming images to the wrong Google Home devices. This was first reported on Reddit, with user Dio-V posting a video of it happening on their device.

Xiaomi is a Chinese electronics company that has only recently started to gain traction in the U.S. market. While its smartphones remain available only abroad, two of Xiaomi's security cameras are sold through mainstream retailers like Wal-Mart and Amazon for as low as $40. Affordable prices have made the products even more popular, and Xiaomi's presence has grown both in the U.S. and abroad.

To be fair, when the leaked images surfaced, both Google and Xiaomi responded quickly. Google cut off access to Xiaomi devices until the problem was resolved, to ensure it wouldn't happen again. Meanwhile, Xiaomi identified and fixed the issue, which was caused by a cache update.

But the incident still raises questions about smart security devices in the first place.

Any smart device is going to be inherently vulnerable due to the internet connection. Whether it’s hackers, governments, or the tech companies themselves, there are plenty of people who can fairly easily gain access to the very things that are supposed to keep your home secure.

Of course, unlike those risks, which involve people actively trying to access your data, this most recent incident with Xiaomi and Google shows that your intimate details might even be shared with strangers who aren't trying to break into your system at all. Unfortunately, bugs are inevitable when it comes to keeping technology up to date, so it's fairly likely something like this could happen again.

That’s right, your child’s room might be streamed to a total stranger by complete accident.

Granted, Xiaomi’s integration mistake only affected a fraction of their users and many risks are likely to decrease as time goes on. Still, as it stands now, your smart security devices might provide a facade of safety, but there are plenty of risks involved.
