

AI: Attempts to understand and regulate it fail because we’re human

(TECH NEWS) NYC created a task force to understand and regulate its growing use of AI, but it failed because we are humans, with our odd motives and slower thinking


Artificial intelligence (AI) is racist. Okay, fine, not all AI are racist (#NotAllAI), but they do have a tendency towards biases. Just look at Microsoft’s chatbot, Tay, which managed to start spewing hateful commentary in under 24 hours. Now, a chatbot isn’t necessarily a big deal, but for better or worse, AI is being used in increasingly crucial ways.

A biased AI could, for instance, change how you are policed, whether or not you get a job, or your medical treatment.

In an attempt to understand and regulate these systems, New York City created a task force called the Automated Decision Systems (ADS) Task Force. There’s just one problem: this group has been a total disaster.

When it was formed in May of 2018, the outlook was hopeful. ADS was composed of city council members and industry professionals tasked with inspecting the city’s use of AI and, hopefully, arriving at meaningful calls to action. Unfortunately, the task force was plagued with trouble from the beginning.

For example, although ADS was created to examine the automated systems New York City is using, it wasn’t granted access to them. Albert Cahn, one of the members of the task force, said that although the administration had all the data it needed, ADS was not given any information.

Sounds frustrating, right? One potential reason for this massive hiccup is that the administration was relying on third-party systems sold by companies looking to make a profit. In that light, it makes sense that a company would want to avoid scrutiny: it could easily lead to greater regulation or, at the very least, a broader understanding of how its systems really work.

Another overarching problem was the struggle to define artificial intelligence at all. Some automated systems do rely on complex machine learning, but others are far simpler. What counts as an automated system worth examining? The jury is still out.

In the big scheme of things, AI tech is still in its infancy. We’re just starting to grasp what it’s currently capable of, much less what it could be capable of accomplishing. To add to the complications, technology is evolving at breakneck speed. What we want to regulate now might be entirely obsolete in ten years.

Then again, it might not. Machines might be fast to change, but their human creators? Less so. Left unchecked, it’s debatable whether creators will work to remove those biases at all.

NYC’s task force might have failed – its concluding write-up was far from ideal – but the creation of this group signals a growing demand for a closer look at the technology making life-changing decisions. The question now? Who is best suited for this task?

Brittany is a Staff Writer for The American Genius with a Master's in Media Studies under her belt. When she's not writing or analyzing the educational potential of video games, she's probably baking.


Career consultants help job seekers beat AI robot interviews

(TECH NEWS) With the growth of artificial intelligence conducting job screenings, consultants in South Korea have come up with an innovative response.


When it comes to resume screenings, women and people of color are regularly passed over, even if they have the exact same resume as a man. In order to give everyone a fair try, we need a system that’s less biased. With the cool, calculating depictions of artificial intelligence in modern media, it’s tempting to say that AI could help us solve our resume screening woes. After all, nothing says unbiased like a machine…right?

Wrong.

I mean, if you need an example of what can go wrong with AI, look no further than Microsoft’s Tay, which went from making banal conversation to spouting racist and misogynistic nonsense in less than 24 hours. Not exactly the ideal.

Sure, Tay was learning from Twitter, which is a hotbed of cruelty and conflict, but the thing is, professional software isn’t always much better. Google’s software has been caught offering biased translations (assuming, for example, that if you wrote “engineer” you were referring to a man), and Amazon has been called out for using job screening software that was biased against women.

And that’s just part of what could go wrong with AI scanning your resume. After all, even if gender and race are accounted for (and, again, all bets are off there), you can bet there are other things – like specific phrases – that these machines are on the lookout for.
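To make the phrase-matching idea concrete, here is a minimal, hypothetical sketch of keyword-style resume scoring in Python. Real applicant-tracking systems are proprietary, so the phrase list, scoring rule, and function name below are invented purely for illustration.

```python
# Hypothetical illustration only: real screening software is proprietary,
# and this phrase list and scoring rule are made up for the example.
REQUIRED_PHRASES = {"project management", "python", "stakeholder communication"}

def screen_resume(text: str) -> float:
    """Return a crude 0-1 score based on how many target phrases appear."""
    text = text.lower()
    hits = sum(phrase in text for phrase in REQUIRED_PHRASES)
    return hits / len(REQUIRED_PHRASES)

sample = "Led Python tooling, project management, and stakeholder communication."
print(screen_resume(sample))  # 1.0 -- every target phrase was found
```

If screening really does hinge on literal phrase matches like this, it’s easy to see why coaching applicants on wording could move the needle.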

So, how do you stand out when it’s a machine, not a human, judging your work? Consultants in South Korea have a solution: teach people how to work around the bots. This includes anything from resume work to learning what facial expressions are ideal for filmed interviews.

It helps that many companies use the same software to do screening. Instead of trying to prepare to impress a wide variety of humans, if someone knew the right tricks for handling an AI system, they could potentially put in much less work. For example, maybe one human interviewer likes big smiles, while the other is put off by them. The AI system, on the other hand, won’t waver from company to company.

Granted, this solution isn’t foolproof either. Not every business uses the same program to scan applicants, for instance. Plus, this tech is still in its relative infancy – a program could easily be in flux as requirements are tweaked. Who knows, maybe someday we’ll actually have application software that can more accurately serve as a judge of applicant quality.

In the meantime, there’s always AI interview classes.


Google Chrome: The anti-cookie monster in 2022

(TECH NEWS) If you are tired of third-party cookies trying to grab every bit of data about you, Google has heard and responded with its new updates.


Google has announced the end of third-party tracking cookies on its Chrome browser within the next two years in an effort to grant users better means of security and privacy. With third-party cookies having been relied upon by advertising and social media networks, this move will undoubtedly have ramifications on the digital ad sector.

Google’s announcement was made in a blog post by Chrome engineering director, Justin Schuh. This follows Google’s Privacy Sandbox launch back in August, an initiative meant to brainstorm ideas concerning behavioral advertising online without using third-party cookies.

Chrome is currently the most popular browser, accounting for 64% of the global browser market. Additionally, Google has staked out its role as the world’s largest online ad company, with countless partners and intermediaries. This change, and any others Google makes, will affect that army of partnerships.

This comes in the wake of rising popularity for anti-tracking features on web browsers across the board. Safari and Firefox have both launched updates (Intelligent Tracking Prevention for Safari and Enhanced Tracking Prevention for Firefox), and Microsoft recently released the new Edge browser, which enables tracking prevention by default. These changes have rocked share prices for ad tech companies since last year.

The two-year grace period before Chrome goes cookie-less has given the ad and media industries time to absorb the shock and develop plans of action. The transition has softened the blow and demonstrated Google’s willingness to keep positive working relations with its ad partners. Although users can look forward to better privacy protection and more choice over how their data is used, Google has made it clear it’s trying to keep balance in the web ecosystem, which will likely mean compromises for everyone involved.

Chrome’s SameSite cookie update will launch in February, requiring publishers and ad tech vendors to label third-party cookies that can be used elsewhere on the web.
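For those curious what that labeling involves, the update centers on the SameSite cookie attribute: a cookie meant to be read in cross-site (third-party) contexts has to be explicitly marked SameSite=None and paired with the Secure flag over HTTPS, while unlabeled cookies are treated as first-party only. Here is a minimal sketch using Python’s standard http.cookies module (Python 3.8+); the cookie name and value are placeholders.

```python
from http import cookies

# Minimal sketch of the SameSite labeling Chrome's update requires.
# "tracking_id" and its value are placeholders for illustration.
jar = cookies.SimpleCookie()
jar["tracking_id"] = "abc123"
jar["tracking_id"]["samesite"] = "None"  # explicit opt-in to cross-site use
jar["tracking_id"]["secure"] = True      # SameSite=None must be paired with Secure

# Prints a Set-Cookie header containing both SameSite=None and Secure, e.g.
#   Set-Cookie: tracking_id=abc123; Secure; SameSite=None
print(jar.output())
```

Cookies left without a SameSite label default to first-party behavior, which is exactly the squeeze on third-party tracking described above.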


Computer vision helps AI create a recipe from just a photo

(TECH NEWS) It’s so hard to find the right recipe for that beautiful meal you saw on TV or online. Well, computer vision helps AI recreate it from a picture!


Ever seen a photo of a delicious-looking meal on Instagram and wondered how the heck to make it? Now there’s an AI for that, kind of.

Facebook’s AI research lab has been developing a system that can analyze a photo of food and then create a recipe. So, is Facebook trying to take on all the food bloggers of the world now too?

Well, not exactly. The AI is part of an ongoing effort to teach machines how to see and then understand the visual world. Food is just a fun and challenging training exercise. The researchers have been referring to it as “inverse cooking.”

According to Facebook, “The ‘inverse cooking’ system uses computer vision, technology that extracts information from digital images and videos to give computers a high level of understanding of the visual world.”

The concept of computer vision isn’t new. Computer vision is the guiding force behind mobile apps that can identify something just by snapping a picture. If you’ve ever taken a photo of your credit card on an app instead of typing out all the numbers, then you’ve seen computer vision in action.

Facebook researchers insist that this is no ordinary computer vision because their system uses two networks to arrive at the solution, thereby increasing accuracy. According to Facebook research scientist Michal Drozdzal, the system works by dividing the problem into two parts. A neural network works to identify the ingredients that are visible in the image, while the second network pulls a recipe from a kind of database.
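As a rough mental model of that two-part split (not Facebook’s actual code, which isn’t published in this article), the pipeline looks something like the sketch below, with trivial stubs standing in for the trained networks and a tiny dictionary standing in for the recipe database.

```python
# Hypothetical sketch of the two-stage "inverse cooking" idea described above.
# The trained neural networks are replaced by trivial stubs for illustration.

def predict_ingredients(image_pixels):
    # Stage 1 stand-in: a real model would run a vision network over the pixels
    # and output ingredient labels; this stub just returns a fixed guess.
    return {"flour", "egg", "sugar"}

def retrieve_recipe(ingredients):
    # Stage 2 stand-in: map predicted ingredients to recipe steps, loosely
    # mimicking "pulling a recipe from a kind of database."
    known_recipes = {
        frozenset({"flour", "egg", "sugar"}): [
            "Whisk the flour, eggs, and sugar together.",
            "Bake until golden.",
        ],
    }
    return known_recipes.get(frozenset(ingredients), ["No matching recipe found."])

photo = [[0.0] * 8 for _ in range(8)]  # placeholder stand-in for a food photo
for step in retrieve_recipe(predict_ingredients(photo)):
    print(step)
```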

These two networks have been the key to the researchers’ success with more complicated dishes where you can’t necessarily see every ingredient. Of course, the tech team hasn’t set foot in the kitchen yet, so the jury is still out.

This sounds neat and all, but why should you care if the computer is learning how to cook?

Research projects like this one carry AI technology a long way. As the AI gets smarter and expands its limits, researchers are able to conceptualize new ways to put the technology to use in our everyday lives. For now, AI like this is saving you the trouble of typing out your entire credit card number, but someday it could analyze images on a much grander scale.
