
Tech News

Google is showing their second face by backing robot reporters

(TECH NEWS) Google spent years pushing people to blog, to share, to index, to feed it information, and has now switched it up, undermining itself.


RIP human reporters

So! Turns out I’m doomed. Evidently, Google is funding a machine to write news articles. And after all the good press I gave the robot apocalypse, too.

As my future is naught but despair and devastation, I suppose there’s nothing to talk about but history. Ever hear of Ned Ludd?

Old Ned Ludd

Even if you haven’t, you probably have. “Ludd” as in “Luddite,” which is to say, per Dr. Wik I. Pedia, “one opposed to industrialisation, automation, computerisation or new technologies in general.”

That’s really not fair.

First of all, Ned Ludd shouldn’t be remembered in history at all, because he isn’t. He wasn’t real. Ned Ludd, which may or may not be rural English for Edward Ludlam, was a Robin Hood-type fictional figure. They even both hung out in Nottingham, albeit some centuries apart.

Much like brave Sir Robin (extra geek cred for catching the reference), old Ned was both an empowerment fantasy and a cautionary tale. Robin Hood stories warned about the depredations of power-hungry nobles: here is what honest fellows who would farm and hunt may expect when rich men come for their land, and here is what may be done about it.

Likewise Ludd, who was the hero and horror story of 18th century industrialization.

Poking the bear

The story goes that Ned Ludd was an abused, developmentally disabled teenager. An “idiot boy,” in the charming idiom of the time. He worked for a weaver, and after either being mocked by children because he was due to lose his job, or failing to keep up with the pace that technology set for his job and being flogged for idleness by his masters – yes, masters, and yes, they could flog him; the 18th century sucked – he quite reasonably got a big stick and bashed said technology to scrap.

That brings us to second of all.

Ludd was right.

I mean, obviously he was right in the short term. If my options are “break something” or “starve,” give me 5 minutes, I got a wrench in the car.

But the real Luddites were right too.

The actual, historical Luddites weren’t kneejerk anti-technologists. They were skilled artisans, mostly weavers, which is to say, they required tech to do their jobs. And yet, they masked up and stomped out a bunch of machines, and when they got busted, remembering the tale, they’d say “Ned Ludd did it.”

Those workers weren’t afraid of technology.

They were afraid of what was being done with technology by people who didn’t understand the work they were doing.

Masters of their craft, they knew important aspects couldn’t be automated, and that those aspects, those fundamentally human qualities, could vanish in a generation if not systematically tended. People forget. Skills die.

The cost of robots

Now the dread machines are coming for me and like the Luddites, my first concern isn’t my job as such. I am hubristically hopeful no machine made by man can match my curling chestnut locks or inexhaustible supply of geek wisdom. Besides, if an evil robot does take my job, know what I used to do? Tech support. Thinking that’s gonna come up in the AI Age.

What I’m afraid of is what the Luddites were afraid of.

I’m afraid of what we lose.

The Luddites, those master artisans, understood the value of work. Work demands craft, experience and inspiration. Those things cannot be automated, and trying is not only silly but dangerous. They can only be acquired by doing the work, with the help of people who already know it.

Happily, neither that kind of learning nor that kind of work is in short supply, at least not yet.

Both went digital with the rest of humanity.

Ironically, the best example in the entire world is Google, whose core business model is acquiring, assessing and presenting the results of that work. On the whole, Google doesn’t create things, not even knowledge. It just aggregates, sorts and presents it better than anyone else. That’s a remarkable achievement, and it shouldn’t be undersold.

It also doesn’t change the fact that Google doesn’t do the work. Other people do.

The Ludd Question

Google’s business is connection, linking questions with answers, needs with solutions, people with people. Connection is the best part of the digital revolution. The worst, by far, is hacking the human parts out of vital systems and pretending they’re OK. Companies hack out employees. News outlets hack out fact checkers and failsafes.

It has become possible to have things that still work (barely) after you pull the humans out of them.

As it stands (and fair dues, it’s early days on this thing), that’s exactly what you get with the Google news robot. It produces nothing. To quote the article, it “turns news data into palatable content.” It’s built on the universal, deadly dangerous assumption of the digital age: somebody else has done the work. I just have to find it.

The trouble is that good journalism is about doing the work, and good work requires humans. Taking humans out of the equation means losing things that cannot be replaced.

At the end of the day, that’s the Ludd Question. How much can you afford to lose?

#RobotReporter

Matt Salter is a writer and former fundraising and communications officer for nonprofit organizations, including Volunteers of America and PICO National Network. He’s excited to put his knowledge of fundraising, marketing, and all things digital to work for your reading enjoyment. When not writing about himself in the third person, Matt enjoys horror movies and tabletop gaming, and can usually be found somewhere in the DFW Metroplex with WiFi and a good all-day breakfast.

Tech News

Career consultants help job seekers beat AI robot interviews

(TECH NEWS) With the growth of artificial intelligence conducting the job screening, consultants in South Korea have come up with an innovative response.


When it comes to resume screenings, women and people of color are regularly passed over, even if they have the exact same resume as a man. In order to give everyone a fair shot, we need a system that’s less biased. With the cool, calculating depictions of artificial intelligence in modern media, it’s tempting to say that AI could help us solve our resume screening woes. After all, nothing says unbiased like a machine…right?

Wrong.

I mean, if you need an example of what can go wrong with AI, look no further than Microsoft’s Tay, which went from making banal conversation to spouting racist and misogynistic nonsense in less than 24 hours. Not exactly the ideal.

Sure, Tay was learning from Twitter, which is a hotbed of cruelty and conflict, but the thing is, professional software isn’t always much better. Google’s software has been caught offering biased translations (assuming, for example, that if you wrote “engineer” you were referring to a man), and Amazon has been called out for using job screening software that was biased against women.

And that’s just part of what could go wrong with AI scanning your resume. After all, even if gender and race are accounted for (which, again, all bets are off), you can bet there are other things – like specific phrases – that these machines are on the lookout for.
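To make that concrete, here’s a minimal sketch of the kind of phrase matching a screening system might do. The phrase list, weights, and scoring are hypothetical placeholders (real applicant-tracking software is proprietary and far more complex), but it shows why applicants end up coaching their resumes toward specific wording:

    # Hypothetical keyword screen: the phrases and weights are made up
    # for illustration, not taken from any real screening product.
    TARGET_PHRASES = {
        "project management": 2.0,
        "cross-functional": 1.5,
        "stakeholder": 1.0,
    }

    def score_resume(text: str) -> float:
        """Sum weighted occurrences of target phrases in a resume."""
        text = text.lower()
        return sum(w * text.count(p) for p, w in TARGET_PHRASES.items())

    resume = "Led cross-functional teams and owned project management for key stakeholders."
    print(score_resume(resume))  # a higher score is more likely to pass the filter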

So, how do you stand out when it’s a machine, not a human, judging your work? Consultants in South Korea have a solution: teach people how to work around the bots. This includes anything from resume work to learning what facial expressions are ideal for filmed interviews.

It helps that many companies use the same software to do screening. Instead of trying to prepare to impress a wide variety of humans, if someone knew the right tricks for handling an AI system, they could potentially put in much less work. For example, maybe one human interviewer likes big smiles, while the other is put off by them. The AI system, on the other hand, won’t waver from company to company.

Granted, this solution isn’t foolproof either. Not every business uses the same program to scan applicants, for instance. Plus, this tech is still in its relative infancy – a program could easily be in flux as requirements are tweaked. Who knows, maybe someday we’ll actually have application software that can more accurately serve as a judge of applicant quality.

In the meantime, there’s always AI interview classes.


Tech News

Google Chrome: The anti-cookie monster in 2022

(TECH NEWS) If you are tired of third-party cookies trying to grab every bit of data about you, Google has heard and responded with its new updates.


Google has announced the end of third-party tracking cookies on its Chrome browser within the next two years in an effort to grant users better means of security and privacy. With third-party cookies having been relied upon by advertising and social media networks, this move will undoubtedly have ramifications on the digital ad sector.

Google’s announcement was made in a blog post by Chrome engineering director, Justin Schuh. This follows Google’s Privacy Sandbox launch back in August, an initiative meant to brainstorm ideas concerning behavioral advertising online without using third-party cookies.

Chrome is currently the most popular browser, accounting for 64% of the global browser market. Additionally, Google has staked out its role as the world’s largest online ad company with countless partners and intermediaries. This change and any others made by Google will affect this army of partnerships.

This comes in the wake of rising popularity for anti-tracking features on web browsers across the board. Safari and Firefox have both launched updates (Intelligent Tracking Prevention for Safari and the Enhanced Tracking Prevention for Firefox) with Microsoft having recently released the new Edge browser which automatically utilizes tracking prevention. These changes have rocked share prices for ad tech companies since last year.

The two-year grace period before Chrome goes cookie-less has given the ad and media industries time to absorb the shock and develop plans of action. The transition period softens the blow, demonstrating Google’s willingness to keep positive working relations with ad partners. Although users can look forward to better privacy protection and choice over how their data is used, Google has made it clear it’s trying to keep balance in the web ecosystem, which will likely mean compromises for everyone involved.

Chrome’s SameSite cookie update will launch in February, requiring publishers and ad tech vendors to label third-party cookies that can be used elsewhere on the web.
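For a sense of what that labeling looks like in practice: under the SameSite rules, a cookie that should still be sent in a cross-site (third-party) context has to be explicitly marked SameSite=None and Secure, or Chrome treats it as same-site only. Here’s a minimal sketch, assuming a Flask endpoint and a made-up cookie name:

    # Minimal sketch (assuming Flask); "ad_session" and its value are placeholders.
    from flask import Flask, make_response

    app = Flask(__name__)

    @app.route("/pixel")
    def pixel():
        resp = make_response("ok")
        # Cookies used cross-site must now be explicitly labeled SameSite=None
        # and sent over HTTPS; unlabeled cookies default to SameSite=Lax and
        # are withheld from third-party requests.
        resp.set_cookie("ad_session", "abc123", samesite="None", secure=True)
        return resp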


Tech News

Computer vision helps AI create a recipe from just a photo

(TECH NEWS) It’s so hard to find the right recipe for that beautiful meal you saw on TV or online. Well, computer vision helps AI recreate it from a picture!


Ever looked at a photo of a delicious-looking meal on Instagram and wondered how the heck to make it? Now there’s an AI for that, kind of.

Facebook’s AI research lab has been developing a system that can analyze a photo of food and then create a recipe. So, is Facebook trying to take on all the food bloggers of the world now too?

Well, not exactly. The AI is part of an ongoing effort to teach AI how to see and then understand the visual world. Food is just a fun and challenging training exercise. They have been referring to it as “inverse cooking.”

According to Facebook, “The ‘inverse cooking’ system uses computer vision, technology that extracts information from digital images and videos to give computers a high level of understanding of the visual world.”

The concept of computer vision isn’t new. Computer vision is the guiding force behind mobile apps that can identify something just by snapping a picture. If you’ve ever taken a photo of your credit card on an app instead of typing out all the numbers, then you’ve seen computer vision in action.

Facebook researchers insist that this is no ordinary computer vision because their system uses two networks to arrive at the solution, therefore increasing accuracy. According to Facebook research scientist Michal Drozdzal, the system works by dividing the problem into two parts. A neural network works to identify ingredients that are visible in the image, while the second network pulls a recipe from a kind of database.

These two networks have been the key to researchers’ success with more complicated dishes where you can’t necessarily see every ingredient. Of course, the tech team hasn’t set foot in the kitchen yet, so the jury is still out.
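For the curious, here’s a minimal PyTorch sketch of that two-stage idea: one small network guesses which ingredients appear in a photo, and a second step matches that ingredient vector against a toy recipe “database.” Every size, ingredient, and recipe below is a made-up placeholder, not Facebook’s actual system:

    # Toy "inverse cooking" pipeline; all numbers and names are illustrative.
    import torch
    import torch.nn as nn

    NUM_INGREDIENTS = 8  # tiny assumed ingredient vocabulary

    class IngredientDetector(nn.Module):
        """Network 1: image -> per-ingredient probabilities (multi-label)."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.classifier = nn.Linear(32, NUM_INGREDIENTS)

        def forward(self, image):
            return torch.sigmoid(self.classifier(self.features(image)))

    # Stage 2: a toy recipe "database" keyed by ingredient vectors.
    RECIPES = {
        "tomato soup": torch.tensor([1., 1., 0., 0., 1., 0., 0., 0.]),
        "omelette":    torch.tensor([0., 1., 1., 1., 0., 0., 0., 0.]),
        "fruit salad": torch.tensor([0., 0., 0., 0., 0., 1., 1., 1.]),
    }

    def retrieve_recipe(predicted: torch.Tensor) -> str:
        """Return the recipe whose ingredient vector best matches the prediction."""
        scores = {name: torch.dot(predicted, vec).item() for name, vec in RECIPES.items()}
        return max(scores, key=scores.get)

    photo = torch.randn(1, 3, 224, 224)                 # stand-in for a food photo
    predicted = IngredientDetector()(photo).squeeze(0)  # stage 1: detect ingredients
    print(retrieve_recipe(predicted))                   # stage 2: look up a recipe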

This sounds neat and all, but why should you care if the computer is learning how to cook?

Research projects like this one carry AI technology a long way. As the AI gets smarter and expands its limits, researchers are able to conceptualize new ways to put the technology to use in our everyday lives. For now, AI like this is saving you the trouble of typing out your entire credit card number, but someday it could analyze images on a much grander scale.
