Tech News

Domino's and Ford are teaming up to give new meaning to pizza delivery

(BUSINESS NEWS) Domino's has teamed up with Ford to figure out how to deliver pizzas sans drivers.


DREAMS DO COME TRUE

How many times have I sent whiny Snapchats begging my friends to magically make pizza appear in my room? Trick question: too many to count. Now, thanks to my half-baked, The Secret-powered wishful thinking, I present to you a dream team: Ford + Domino's Pizza. You're welcome.

Ford's self-driving project is teaming up with Domino's Pizza to pilot an autonomous pizza delivery program in Michigan. Yes, you'll still have to get off the couch. You will also probably have to put on pants. Sorry, I'm only a wizard in D&D. I haven't worked out how to get a Spy Kids-style microwave for instant pizza delivery yet.

AUTONOMOUS PIZZA FOR DUMMIES

Ford is providing a self-driving Fusion from its trial fleet for the initial delivery trials. The vehicle is white and clearly marked "self-driving" and "autonomous" in giant black letters. It navigates using cameras and lidar, a laser-based ranging sensor that works much like radar, housed in a unit on the roof of the car.

Images of the road and surrounding areas are collected in real time and compared with detailed digital maps, so the car knows its route and destination. The pilot run was scheduled to take place on Monday, but the car's electronics can't function in heavy rain.
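
For the curious, here's a rough sketch of that map-matching idea. It is purely illustrative, not Ford's actual software: the landmark names, coordinates, and simple averaging below are invented to show how sensor detections can be compared against a stored digital map to pin down where the car is.

```python
# A rough, hypothetical sketch of map matching: estimate the car's position by
# comparing landmarks detected by its sensors against a pre-built digital map.
# The landmark names, coordinates, and simple averaging are all invented here.
import math

# Hypothetical pre-mapped landmarks: name -> (x, y) position in meters.
DIGITAL_MAP = {
    "stop_sign_12": (104.2, 55.8),
    "lamp_post_7": (98.6, 60.1),
    "mailbox_3": (110.0, 58.4),
}

def estimate_position(observations):
    """observations: {landmark_name: (range_m, bearing_rad)} as seen from the car.
    Returns an averaged (x, y) estimate of where the car must be."""
    xs, ys = [], []
    for name, (rng, bearing) in observations.items():
        if name not in DIGITAL_MAP:
            continue  # unmatched detection; a real system handles this far more carefully
        lx, ly = DIGITAL_MAP[name]
        # If the landmark sits rng meters away at angle bearing, the car sits
        # that same offset back from the landmark's mapped position.
        xs.append(lx - rng * math.cos(bearing))
        ys.append(ly - rng * math.sin(bearing))
    if not xs:
        raise ValueError("no detections matched the map")
    return sum(xs) / len(xs), sum(ys) / len(ys)

# Two detections are enough for a crude fix in this toy example.
print(estimate_position({"stop_sign_12": (5.0, 0.0), "lamp_post_7": (4.2, 1.57)}))
```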

PLAYING WITH FOOD

"It's going to be a real learning experience," said Dennis Maloney, Domino's chief digital officer. He offered some more reassuring words to the public, stating, "No one really knows what's going to happen when customers walk out to the car. They're faced with a car. There's no human interaction. What happens if they approach the car from the wrong direction? Will people mind coming out of their house? We want to understand all that."

There won’t be a complete lack of human interaction, though.

Each car in the program will have a "safety driver" in the driver's seat, a Ford engineer on the passenger side, and a Domino's employee in the back to monitor customer responses. In other words, a car full of people will be staring down confused customers during this trial period, which is essentially a sociology experiment wrapped in a tech test run.

TAKE A LITTLE PIZZA MY HEART

Customers will receive a text alert when the pizza is close, and another when the delivery arrives. The recipient will then be asked to trust a red arrow on the rear passenger side door reading, “start here.” After entering the last four digits of their phone number onto a touch screen, the window opens, revealing a secret insulated pizza compartment.
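
In code terms, that unlock step boils down to a simple check. The sketch below is hypothetical: Domino's and Ford haven't published their software, and the function names and order format are invented, but it shows the shape of the touch-screen verification described above.

```python
# A hypothetical sketch of the touch-screen check described above. The function
# names and order format are invented; Domino's and Ford haven't published code.

def verify_pickup(entered_digits, customer_phone):
    """Compare what the customer types against the last four digits of the
    phone number attached to the order."""
    return entered_digits == customer_phone[-4:]

def handle_touchscreen_entry(entered_digits, order):
    if verify_pickup(entered_digits, order["phone"]):
        return "Opening window. Enjoy your pizza."   # would trigger the window motor
    return "Those digits don't match this order. Please try again."

# Example: the order was placed from 734-555-0187 and the customer types 0187.
order = {"phone": "7345550187", "items": ["large pepperoni"]}
print(handle_touchscreen_entry("0187", order))
```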

So yes, you do have to leave the house. But you don't have to tip the car. Before you cry "robots stealing jobs," Domino's senior VP of e-commerce development Kelly Garcia emphasized, "we will have drivers for a long time. This is not about reducing labor costs." Instead, he explained, autonomous cars could be used to fill gaps when there's a shortage of drivers or a surge in orders.

NEXT STEPS

“We think there’s a very good business,” stated Sherif Marakby, Ford’s VP of Autonomous Vehicles and Electrification. Although Ford has lagged behind its competitors in the autonomous vehicle game, Marakby said using completely driverless cars for deliveries likely “will take off in 2021.”

By then, Ford plans to be manufacturing completely driverless cars with no steering wheel or pedals. In the meantime, randomly selected customers in Ann Arbor, Michigan, will be the only ones experiencing this self-driving pizza experiment. Here's hoping the project expands so we can all experience the joy of pizza and of being stared at by a car full of strangers you're pretending aren't there.

#DrivingPIZZA

Lindsay is an editor for The American Genius with a Communication Studies degree and English minor from Southwestern University. Lindsay is interested in social interactions across and through various media, particularly television, and will gladly hyper-analyze cartoons and comics with anyone, cats included.

Tech News

AI technology is using facial recognition to hire the “right” people

(TECH NEWS) Artificial intelligence (AI) technology has made its way into the hiring process and while the intentions are good, I vote we proceed with extreme caution.


UK-based consumer goods giant Unilever is just one of several UK companies that have begun using AI technology to sort through initial job candidates. The goal of this technology is to increase the number of candidates a company can interview at the initial stages of the hiring process and to improve response time for those candidates.

The AI, developed by American company HireVue, analyzes a candidate's language, tone, and facial expressions during a video interview. HireVue insists that its product is different from traditional facial recognition technologies because it analyzes far more data points.

HireVue's chief technology officer, Loren Larsen, says, "We get about 25,000 data points from 15 minutes of video per candidate. The text, the audio and the video come together to give us a very clear analysis and rich data set of how someone is responding, the emotions and cognitions they go through."
This data is then used to rank candidates on a scale of 1 to 100 against a database of traits identified in previously successful candidates.
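
To make that ranking idea concrete, here is a toy sketch of scoring a candidate against an average profile built from previously successful hires. HireVue hasn't published its model, so the feature vectors, the similarity measure, and the 0-100 scaling below are assumptions chosen only to illustrate the general approach.

```python
# A toy illustration, not HireVue's actual model: score a candidate's extracted
# features against the average profile of previously successful hires, scaled
# to 0-100. The feature vectors and similarity measure are assumptions.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def score_candidate(candidate_features, successful_profiles):
    """candidate_features: numbers extracted from the video (word choice, tone,
    facial-expression measurements, ...). successful_profiles: the same kind of
    vectors for past hires judged successful."""
    n = len(candidate_features)
    baseline = [sum(p[i] for p in successful_profiles) / len(successful_profiles)
                for i in range(n)]
    return round(100 * max(cosine_similarity(candidate_features, baseline), 0.0))

past_hires = [[0.8, 0.6, 0.7], [0.9, 0.5, 0.6]]        # made-up feature vectors
print(score_candidate([0.7, 0.55, 0.65], past_hires))   # prints a score near 100
```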

There are two main flaws in this system. First, unless this AI technology is pulling from a huge, diverse data pool, it could be discriminating against people without anyone even being aware of it. Human bias is not as easy to remove from the equation as AI proponents would have you believe.

As an example, how does this AI handle people who are disabled, or whose facial expressions read differently from those of the general population, such as people with Down syndrome or those who have survived traumatic facial injuries?

Second, seeking to hire someone who possesses the same qualities as the person who was previously successful in a role is shortsighted. There are many ways to accomplish the same task with above-average results. Companies that adopt this low-risk mentality could be missing out on great opportunities in the long term. You will never know what actually works best if you don't try.

The big question here is whether or not AI technology is ready to influence the job market on this scale.


Tech News

The ‘move fast and break things’ trend is finally over

(TECH NEWS) Time is running out for this decade — and for a popular Big Tech phrase responsible for a lot of collateral damage. What’s next?


Time is running out for the decade. With fewer than 20 days left, we find ourselves reflecting on the journeys of different economic sectors in the United States. And no industry has had a more tumultuous time of it than Big Tech.

A lot has changed in ten years. For starters, Americans have become increasingly disillusioned with Silicon Valley. The Pew Research Center found that only 50 percent of Americans believe technology firms have a positive effect on the country. That statistic is not too bad on its own, but it's down 21 percentage points from just four years ago. Gallup found in 2019 that 48 percent of Americans also want more regulation of Big Tech. And The New York Times called the 2010s "the decade Big Tech lost its way."

Maybe that's why the bigwigs at these tech firms have been quietly ditching a concept that was their golden rule in the early part of the decade: Move Fast and Break Things.

This concept is a modern take on the adage "you can't make an omelet without breaking a few eggs." For most of these firms, any innovation justified the collateral damage left in its wake. And this scrappy "build it now and worry about it later" philosophy was a favorite not just of Facebook and Twitter, but of many venture capital firms too.

But not anymore. Outlets from Forbes to HBR are saying this doesn’t work for Big Tech in the 2020s. Here are some reasons why it’s over.

Stability

The Move Fast and Break Things mantra encouraged devs to push their code changes live and let the chips fall where they may. But bugs pile up. Enter technical debt.

“Technical debt happens every time you do things that might get you closer to your goal now but create problems that you’ll have to fix later,” said The Quantified VC in an article on Medium. “As you move fast and break things, you will certainly accumulate technical debt.”

If enough technical debt piles up, any new line of code could be the thing that topples a firm like a house of cards. And now that consumers have built tech into their daily routines, interruptions in service are extremely bad news for everyone.

As Mark Zuckerberg himself put it: "When you build something that you don't have to fix 10 times, you can move forward on top of what you've built."

Trust

To win back some of the trust that has ebbed away from Big Tech over the years, firms can't just stick with the Move Fast and Break Things status quo.

“The public will continue to grow weary of perceived abuses by tech companies, and will favor businesses that address economic, social, and environmental problems,” said Hemant Taneja in his article for Harvard Business Review. “Minimum viable products must be replaced by minimum virtuous products that … build in guards against potential harms.”

It's not about chasing the bottom line at the cost of the consumer. Losing trust will hurt any company if left unchecked for long.

Innovation

There's a cap on advancement in our current technological state. It's called Moore's Law, the observation that transistor counts double roughly every two years. And we're rapidly approaching its physical limits.
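
For a sense of the math, here's a quick back-of-the-envelope calculation using the classic two-year doubling period. The starting chip size is made up; the point is how quickly the curve runs away from what physics can keep delivering.

```python
# Back-of-the-envelope math for Moore's Law in its classic form: transistor
# counts double roughly every two years. The starting chip is purely illustrative.

def transistors_after(years, start_count, doubling_period=2.0):
    return start_count * 2 ** (years / doubling_period)

# From a hypothetical 10-billion-transistor chip, ten more years of uninterrupted
# doubling implies a chip with about 320 billion transistors...
print(f"{transistors_after(10, 10e9):.3e}")   # 3.200e+11
# ...exactly the kind of exponential growth that atom-scale features and heat
# make hard to sustain.
```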

“When you understand the fundamental technology that underlies a product or service, you can move quickly, trying out nearly endless permutations until you arrive at an optimized solution. That’s often far more effective than a more planned, deliberate approach,” said Greg Satell in his article for HBR.

Soon enough, Big Tech will be in relatively new waters with quantum computing, biofeedback and AI. There’s no way to move as fast as these technology firms have in the past. And even if they could, should they?

Big Tech has experienced major growing pains since the dawn of the new millennium. And now that some firms are entering their 20s, there's a choice to be made: continue to grow up, or keep using an idea that has worn out its welcome with consumers and has no guarantee of working with future technologies.

Maybe that’s why Facebook’s motto is now “Move Fast with Stable Infrastructure.”


Tech News

Computer vision helps AI create a recipe from just a photo

(TECH NEWS) It's so hard to find the right recipe for that beautiful meal you saw on TV or online. Well, computer vision helps AI recreate it from a picture!


Ever seen a photo of a delicious-looking meal on Instagram and wondered how the heck to make it? Now there's an AI for that, kind of.

Facebook’s AI research lab has been developing a system that can analyze a photo of food and then create a recipe. So, is Facebook trying to take on all the food bloggers of the world now too?

Well, not exactly. The AI is part of an ongoing effort to teach AI how to see and then understand the visual world. Food is just a fun and challenging training exercise. The researchers have been referring to it as "inverse cooking."

According to Facebook, "The 'inverse cooking' system uses computer vision, technology that extracts information from digital images and videos to give computers a high level of understanding of the visual world."

The concept of computer vision isn’t new. Computer vision is the guiding force behind mobile apps that can identify something just by snapping a picture. If you’ve ever taken a photo of your credit card on an app instead of typing out all the numbers, then you’ve seen computer vision in action.
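
If you want to see the idea in miniature, the snippet below pulls digits off a photo using the open-source Tesseract OCR engine. Real card-scanning apps run their own proprietary pipelines, so treat this as a stand-in example of "photo in, text out," not how any particular app actually works.

```python
# A stand-in example of computer vision reading digits off a photo, using the
# open-source Tesseract OCR engine. Real card-scanning apps use their own
# proprietary pipelines; this just makes the "photo in, text out" idea concrete.
# Requires: pip install pillow pytesseract, plus the Tesseract binary installed.
from PIL import Image
import pytesseract

def read_card_number(image_path):
    """Extract the digit sequence visible in an image file."""
    text = pytesseract.image_to_string(
        Image.open(image_path),
        config="--psm 6 -c tessedit_char_whitelist=0123456789",
    )
    return "".join(ch for ch in text if ch.isdigit())

if __name__ == "__main__":
    print(read_card_number("card_photo.png"))   # hypothetical local image file
```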

Facebook researchers insist that this is no ordinary computer vision, because their system uses two networks to arrive at the solution, thereby increasing accuracy. According to Facebook research scientist Michal Drozdzal, the system works by dividing the problem into two parts: a neural network identifies the ingredients that are visible in the image, while a second network pulls a recipe from a kind of database.
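
Here's a plain-Python sketch of that two-stage shape: one stage guesses ingredients from the photo, the second turns those ingredients into a recipe. Facebook's real system uses trained neural networks end to end; the stand-in functions and the tiny recipe "database" below are invented purely to show how the pieces fit together.

```python
# A plain-Python sketch of the two-stage pipeline described above. Facebook's
# real system uses trained neural networks end to end; the stand-in functions
# and tiny recipe "database" below are invented to show how the pieces fit.

# Stage two's stand-in lookup table: ingredient sets -> recipe text.
RECIPES = {
    frozenset({"flour", "tomato", "cheese", "basil"}): "Margherita pizza: ...",
    frozenset({"egg", "flour", "milk"}): "Pancakes: ...",
}

def predict_ingredients(image_bytes):
    """Stand-in for the first (vision) network: photo -> set of ingredients."""
    # A real model would run a convolutional network over the pixels.
    return {"flour", "tomato", "cheese", "basil"}   # pretend output

def generate_recipe(ingredients):
    """Stand-in for the second network: ingredients -> recipe. Here we simply
    pick the stored recipe whose ingredient set overlaps the most."""
    best_match = max(RECIPES, key=lambda known: len(known & ingredients))
    return RECIPES[best_match]

def inverse_cook(image_bytes):
    return generate_recipe(predict_ingredients(image_bytes))

print(inverse_cook(b"...photo bytes..."))   # -> "Margherita pizza: ..."
```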

These two networks have been the key to the researchers' success with more complicated dishes where you can't necessarily see every ingredient. Of course, the tech team hasn't set foot in the kitchen yet, so the jury is still out.

This sounds neat and all, but why should you care if the computer is learning how to cook?

Research projects like this one carry AI technology a long way. As the AI gets smarter and expands its limits, researchers are able to conceptualize new ways to put the technology to use in our everyday lives. For now, AI like this is saving you the trouble of typing out your entire credit card number, but someday it could analyze images on a much grander scale.

