There’s a certain melancholy to the end of the holiday season, isn’t there? Whatever your winter festival of choice, it’s easy to be a bit down when the fam heads home and your vacation days, if you have vacation days, dwindle to their end. But as you emerge from holiday coma and trudge to work in the winter lull, take heart! At least it’s not your job to convince a computer that sand dunes aren’t porn.
Because it could be. That’s a thing. We live in the most ridiculous possible future.
Specifically, it’s a British thing. In their ongoing – and laudable! – campaign against child abuse, the Metropolitan Police of London are testing an algorithm that searches seized data for inappropriate sexual content.
Well, that’s what it’s supposed to do. At the moment, it’s shouting at sand. See, sand comes in curving lines and a variety of (literal!) earth tones. Various other activities are also characterized by curving lines and a variety of earth tones. I trust I don’t need to spell it out.
That’s the trouble with algorithms: they do need me to spell it out. As we’ve written before, AI does not do context, and context is the most important human thing. When all you have to work with is “sort of brown and curvy and all over the place,” it becomes possible to mistake a pitiless desert landscape for naked humans engaged in naked human activities. People don’t do that. I mean, I hope. That sounds scratchy and embarrassing.
That’s why it’s currently someone’s job to explain to a robot that sand is not sex. Fair play to the Metropolitan Police: they’re doing that part correctly. Their AI solution isn’t scheduled to turn its pitiless steel gaze on British sex for two to three years. Programs are supposed to have hilarious fails in the testing phase. That’s why there’s a testing phase.
The private sector has a habit of leapfrogging that and letting the fail happen right out in public. In just the last six months, premature AI implementation has had Google accusing an innocent person of the Las Vegas shooting and Facebook promoting explicit anti-Semitism.
To state the obvious, the stakes are even higher when the cops are involved. Neither Google nor Facebook has the legal right to shoot you. Yet. And alongside the hilarious fail, the Metropolitan Police are discussing a decidedly non-hilarious one: moving potentially incriminating information to public cloud storage rather than keeping it in a dedicated data center. In case you’re time traveling from 2012, putting private information on a publicly accessible system is a really bad idea. Really.
In short, law enforcement’s experiment with RoboCop seems to have run smack into the modestly named Salter’s Law: for every implementation of AI in a people-facing role, you will have to hire a minimum of one real person just to handle the fallout when it screws up.
Matt Salter is a writer and former fundraising and communications officer for nonprofit organizations, including Volunteers of America and PICO National Network. He’s excited to put his knowledge of fundraising, marketing, and all things digital to work for your reading enjoyment. When not writing about himself in the third person, Matt enjoys horror movies and tabletop gaming, and can usually be found somewhere in the DFW Metroplex with WiFi and a good all-day breakfast.
