The racism-bots are at it again.
Galactica, a large-language-model AI writing tool from everyone’s favorite layer-offer, Meta, purported to allow the legitimate creation of scientifically sound literature and access to a brand-new world of generous generation. The service went offline after a spectacular 48 hours of the best and brightest tech types poking easy, gaping holes in its noble claims.
How gaping? Well, how does generating papers claiming that ‘gay faces can be picked out from straight faces’, relaying that science finds ‘black people have no language’, and spouting complete, factless gibberish about our most sacred dinosaurs sound?
Though many hole pokers did feed the service bad-faith prompts, the fact remains that ‘sciency-sounding’ articles with such horrific premises as ‘The Benefits of Eating Glass’ could be generated by a system advertising itself as having the most lauded and lofty results.
The fact that the bot could spin these unethical and untruthful claims isn’t news to me; the higher-ups’ expectation that it could ever do otherwise is. Standing up to techies and real live scientists willing to beta-test with no safeword in sight?
It just wasn’t going to happen.
Humans aren’t immune to lies, ignorance, biases, and the will to do harm…and that means AI modeled on our writing patterns won’t be either. It’s why these bots cannot and should never be trusted to write anything with any level of authority, as it seems Meta promised.
Does that sentiment smack of Luddism?
Well, I am a bit biased against newer tech. But it’s not without reason…mostly.
Not only am I the rarest of all millennials, the one who prefers to call in for appointments and food, but I’m also someone who’s worked with AI writing before. The service itself shall remain nameless, but after several hours spent correcting basic, egregious, and downright dangerous mistakes in its articles, I was thoroughly unimpressed.
An article generated in five minutes with no input gave me results just as cogent as articles that took a half-hour of prompts and inputs – all of which suffered from repetition, incorrectly stitched-together half-data, and long-cleared misconceptions from faulty sources. The rage rush I got from seeing it tell indoor gardeners to daisy-chain their high-wattage appliances near drip pans got my WPM up to personal bests, but at what cost?
It’s not a shock to see this repeated at the household name level, mind you.
After all, even after you take into account their limitations as non-humans, the bots are drawing from resources that can’t be effectively scrubbed. Echoes of echoes roll on in online caves, so nothing ever really disappears on the internet. How many nutritional resources still cite spinach as an especially iron-rich food despite the truth? How many of your friends still think The Beatles wrote “Telephone” before Lady Gaga did?
Okay, with copyright claims some content can be scrubbed, but that’s a separate screed.
It’s kind of like loosing a child who reads at a higher level into a library to do research without telling them how indispensable publication dates and edition numbers are in picking which tomes to trawl. That’s possibly something else I might have experience with. Maybe.
What’s burning my biscuits about it is engineers and tech lovers alike defending the same AI content flaws we’ve been seeing for years as purely user error and malice.
Chief AI Scientist at Meta, Yann LeCun, tweeted: “Galactica demo is offline for now. It’s no longer possible to have some fun by casually misusing it. Happy?”
No, actually. But hey, I’m hardly surprised.