Recently, I wrote about the world’s first AI-powered dating app assistant. The article didn’t fully address the moral (or potentially legal) dead space surrounding a bot that writes people’s romantic messages for them, nor did it at any point condemn that practice. I’d like to set the record straight: I believe we should be concerned about this kind of application of artificial intelligence.
When it comes to consent in content creation, AI ethics are a dumpster fire of epic proportions. At best, using AI for this purpose leaves the “creators” in a moral gray area, perhaps not intending to violate boundaries but ultimately having no real control over what makes it into their generated draft.
At worst, entire relationships could one day be predicated on a single misleading first impression – which is exactly the direction in which AI dating assistance is pushing us.
I’ll confess a complete lack of first-hand experience with dating apps (it seems I’m not missing out on much). But while I’ve personally avoided the process entirely, many of my friends have gone through it, emerging with arguably hilarious (but potentially traumatizing) stories of dates and virtual conversations gone wrong.
Their general takeaways were simple: 1) that online or app-based dating is a real gamble, and 2) that almost every person with whom they conversed was less than truthful about something at some point.
It’s a huge shock, of course, that people looking to find love (etc.) through an app would exaggerate their accomplishments or interests. As with all problems, though, AI complicates the situation – and, I would argue, renders the whole exchange nonconsensual.
If one person finds another attractive based on their personality, and that personality is AI-curated, then the first person’s interest was manipulated without their consent – and purposefully so. That isn’t the kind of thing we should be celebrating in any arena, with or without AI in the mix.
Clearly I’m biased toward skepticism when it comes to online interactions, so I asked my advanced English classes for their input on the idea of an AI-powered dating assistant. Because they’re certified Zoomers, they’ll have the distinct privilege of becoming adults in a world that has – for better or worse – made up its mind about AI content generation.
Their responses had all the predictable hilarity of sleep-deprived geniuses (“What if the AI has more rizz than you and you can’t compete with it when you actually show up to the date?”) but they were also unanimous: This concept is a problem.
Even the students who recognized the potential upsides of an AI assistant swiftly came to the conclusion that it would ultimately be to the detriment of socialization (and possibly humanity as a whole).
“Imagine if you were using the AI and you found out the person you were talking to was using it, too,” one student pondered. “At that point, isn’t it just two robots on a date?”
And that, while clearly not the first problem I’d think of, is the thrust of the larger issue. Dating apps were never going to be the peak frontier for socialization, and the argument could certainly be made that AI-generated pick-up lines and conversation helpers could actually stabilize some of the more volatile social interactions.
Unfortunately, that’s not a good-enough reason to resort to a paradigm in which computers talk to each other while we watch from the sidelines.
Some would go so far as to argue that using AI to make yourself sound smoother than you actually are could be considered a form of catfishing.
Now, according to MVSK Law, catfishing in and of itself is not a crime unless it strays into romance scam territory. Since the FTC’s definition of a romance scam requires that money be requested or extorted from a potential victim, the use of AI by itself isn’t enough to constitute a crime.
However, the other side of that coin is suitably terrifying: that AI assistance will only serve to make existing catfishing attempts and romance scams (the bulk of which are already too effective) even more convincing.
I don’t have any particularly deep insights past “this is a bad idea and we should probably contain it,” which, aside from maybe earning me a cameo spot in a Terminator fanfic, doesn’t do much for the problem at hand.
All I can hope is that someone with regulatory power somewhere along the line will recognize how dangerous this notion is as quickly and effectively as my 16- and 17-year-old students did – if not for our sake, then for theirs.