I’ve seen this movie
Yeah, I actually just read that one a few minutes ago. And man, I’m incredibly torn on this whole thing.
On one side - good, it makes that person happy. On the other side - being entirely reliant on a commercialized, sycophantic AI that could be used for manipulation, investing large amounts of money in it…
I’ve had LDRs before - one could argue it’s similar there, just “text on a screen”, or calls via digital audio. However I always knew there was a human behind those texts and the voice I heard was real, a person with a personality, experiences, strengths and flaws. The feelings they have are real, or at least one can hope they are assuming one isn’t with a manipulative POS (that’s not an issue exclusive to LDRs, though).
Here you chat with text generated by a company, whose accuracy has been widely clowned upon already, and I'm sure we're all aware of that here. Of course the LLM is always going to agree - why would a company's product actively try to drive away its customers?
Add to that the fact that all the personal information will obviously be harvested, used for training the LLM and other things… Detailed information about your daily life is handed to the "AI boyfriend", allowing a thorough recreation of your everyday life.
Bleh.
It may look innocent until the chatbot nags you about buying that very cool new product it's heard so much praise about. This is very dangerous and needs tons of regulation.
Or convinces you to kill your parents: https://www.cnn.com/2024/12/10/tech/character-ai-second-youth-safety-lawsuit/index.html
I don’t see it as good at all. It’s not a person and in my opinion it’s unhealthy to romantically love something that isn’t human.
It might feel good, but it’s likely not healthy.
Probably futile to discuss the health or ethics of it without first figuring out whether the people in the discussion share similar beliefs about the meaning/purpose of life.
Cuz if you’re talking to a nihilist who thinks it’s all shadows and dust at the end of the day, you’ll get a very different discussion than with someone who thinks family and procreation are the point of life.
I agree, I don’t think it’s likely to be helpful to mental health in the long run either, based on my totally unprofessional opinion.
I’ve argued about it with a friend who isn’t a tech person at all. She just says “yeah, it’s her problem” and doesn’t seem to grasp that my issue isn’t with her doing it as an individual - it’s with the fact that this is possible at all, and with the greater societal ramifications it’s likely to have.
I’ll make an AI boyfriend, too, and talk to him about it, that’ll show society!
Well, that’s just sad.
Also, does this person actually want a partner, with thoughts and opinions of their own, or something that fits their idea of an “ideal” partner, and will never disagree with them or challenge them?
Read the article. It’s an interesting one.
It just occurred to me that LLMs are good for two opposite kinds of people:

- The obvious ones: psychopaths, or people behaving like them, who think they’ll distort the concept of truth and that possessing such technologies will make their approach to society easier.

- People like me, who know that no random message written or picture drawn can be trusted anyway, so it’s better to flood humanity with fakes so that it learns this simple truth.
I think both are right to some extent. Still, it won’t work exactly the way either of them wants.
It’s like how the Bolsheviks, while fighting illiteracy, basically conditioned the first generation of literate people to think that everything officially printed is true - even that being officially printed is identical to being true - and that doubting it is religious darkness and ignorance. As in: blind belief is science and knowledge, and skepticism is darkness and ignorance. What could go wrong.
And then in Stalin’s years there were shortened evening education courses for workers, where they’d learn how to calculate things in some specialty, but without depth or context.
So you’d get a lot of engineers capable of really building and operating things, and believing they could build and operate even more complex things (like spaceships eventually, or some planet-wide railway system, or whatever), but not understanding the context, or even the philosophy of science. What’s worse, they’d think they understood it well, because they’d have had “scientific communism”, with its materialism and dialectics, in their education.
So, back to the subject - they got a lot of people to believe everything officially printed on paper, for a generation or even two. And those who didn’t would still indirectly absorb a lot of it from their parents or peers.
But eventually, even if the damage is already done, not believing everything even from a “respectable” source is now a good trait of many ex-Soviet people - easier to notice among them than among Americans.
EDIT:
About that woman - this works too. She’ll see that a chatbot can’t provide depth when she wants it. I just hope she won’t feel too bad in that moment.
-
If AI were sapient/sentient, I’d be 100% for this. Sapiosexuals assemble!
Given that LLMs are far, far from sapient/sentient at this point, however, this just makes me sad thinking about the sorry state of human interactions nowadays. I don’t and can’t blame her, though…
By Kashmir Hill - always well-written articles.
A tech bro’s wet dream come true.