Understood. I don’t think the question needs to be asked. Particularly when the failure mode is unnecessary harm to a kid.
Fuck, man. I know someone is going to accuse me of stanning for pedophiles, but here goes.
Why is “harming” imaginary kids unwanted? It’s only unwanted if it makes things worse for real humans in some way.
Again you’re assuming the conclusion. For all we know, maybe there would be fewer crimes against kids if pedophiles had this outlet. Maybe the failure is not allowing access in a controlled way.
I don’t know the answers to any of these things. Despite that, I’m very comfortable Brazil did the right thing here.
We have absolutely no idea what this shit does to the human brain (early indications suggest it isn’t great!), but sure, let’s use it for experimental therapy.
The experiment with social media has taught us that humans are not prepared for any of this shit, LLMs are explicitly designed to mimic humans and tear down the barrier between fantasy and reality, Brazil pumping the brakes on this is the smartest tech innovation I’ve heard of in months.
Okay. Related note.
Why is AI generated porn of real people bad?
Like I think it is bad, and lots of women get very upset by it, and that’s enough for me. But I’m not entirely sure what the mechanism is.
Scenario A. Student makes AI porn of a teacher and sends it around the school and it gets to her. This is obviously some version of sexual harassment, unwanted sexual behaviour.
Scenario B. Student makes up sex fantasies about a teacher. Writes them in his journal. They never see the light of day unless someone goes and looks for them.
Scenario B somehow seems like it’s fine? But does it therefore follow that if he made AI porn for his own viewing only, it’s somehow okay? I’m not sure it does.
Is it a probability and impact of harm thing?
I mean. I don’t disagree. But drawing the line at pedo bots and kiddy sex bots is a very very very light touch of the brakes.
I’m definitely not calling you out for being pro-pedophile in this. I’m disagreeing with your position on whether letting people molest imaginary kids is a good idea or not.
If you can come up with a way to do this that guarantees no harm to children, then that’s a different conversation. I’m just not seeing how that could be done, and since neither of us knows whether trying it will increase or decrease the harm to real children, I don’t think the juice is worth the squeeze in finding out.
tbh I’m leaning toward the idea that any AI representation of people is a bad idea. Like, what’s the actual socially beneficial use for this technology again?
I’m happy with using the creepy factor. Something you do in the privacy of your room that never gets out only becomes creepy when other people find out about it. So if it’s a personal journal that no one but you ever sees, it’s creepy but unknowable.
Sharing drawings of real people doing the porn is creepy AF because it allows other people to start asking questions that are definitely answered by your creepy porn drawings. It becomes super creepy if the subject of the drawings actually sees them.
AI facilitated porn of real people is creepy because other people are involved, namely the folks who run the site you’re using to generate it. And because you have no idea who that site is sharing it with in various galleries or discord boards.
If you built your own LLM on your own hardware to generate such images, you’ve got problems that are out of scope of this discussion…
I’ve been uploading photos of menus to ChatGPT while in France and it has mostly been amazing. Like it can OCR and translate some pretty curly handwriting on boards out the front. Tonight though I tried to use it to translate a pizza menu that was in three columns and something made it lose the plot. It hallucinated pizzas that were not on the menu. The worst thing is that I actually wanted to order one of its inventions but could not because it was not a real thing on the menu.
Virtually nothing that you type into the box should be a crime unless it attempts to harm actual people who actually exist.
Anybody that wants to have sexy time chat with a child is not going to be satisfied with chatting with just the bot. It would most likely make them seek out victims in the real world. These people are sick.
They have a very serious addiction they need to feed and what they really need is mental health services to combat that. The only way in which I would condone a bot like that would be to identify people with this problem and get them help. That’s how you keep children safe.
I know we’re kind of beating a dead horse here, but you’re really just assuming this to be true. Why would a pedophile be satisfied with no chat at all? Might that not make them even more likely to seek out actual kids?
No one really knows what the truth is (or maybe some researcher has the answer). I suspect the reality is complex. For some it may increase likelihood and for others it may decrease. On aggregate, who knows?
If SVU has taught us anything, it is that pedophiles don’t deescalate when they start looking at kiddie porn.
Well, it has also taught us that cops almost universally have extremely high ethical standards and are nearly always selflessly trying to do the right thing to make the world a better place. So, we might need to take those SVU lessons with a grain or two of salt.
I guess I’m just trying to understand the mindset of someone who wants to prey on a child. The point is to have a victim. I don’t see it any other way. I could be wrong, but the nature of the act is satisfying a need to victimize someone. A child. To steal their innocence. It’s not any kind of actual sexual attraction. At least that’s my read. Maybe, in some way, under the supervision of a trained professional, an AI bot could be useful for therapy. But umm, self medicating? I don’t know, seems risky.
It always hallucinates much more than you think, and especially on this stuff. And of course they are completely coherent hallucinations, because that’s the whole point.
For a while I was using it for whole page translations of a Tagalog book I was struggling with. It took me a while to realise it would hallucinate whole paragraphs and plot points.
Yeah. I think maybe the expectation of privacy vs harm is the key factor.
Producing something with a high probability that it will harass the person seems to be good enough to make it bad.
I know some places have made it a specific crime, which seems about right.
By the way, has anyone else had the epiphany that Melkerson is clearly like AI 1.0? An internet user who posts nonstop 24/7, yet is constantly confused by the most basic things, not only on the internet but also about human life fundamentals.
Someone put up that account back in the day to get feedback, and is just laughing themselves to sleep that us sheep thought it was a human.
That’s just Melk being Melk. This forum would be a little less interesting w/o him imo.
In before Gary Marcus says this isn’t a big deal because they’re just predicting the next behavior token, not “really” performing tasks.