Claude Thread - Politics (AI Welcome)

They’re just regurgitating previously observed behaviors; “true” dexterity can never be accomplished by linear algebra!

Actually, the paper is kind of bizarre. They don’t “just predict the next action.” Instead, they use a diffusion model that predicts a whole sequence of actions at once. Now that I think about it a bit more, I guess that makes sense, in that you need to envision the actions you plan to take later to select an appropriate action to take now. The obvious problem is that you need to adapt to errors and noise over time to keep your plan on track. It seems the way they work around this is to have the model produce a sequence of steps, execute the first few, throw out the rest, and then start from scratch to find the next actions to take.
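The replan-and-discard loop described above (often called receding-horizon execution) can be sketched roughly like this. Note that `sample_action_sequence` is a hypothetical stand-in for the diffusion sampler, not the paper’s actual API; it’s stubbed out here with a trivial policy just so the control loop is runnable.

```python
def sample_action_sequence(observation, horizon):
    """Placeholder for a diffusion policy: in the real system this would
    denoise a whole sequence of `horizon` actions conditioned on the
    current observation. Stubbed here for illustration."""
    return [observation + step for step in range(horizon)]

def receding_horizon_control(env_step, initial_obs, horizon=16,
                             execute=4, total_steps=12):
    """Predict `horizon` actions at once, execute only the first
    `execute` of them, discard the rest, and replan from the freshly
    observed state so accumulated errors don't derail the plan."""
    obs = initial_obs
    executed = []
    while len(executed) < total_steps:
        plan = sample_action_sequence(obs, horizon)  # whole sequence at once
        for action in plan[:execute]:                # commit to a short prefix
            obs = env_step(obs, action)              # observe the real outcome
            executed.append(action)
            if len(executed) >= total_steps:
                break
        # remaining actions in `plan` are thrown away; replan next iteration
    return executed
```

The key design point is that replanning from the latest observation is what absorbs noise and execution error, which is why executing only a prefix of each predicted sequence works.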

It’s a bit ironic, because the LeCun criticism of autoregressive LLMs is that they accumulate errors because they can’t backtrack if it subsequently becomes clear that some previous tokens were suboptimal. The AR paradigm is a bad fit for language (because erasers and the backspace key exist), but it fits perfectly with behavior, because you can’t undo the past.

A possible explanation is that there are some experiments suggesting diffusion is fundamentally better than AR, but it’s much more computationally expensive. However, if you have a limited data pool, which is true in robotics, you can throw a lot more compute at it without the model just memorizing the data.

I can remember a news story of a paroled pedophile going back to prison for writing fantasies about children in a journal but google is not helping me find it.

I know a lawyer in Melbourne and he had some sort of tangential involvement in a case where a guy on parole (or probation or something, I forget) for possession of child abuse material copped two years for reading sex stories involving minors. Not sure I think that should even be illegal. There’s an empirical question of whether it makes people more likely to offend, but other than that just seems like thought crime.

1 Like

i mean films like lolita exist, this seems weird to prosecute, but i understand the reasoning

Regardless of the empirics, in the post-gen AI world, it seems like you need to criminalize generated child pornography, otherwise it’s auto-reasonable doubt to claim that real CP was created by AI

4 Likes

That’s probably the best reason to do it.

3 Likes

I agree. I might accept treating it differently with someone who has previously been convicted of sex crimes. But if we start convicting people for what they fantasize about doing, I imagine most people would be in jail (with me) for murder.

2 Likes

I’m waiting for the Trump admin to latch onto pedophile crimes like they are now with anti-semitism, and Palantir to have everyone’s browsing history. Watched a vid with Traci Lords naked, ever*? That 70s Romeo and Juliet movie with Olivia Hussey at 15? Taxi Driver? Straight to jail. Sex offender for life.

*(Except for that R-rated movie and the one porno she did after she turned 18)

Imagine how many judges they could turn with that kind of blackmail.

Writer and director of the ‘Jeepers Creepers’ series had previously been convicted of child sexual abuse of an actor on one of his previous films. He has said that without the outlet of the ‘Jeepers Creepers’ films that he would have offended again.

It’s not what you fantasize about, it’s how you fantasize about it. Keep it in your head and everything is fine, start writing it down where it can be shared and it becomes a problem.

I’m not sure we should be taking the word of a convicted pedo as having much predictive value. Especially with a data sample of one.

Also. There was more than one Jeepers Creepers?

1 Like

Also. Ew.

The guy groomed the kid from the time he was seven. Got three years in prison and still had a post-conviction career.

Not enough cancel culture here

2 Likes

Ann Arbor during sorority rush is a pedo’s dream. I was horrified and so was my daughter, so job well done I guess.

I’m surprised the victim’s family didn’t sue him into the poorhouse.

This guy built an LLM based on London texts from 1800–1875. It was able to piece together a historical event it was not specifically trained on.

2 Likes

The current generation of large language models has hit a wall that’s become increasingly obvious to anyone working with them daily. They’re impressive pattern matchers and text generators, but they remain fundamentally limited by their inability to maintain coherent context across sessions, their lack of persistent memory, and their stochastic nature that makes them unreliable for complex multi-step reasoning.

The human brain isn’t a single neural net—it’s a collection of specialized systems working in concert: memory formation, context management, logical reasoning, spatial navigation, language processing. Each system has evolved specific purposes, and they operate asynchronously with complex feedback loops between them.

2 Likes

Seems interesting.

A few related thoughts. The machine learning models outperform engineered models for language learning etc.

All of those systems and the interfaces are complex neural nets in humans. The idea that you can engineer these seems to be a big assumption.

Similarly. He seems to think we can build all of those with existing systems and technology. Which again is a big assumption.

The moderators of this message board, otatop, L.Washington, WichitaDM, Yuv, JonnyA, RiskyFlush, and SvenO, are cowards who let abusers dox and harass other long-time posters.