ChatGPT Thread - Politics (AI Welcome)

My impression is that he thinks we are far from artificial general intelligence and that what we have currently is way overhyped.

Not to say that the last year's developments aren't significant, but from my limited experience and understanding, I'm inclined to agree with Marcus.

There is a personal element to the arguments on Twitter. Not sure which side is more responsible for that.

This can’t be true, tech companies NEVER over promise and under deliver.

1 Like

I wouldn't really know, but the situation seems like a prime opportunity to use FOMO to pry out extra funding. What's $7T if you're looking at potentially the biggest investment opportunity since the Big Bang?

Isn’t this the anti-Marcus position? NNs can indeed learn standard logic (or any other function) without hard-coding them. This is the whole deep learning revolution: you can forget about all the “traditional” methods and just stack more layers and gather more data and trust the network to learn whatever functions it needs.
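The "NNs can learn logic without hard-coding it" claim is easy to demonstrate in miniature. Here's a minimal sketch (my own illustration, not from the thread): a tiny two-layer network trained by plain gradient descent learns XOR, a function no single linear layer can represent, with no logic rules coded in. The layer sizes, learning rate, and iteration count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR truth table: inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized 2 -> 8 -> 1 network.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)       # hidden layer
    return h, sigmoid(h @ W2 + b2)  # output layer

_, out0 = forward(X)
initial_loss = float(np.mean((out0 - y) ** 2))

lr = 1.0
for _ in range(10000):
    h, out = forward(X)
    # Backpropagate the mean-squared-error gradient by hand.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

_, out = forward(X)
final_loss = float(np.mean((out - y) ** 2))
print(np.round(out.ravel(), 2), final_loss)
```

Nothing about AND, OR, or XOR is built in; the network just fits the four input/output pairs, which is the "stack layers and trust the network" story in its smallest form.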

The complex problem remaining is that the above only works for prediction. To have behavior, you need to have some sort of decision policy, either as a separate network or as a fine tuning of your predictor. The answer here is, again, basically to stack some layers together and teach the AI what to do by example, but it’s much harder to teach AIs to learn behavior than to learn prediction, so this is currently the secret sauce. People think the opposite for some reason, but introducing traditional algorithms into the process at this stage is vastly more likely to lead to catastrophe.

ETA: "Explicability" as a goal of AI research is another example of the common phenomenon where people forget everything they know about humans in a misguided attempt to avoid "anthropomorphizing" extremely complex neural networks. But there is actually an existing medical term for a person whose behavior is clearly explicable based on the information they have available to them and their goals. The term is sociopathy. Any capable system that has anything like a traditional optimizing computer program driving its decision making is terrifying. The amazingly optimistic lesson of GPT-4 is that you can throw enough data at it that you can just ask it "what would a benevolent AI that valued humans do in this situation?" and it will just…tell you. It won't search for the tricky-genie interpretation of "value" that lets it freeze all humans in carbonite and store them in a safe place, it will just pick the tokens that it thinks the humans who trained it will approve of! Unbelievable good fortune, no one would have believed it 10 years ago.

2 Likes

I'm not deep enough in the weeds of current systems, but I regard them as doing much more curve fitting than inference. I think you need concepts and categories to do proper inference, and I suspect (but am not sure) that current systems lack things like that. Dogs and cats are just thrown in with earth and sky, one smooth landscape. Maybe if this proverbial smooth representation system (if it even exists) could be chunked into some form of natural-language-like encoding, it could sharpen the distinction between representations.

I'd like to see systems with 1000x less compute power but with some kind of "organic" representational and inference system that's fairly robust, with the "concepts" of Not, And, and Or.

I think GPT-4 systems are impressive, but I think there need to be two or more big jumps to get AI that I would regard as something close to rat intelligence.

Holy shit

1 Like

Sabine tells us what AI researchers think.

So funny that in the 5 hours since she released her video about how AI is progressing unexpectedly fast, they dropped this:

The videos are interesting and impressive and scary too. But I have no idea how much progress they represent. Like, are they really original? Maybe I just have to hope AI comes for Marcus before me.

https://x.com/GaryMarcus/status/1758202932569473034?s=20

3 Likes

Yesterday the demos on this page were state of the art. Technically it's mind-blowing.

Hard agree with this. I believe that having the world in our heads is everything. Text generators are parasitic on the world in our heads, as is human communication.

https://twitter.com/GaryMarcus/status/1758296490689307019?t=BN9SUM0BwIt0r9XMzJw8Lg&s=19

For anyone who missed the joke, Marcus helpfully retweeted this:

https://x.com/bcmerchant/status/1758292728151257269?s=46&t=9xanL2tZoKj22erGoTuL4A

Like the One Ring, copyright law ever serves the will of its true master, The Walt Disney Company. The good old fashioned AI is when you sell your startup to Uber so they can drive a bunch of cabbies out of business. The bad new-fangled AI is a $20/month service that lets people create digital art that unfairly competes with Pixar and the MCU.

But I still can’t get an accurate cartoon map of Mexico and Central America.

1 Like

No no, we can't have a creator that just passively absorbs a bunch of influences and then regurgitates a combination of them into the world with a thin patina of freshness. You'd certainly never catch any human creator doing that!

I understand the concerns about artistic integrity in a way, because art is supposed to be an experience of communication between human beings. But it's that communication which matters, not the content itself. Frankly, I hope this leads to the general devaluation of digital art as a medium and people embracing live performance.

1 Like

What would Roland Barthes say?

I can get you a video of him saying whatever I want.

3 Likes

This reminds me of the argument about SpaceX innovation. We don’t agree about what innovation is.

Maybe an analogy would help. If you’re a high jumper and set a world record, that’s great and notable and you get fame and maybe fortune. Ok, but it’s probably not going to change how people jump. On the other hand, if you invent* the Fosbury flop, that’s a different category of achievement and more what I would consider a real innovation.

You seem to think the new videos are the second kind of thing. I don't feel like I know enough to say for sure, but I lean the other way. AFAICT there isn't a consensus among the experts.

*Fosbury didn’t invent the technique but he exploited it and showed it was the way to go in the modern event.

On Thursday, OpenAI announced Sora, a text-to-video AI model that can generate 60-second-long photorealistic HD video from written descriptions. While it’s only a research preview that we have not tested, it reportedly creates synthetic video (but not audio yet) at a fidelity and consistency greater than any text-to-video model available at the moment. It’s also freaking people out.

Wait, Marcus thinks it’s a bad idea to ship Sam Altman $7 tril so he can hoover up all the world’s data and then sell it back to us???

controversial

IDK, I’m not understanding all the nuances. It sure seems like somebody’s going to end up a lot more wealthy. And if Disney is a problem, well, $7T can buy them ~35x over.