ChatGPT Thread - Politics (AI Welcome)

I think hard-headed physicists consider that kind of thing mental masturbation, which is fun, and they do plenty of it, but it’s better if it’s connected to something in the real world. For that, you need experiments. Observing water in a glass and measuring shadows are experiments.

I’m not convinced that an AI would be much better at physics than we are. Ofc that could be because I’m human, and not one of the smarter ones.

With Altman, I’d take him seriously if he’s talking about GPT, not seriously if he’s saying stuff like that.

My point is basically that there are billions of experiments going on every second, ie, the unfolding of reality, and that information would presumably be represented in real AI. If it were very, very smart, it could tell us what this means for understanding the world.

Basically, the picture is this: humans–>AI–>God. God does not need experiments, because he knows everything and how it all fits together. Humans need experiments because we basically take baby steps, because (sorry Aristotle, Leibniz, Hegel, et al.) we can’t work out the nature of reality just by cogitating with our feeble brains. Roughly, we try to isolate and manipulate individual variables and see how they change when we alter circumstances. Genuine AI, of the type envisioned by a singularity, would certainly not be god, but it would be a lot further away from a genius human than a genius human is from, say, Donald Trump, probably further away from a genius human than a genius human is from a cat.

People don’t take the notion of general AI seriously. If we had general AI, along the lines of a singularity, it wouldn’t threaten jobs or whatever, it would completely rework human existence. It may also render humans extinct, but that’s the price of progress I guess.


Just saw some disturbing sexual abuse accusations that are apparently not new, so I don’t know if they would have led to this, but if the story goes that way, I’m out.

This is delightfully insane


Maybe. I mean this is just idle speculation on my part and I can’t say that an AI wouldn’t seem magical to us in the way it understood the world, but it seems to me what you’re describing is something like Laplace’s demon.

If such an AI existed, it would have to have complete knowledge of the state of the universe. If it doesn’t start with that information, it can get part of the way there by training. But that’s just the results of experiments we’ve done. To go beyond that requires more experiments (of the ordinary kind). Assuming it cares about the real world, AI could do its own experiments or suggest them to us, but it can’t avoid them. Afaik we don’t have good reason to expect it can do this process better than we can.

Edit: by process, I mean guess a theory ↔ experiment.

OMG that is fantastic! :transmet_smiley: How can we bring gambling into this? :vince2:

edit: lol just noticed the “bet” buttons… :person_facepalming:

https://twitter.com/gdb/status/1725667410387378559


Oh, OK.


I’m just describing a system that can hold, say, 20 distinct ideas in short term memory and ascertain their relationship like 7 levels deep for 8 hours without distraction. It need only have a human-level interaction with the world. I suspect a lot of “advanced” knowledge is fairly low-hanging even if we only had twice the brain power we currently have.

Take something like Schrödinger’s What Is Life?: he tried to understand what hereditary material must be like, given what was generally known, a few decades before DNA. He was largely wrong, but there are probably 1000 problems that could benefit from that kind of analysis based only on known information.

Like give it what we generally know and tell it to design a fusion reactor. If it can’t do that it’s either dumb AI or fake AI (ie, GPT).


https://x.com/karaswisher/status/1725678074333635028?s=46&t=9xanL2tZoKj22erGoTuL4A

The board statement is a bit sharply worded for this to be the explanation imo.

Who will scoop these guys up? Won’t be Microsoft most likely, but there are a lot of potential players, plus they could do something on their own, contracts notwithstanding.

I tried googling just to see where they could hypothetically end up, and it’s crazy just how many potential players are in this space. There are the big players like Google, Amazon, IBM, etc., and there are also tons of smaller companies and startups. There are just so many of them trying to get a piece of the pie.

Ya, those contracts and potential ownership stakes could affect things quite a bit as far as I can tell. Like if these guys have ownership stakes and get bought out by Microsoft, they could have actual legit non-compete clauses that are enforceable.

Still, while noncompetes can be enforceable with the sale of a business, Calif. has the most liberal anti-noncompete laws in the country (and those laws apply even to contracts purporting to apply non-California law). In any event, this likely settles one way or another, eventually.

Could see some elite trolling opportunities (I mean, good-natured fun) with this type of bot.

https://twitter.com/charliebholtz/status/1724815159590293764


Cheers, reminds me of a recent Sean Carroll Mindscape episode discussing AI, where the guest says explanatory knowledge is a unique feature of intelligence.

I think we discussed it earlier, and you thought some of the stuff he said was nonsense and some was probably right.

I am a Sean Carroll fanboy and agree with about 90% of his takes. He’s basically a good example of how smart a “normal” human can be. He isn’t some kind of super genius, but he’s a genius by any measure, and if he is wrong about something it’s at least indicative of the standard limits of human intelligence. (I am dubious about his “many worlds” fanboydom, though I am in no position to judge, and I think Bayesianism is lame.) Something I really like about Carroll is that he strongly identifies as a philosophical pragmatist and is very open-minded for someone with his level of accomplishment. I also basically agree with his “poetic naturalism” worldview.

That said, I’m not sure you mentioned the Mindscape episode. (Though I’ve probably listened to all his episodes on AI.) That prior post related to David Deutsch’s comments about brains being Turing complete, which I regard as fairly trivial (it may be necessary but it’s not explanatory of anything), and his comment that he thinks explanation is essential for AI, which I agree with. I take prediction/inference and explanation to be the core of rational thought and science, which is a pretty standard position in philosophy of science for the last 60 years or so. (Also, thanks partly to Penrose, I take comments about mental phenomena from physicists with a huge grain of salt, Carroll excluded because he has a strong background in philosophy.)

I think if we “solved” representation and explanation, that would go a long way toward solving real AI. A problem is that when we talk about representation we use words to describe the thing represented, and words are generally deeply vague and contextual. (A point best driven home by Lakoff’s Women, Fire, and Dangerous Things and Putnam’s “The Meaning of ‘Meaning’”.) I wish I were up to date on philosophy of language/meaning, as I suspect some of the theories could be integrated into current efforts at AI, and I know some people are working on that, but they are generally not the GPT crowd that gets all the press. I think explanation is also generally sublinguistic, as can be seen in “aha” problem solving with real-world manual tasks. Nothing puts me on tilt faster than hearing about some natural language inference system. I would think that if we could simulate some bit of representation and explanation in NN software models, and maybe we unknowingly already are, that would move the AI ball further, and that’s what I’d like to hear more about.


There is a commercial I am getting spammed on YouTube that says we are all entitled to like $6850. They use Joe Biden’s voice to pitch it and then follow up with Joe Rogan.

I am shocked YouTube lets this run, but then again I am not.

Microsoft apparently pissed about the firing and resignations.

https://x.com/emilychangtv/status/1726025717077688662?s=61&t=CwVKdl7e5GoYqphDmQHrPg

Absolutely insane if he comes back. A bit blasphemous if he returns on a Sunday honestly.

Where was “jk, takesy bakesy” on the betting markets?

That’s cool. I’d put it this way: I like that Sean exercises as much humility as rigor. He aspires to the limits of knowledge while also acknowledging whatever limitations there are on our ability to know anything.

I’m not sure if there is a better way to phrase this, but do you have a sense for where theories of consciousness are regarding Dissociative Identity Disorder? I am thinking of this paper in particular.

They specifically say

Budson and his coauthors consider a number of neurologic, psychiatric, and developmental disorders to be disorders of consciousness including Alzheimer’s disease and other dementias, delirium, migraine, schizophrenia, dissociative identity disorder, certain types of autism, and more.

This is from the primary thesis

“In a nutshell, our theory is that consciousness developed as a memory system that is used by our unconscious brain to help us flexibly and creatively imagine the future and plan accordingly,” explained corresponding author Andrew Budson, MD, professor of neurology. “What is completely new about this theory is that it suggests we don’t perceive the world, make decisions, or perform actions directly. Instead, we do all these things unconsciously and then—about half a second later—consciously remember doing them.”

Their research suggests to me that the key to building a sentient mind is to build the subconscious that the conscious mind reflects. So the logic the subconscious follows may be more like dream or Inception logic, but the language used to express these intuitively networked subconscious experiences is limited to a predetermined symbol encyclopedia and language dictionary.

The forced interpretation gets at my experience with DID, or just with people attempting to explain their awareness of things experienced in a manner that doesn’t fit conscious or social engagement, or even ordinary table talk conducted by debate rules. Which is to say, consciousness in DID, as in individual identity, involves a fair amount of retrofitted confabulation to interpret what’s happening under the hood. For instance, in DID each identity has a fully formed history of what happened in their internal world while they were away from the conscious mind, even though it is not possible for the single brain the identities share to have sustained full consciousness for all of the identities who were awake.

I am only speaking from my experience with DID and understand others may explain it differently.