Looking forward to the Mindscape episode.
Plan to watch this new vid from a “name” in ML later:
Ha, I have that saved to “Watch Later”.
Man, fuck Duolingo, AI thirstbots are going to take my Spanish language studies to a whole new level.
I guess you’re referring to this
I guess this isn’t AI related, really. Not yet.
As with anywhere else, unsolicited outreach on an app that isn’t intended for romance can be creepy. But sometimes it can result in a love match.
¿Cómo se dice “thirst trap”?
I dunno. I got this dubious response from Bard:
The slang term “thirst trap” can be translated to Spanish as “cebo de seducción” or “atractivo sexual”. These terms both capture the idea of something that is intended to arouse sexual attention or desire.
Here are some examples of how these terms can be used in a sentence:
“Esa foto es un cebo de seducción total.” (That photo is a total thirst trap.)
“Está usando su atractivo sexual para llamar la atención.” (He’s using his thirst trap to get attention.)
I hope this is helpful!
I must not have reached this level in Duolingo, but I could picture Eddy saying this.
Elbows are not too pointy, but I guarantee her fingers are all jacked up.
Man, I have no idea what in the seven hells this is
Stable Diffusion just introduced the ability to animate any still image.
I know that if you get 50 quests on Duolingo this month, you get a free trial of some shit on chess.com.
I don’t know what the gratuitous ass pic is for though.
I know which image you should start with
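If anyone wants to try the animation thing themselves, the image-to-video step looks roughly like this. This is a minimal sketch assuming the Hugging Face diffusers release of Stable Video Diffusion; the model ID, resolution, and fps are just the documented defaults, so adjust to taste:

    import torch
    from diffusers import StableVideoDiffusionPipeline
    from diffusers.utils import load_image, export_to_video

    # Load the image-to-video pipeline (needs a GPU; fp16 keeps memory use down).
    pipe = StableVideoDiffusionPipeline.from_pretrained(
        "stabilityai/stable-video-diffusion-img2vid-xt",
        torch_dtype=torch.float16,
        variant="fp16",
    )
    pipe.to("cuda")

    # Start from any still image; the model expects roughly 1024x576.
    image = load_image("still.png").resize((1024, 576))

    # Generate a short clip conditioned on the still and write it out as an mp4.
    frames = pipe(image, decode_chunk_size=8, generator=torch.manual_seed(42)).frames[0]
    export_to_video(frames, "animated.mp4", fps=7)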
Cheers. I listened to this new episode of Mindscape this morning.
I’m gradually coming around to the idea that what you’re describing here is fundamentally impossible, or at least prohibitively intractable; that deduction is basically not a thing. What we understand as deduction is basically the same as generative AI: a kind of out-of-bounds induction. More precisely, when a person or a model learns, what they do is organize a sort of high-dimensional concept space that can be indexed by arbitrary ideas, such that the concept space maps well onto their inductive experience. When you search concept space for “will the sun rise tomorrow?” it tells you yes. When you search for “if the earth orbited a big lightbulb the mass of the sun, would it rise tomorrow?” it still tells you yes. The process is the same in both cases; the fact that the sun actually exists and the big lightbulb does not is irrelevant.
Obviously formal deductive reasoning exists, but there are too many chains of logic to find the interesting conclusions from deduction alone. All interesting knowledge comes from the process of experience → inductive organization of concept space → indexing concept space for information you’ve never experienced. The 500 IQ AI that can derive quantum mechanics from seeing white light dispersed by a prism can’t exist.
Let’s set physics aside and just look at pure math. Will AI solve one of the open Millennium Prize Problems (or another problem of similar difficulty)?
I like that yours is an original view, which is rare, but I pretty strongly disagree. One thing that motivated my example is that one can view relativity as literally just conceptually thinking through what happens to time and space if you hold the speed of light constant. Relativity was likely an “obvious” result once people understood that the speed of light was finite and constant. If people were 500-IQ smart, someone (many people, because it’s “simple”) would long ago have created a physics for every permutation of the speed of light (infinite; finite and fixed; variable), determined their implications, compared them to reality (even if the “experiment” was simply observing the sunset), and incorporated relativity into physics.
On the relationship between induction and deduction: that’s a longstanding distinction that has been put to many uses for many reasons. What motivated my claim is, as in the example above, that many advances in science, math, philosophy (and economics, etc.) arise simply from very smart people examining the relationships between existing ideas, usually by firming up vaguer ideas so they can be tested, either against reality or by assessing their implications with respect to other, often more fundamental, ideas. If every person were Einstein, Newton, Aristotle, etc., there would be tons of progress in science without any more funding. This is what AI should give us, though it would be much, much smarter than those mere humans.
One of the reasons the Sean Carroll episode is useful is that he emphasizes that GPT-type AI has no connection to the world or to experience, so we shouldn’t expect it to say anything interesting about the world. That is simply not how it is made; optimistic assessments of its capabilities are just the sort of anthropomorphism we apply to everything (per Daniel Dennett’s “intentional stance”).
Carroll gave me some context and an orientation beyond the technical aspects of AI that the GPT experts just don’t seem able to provide. There were some things that were easy to nitpick, such as his examples meant to demonstrate that ChatGPT doesn’t build a model of the world. The prime number example was slightly flawed, which he noted. The chess example was also bad: if I understood the description of the toroidal board, the game would start in a position not allowed by the ordinary rules. Nevertheless, I think he makes his point well enough.
I haven’t listened to Mindscape before, and I really wanted to like this, but I was not impressed. It felt like he had his mind made up and was forcing the examples.
I recreated his tests.
No issues with the pizza question.
No issues with the integers question.
Does that mean GPT-4 is modeling the world?
I tried his chess question but I didn’t fully understand the point of his example. I read his transcript and he talks about a king capturing a king? The game starts in checkmate? Not much of a game. I think that GPT understood but the question was so nonsensical that it overlooked the answer Sean was looking for. If you probe GPT on this one, it will acknowledge the possibility that Sean points out but claim the rules of such a game shouldn’t allow immediate checkmate.
On the Sleeping Beauty question, his issue was that, when he paraphrased the question, GPT answered it correctly but didn’t directly tell Sean that it was a variant of the Sleeping Beauty paradox? Really reaching.