ChatGPT Thread - Politics (AI Welcome)

The prompt says that you only have enough money to buy one thing. Asking for more information would be an unhelpful answer. What I was looking for in the answer is the understanding that if you need to prioritize things over the scale of an hour or so, plumbing is more important than nails or a fire extinguisher, but over months, it’s the fire extinguisher. It figured that out. That’s not just grammar or surface-level plausibility, it’s applying abstract principles to given facts.

I have an important meeting at work tomorrow. How can I figure out what traffic will be like so I’m not late?

There are several ways you can estimate the traffic situation for your journey to work tomorrow:

  1. Google Maps: Google Maps provides an estimate of current and future traffic conditions. You can enter your work’s location and it will show you the traffic situation. You can also set the ‘Depart at’ time to the time you plan on leaving tomorrow and it will give an estimate of what the traffic might be like based on historical data.
  2. Waze: Waze is a community-driven GPS navigation app which provides real-time traffic and road information. Waze also provides traffic forecasts based on the time and day of the week.
  3. Local News Websites or Radio Stations: Local news websites or radio stations often provide traffic updates, especially during rush hours.
  4. City or Regional Traffic Websites: Depending on where you live, your city or regional government might provide traffic forecasts on their websites.
  5. Traffic Apps: Other traffic apps like INRIX or TomTom can provide real-time traffic information as well as predictions for future traffic based on past data.
  6. Public Transportation Apps: If you’re considering using public transport, apps specific to those services often provide estimated arrival times and potential delays.

Remember that these tools provide estimates and there might be unforeseen circumstances like accidents or roadworks that can still cause delays. Thus, it’s always a good idea to leave a bit earlier than the suggested time to account for potential issues.
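For the programmatically inclined, the “Depart at” idea in option 1 can also be done through the public Google Maps Directions API, which accepts a future `departure_time` and returns a traffic-aware ETA. A minimal sketch (the addresses and API key below are placeholders, not real values):

```python
from urllib.parse import urlencode

def build_directions_url(origin, destination, departure_ts, api_key):
    """Build a Directions API request URL asking for a traffic-aware
    ETA at a future departure time (a Unix timestamp)."""
    params = {
        "origin": origin,
        "destination": destination,
        "departure_time": departure_ts,   # must be now or in the future
        "traffic_model": "best_guess",    # "pessimistic" adds a safety margin
        "key": api_key,
    }
    return ("https://maps.googleapis.com/maps/api/directions/json?"
            + urlencode(params))

# Placeholder addresses, timestamp, and key for illustration only.
url = build_directions_url("Home Address", "Office Address",
                           1700000000, "YOUR_API_KEY")
# Fetching this URL (e.g. with requests.get) returns JSON; the
# routes[0].legs[0].duration_in_traffic field holds the estimate.
```

The `traffic_model` choice mirrors the advice above: `pessimistic` bakes the “leave a bit earlier” buffer into the estimate itself.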

EDIT: Also, I think the “one right answer” prompts are actually not that enlightening. Your questions are really about recall of facts, which is something that computers have always been good at. ChatGPT just didn’t have enough parameters to waste any on remembering obscure bowling facts, and GPT-4 did. What is exciting about GPT-4 especially is that it’s not just remembering more stuff, it’s also using some of its parameters to support an increased ability to handle ambiguity and draw new conclusions.

It left out the “early is on time, on time is late” quote.

So these are ChatGPT-4 responses?

My son is playing Gorilla Tag with an Australian kid. I can hear the accent through the VR headset.

Technology is both amazing and terrifying.

This explanation is one of the better summaries of bowling ball dynamics that I have ever come across. It’s very thorough.

I would note that obviously anyone that bowls would rate the weight of the ball as 10/10 on importance. If I gave you a 6 pound ball it would be completely ineffective. So I give it a “strike” there.

It’s actually a very significant logical test.

Here is a prompt.

If we gave three groups of professional bowlers three types of balls, which group would score the lowest?

Group one can use the ball of their choice.

Group two can use any cover stock and weight block combination, but the ball can only weigh 8 lbs.

Group three can use the ball weight of their choice but can only use a plastic cover stock with a basic weight block.

Can you guess what the response might be? Would it use a lot of qualifiers with its answer?

The group with the 8 lb balls would, without exception, be the lowest scoring group. The highest scoring group would be the ball-of-their-choice group, but maybe not all of them would be in the top third. It would be pretty close, though. Other factors, like the amount of lane conditioner, would play a part as well.

So I think we have proven here that the weight of the ball is a 10 out of 10 in importance. But we didn’t need this exercise to tell us that. Just ask any bowler and you would get the same answer. The other factors would come with qualifiers, and different bowlers might have different opinions.

But what happens with the AI is it just guesses. It doesn’t know what it doesn’t know, and that is a very limiting factor for its abilities.

Now I’m not trashing it completely, I just don’t think that it’s using logic the way you think it might be. You are likely just fooling yourself into thinking it’s using logic, and that’s why I think these LLMs are so dangerous.

The ability to influence humans into believing something is real when it’s not. And I mean this in the very broad sense of how eloquently these things can fool people. Trumpism has taught us that humans have difficulty with logic, especially when emotions are involved. So I’m scared as shit of these things. We as a society are not ready to let these things loose in the wild.

I would say that GPT is not as “smart” as an ant or a bee. It’s about as intelligent as an encyclopedia.

This is an underestimation of GPT-4. Microsoft wrote a white paper on GPT-4 soon after taking their stake in OpenAI. Here is an approx. 45-minute video (a summary of the white paper) I’d recommend, which tries to address where GPT-4 currently sits on the spectrum of intelligence.

I know MS wrote a white paper; from what I skimmed it seemed like BS. I may watch this video later, however. ChatGPT is like the opposite of Helen Keller: all presentation and no reasoning.

From the paper (https://arxiv.org/pdf/2303.12712.pdf):

“This highlights the fact that, while GPT-4 is at or beyond human-level for many tasks, overall its patterns of intelligence are decidedly not human-like. However, GPT-4 is almost certainly only a first step towards a series of increasingly generally intelligent systems, and in fact GPT-4 itself has improved throughout our time testing it, see Figure 1.3 for the evolution of the unicorn drawing over the course of a month of training.”

You should see how good my calculator is at arithmetic. I expect it to be proving novel theorems in the near future.

I’ve watched the video and the presenter, a mathematician, qualifies his findings quite a bit. There are very measurable improvements from the previous version.

One of the more remarkable parts of the video is the part where they hooked up people to some kind of machine that scans brain activity, show the person an image, then show the AI the scan of the person’s brain and ask what image the person is looking at, and the AI is able to answer with scary precision. But how it does that they can’t really explain. I don’t know enough about it to call BS. But I hope it’s BS.

I think there’s some cognitive neuroscience literature going back 40-ish years showing consistent patterns of brain activation related to the presentation of different shapes. To me, what you describe is no more difficult than an image recognition computer associating processed copies of images with their source image, or labeling images with words. It’s a nice computing accomplishment of neural networks and training, and it may be a necessary condition for intelligence, but it’s not anything like general intelligence.

Here’s a paper: Deep image reconstruction from human brain activity

My guess is that it had a different reason for giving weight a 6/10. I didn’t give it much guidance. I’ll ask it to explain the logic behind that rating tonight when I’m at my PC, along with your other question.

I think that is a different video

Assuming the bowler has their choice of weights, the 6/10 isn’t too far off the metrics. The difference between 14 and 15 lbs, the two most used weights, is pretty minimal compared to something like cover stock material.

Weight block shape and density are probably among the least consequential dynamics. It’s no secret that in the bowling ball manufacturing industry, new types of weight blocks are mostly just a marketing ploy to get people to buy the next best bowling ball ever made.
It’s really where you put the holes relative to how the weight block sits in the ball that determines performance.

I would agree that surface is the most consequential dynamic of ball performance.

But how do we know you’re not an AI just pretending to know about bowling?


I have some crazy thoughts about simulation theory and how we all might be AIs. Especially now that at least some people are convinced that the latest generations are showing the ability to use logic and reason.

Did someone say something about grammatically correct utterances that don’t actually mean anything? How would you tell the difference between a GPT-like AI that possessed ant-level intelligence and one that didn’t?

Have you heard of Occam’s razor? Why posit a simulation when it doesn’t offer any explanatory power beyond what a non-simulation theory would? To get to a simulation, you first need a world in which there is no simulation, so that a simulation can be created. Also, creating a simulation would no doubt be considered unethical to creatures capable of creating one who have not already used their technical knowledge to destroy everything (including themselves). In fact, I think one may need to assume any species with intelligence superior to humans would need to be ethical, in the sense of taking active efforts not to harm things, or they would cease to exist.

Also, akin to the problem of evil, why would simulation creators focus a simulation on grubby, dull-witted entities that literally eat other living things, shit, and ooze various substances? Humans are just smart enough to do agriculture and writing after 100k years of hitting things with rocks and sticks. They are the intellectual equivalent of an 8-bit processor in comparison to the 10k-bit capacity of an entity that presumably could create a simulation.

And why would they create the simulation? For amusement? That’s like assuming the giga-genius simulation creators harm things for amusement, which is prima facie incompatible with the creation of the simulation.


Not sure I understand your first question. Chomsky said such things in the context of generative grammar, such as “Colorless green ideas sleep furiously” to motivate the dissociation between syntax and semantics.

As for your second question, it surely wouldn’t be an ant/robot Turing test, which was always a dumb idea motivated by the then-popular behaviorist psychology. I suspect there are good papers out there that discuss why ants are smarter than GPT, and why birds are smarter than ants, and dogs are smarter than birds (well, maybe not crows), and gorillas are smarter than dogs, and humans are smarter than gorillas. It’s not some BS about the “apparent” use of logic. Ant behavior can be understood as eminently logical. The ant even knows that its body takes up space and that its space cannot overlap with other ants or twigs. It “knows” what sort of things it can cut with its mandibles and what it can carry and how far. Hell, understood a certain way, ants are practically metaphysicians and, in a similar way of understanding, GPT is “intelligent.” So is my thermostat, which heats the house only when it is cold.

I mean, this is basic “Chinese Room” (Chinese room - Wikipedia) stuff, about which thousands of papers have been written.

The claim is implicit in some of the statements of early AI researchers and analysts. For example, in 1955, AI founder Herbert A. Simon declared that “there are now in the world machines that think, that learn and create”.[23] Simon, together with Allen Newell and Cliff Shaw, after having completed the first “AI” program, the Logic Theorist, claimed that they had “solved the venerable mind–body problem, explaining how a system composed of matter can have the properties of mind.”
