So... What's the real impact of ChatGPT?

Folks.

One of the best things about this forum is reaching early understanding of the realities we face globally.

As a group, we were ahead on:

  • Covid
  • The reality of Trump
  • America’s inevitable slide into fascism

Any I missed?

I’ve been spending time with ChatGPT (primarily around language learning) and I’m starting to get a feel for how powerful it is.

I’m keen to really draw out the “so what”. What does this mean?

My own view is that ChatGPT has basically solved the general intelligence problem. It’s solved language and logic.

The progression from ChatGPT to full-scale, thinking AI is not only now inevitable, it’s probably much sooner than everyone thinks. The hard work is done.

Once you have language and logic, it’s just a matter of applying and combining them in clever ways, and getting ChatGPT to comment on and advise its own thinking. I.e. the future general AI is just a few hundred (or thousand, or hundred thousand) ChatGPTs talking to each other.
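To make that concrete, here’s a minimal sketch of the “ChatGPTs talking to each other” loop: one instance drafts, a second critiques, and the draft gets revised until the critic is happy. `ask_llm` is a hypothetical stand-in for whatever chat API you actually have access to, not a real library call.

```python
def ask_llm(prompt: str) -> str:
    # Hypothetical stub: plug in your chat-completion API of choice here.
    raise NotImplementedError

def solve_with_self_critique(task: str, max_rounds: int = 3) -> str:
    draft = ask_llm(f"Solve the following task:\n{task}")
    for _ in range(max_rounds):
        critique = ask_llm(
            f"Task:\n{task}\n\nProposed answer:\n{draft}\n\n"
            "List any errors or gaps. Reply 'OK' if the answer is sound."
        )
        if critique.strip().upper().startswith("OK"):
            break  # the critic instance is satisfied
        draft = ask_llm(
            f"Task:\n{task}\n\nPrevious answer:\n{draft}\n\n"
            f"Critique:\n{critique}\n\nWrite an improved answer."
        )
    return draft
```

Whether stacking enough of these loops gets you to general intelligence is exactly the open question.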

So. What?

Obviously the market has just thrown billions at anything with AI in the name.

But what does this mean for markets? For politics? For us?

What would you change if you knew with absolute certainty that there would be sci-fi level thinking AI fully operational in 3 years time?

Of course, whether you agree with my thesis is an interesting question, and I expect a lot of the conversation to rightly go there.

I do however ask that you also engage with the question of what it would mean if I were right. Because THAT is fascinating.

4 Likes

The gap between haves and have-nots increases as rich people have high-level AIs that can effectively function as personal assistants and poor people have either weak AIs or none at all to help them in life. Politics involve elites making sure they maintain this AI advantage. This probably includes limiting government ability to use AI to police corporations. Markets involve using AI to exploit people without good AIs.

Basically, if you can think of a way for poor people to be fucked by AI, rich people are going to make sure they get fucked.

1 Like

It’s radically wrong to say that ChatGPT or any current AI has solved either language or logic. I see its main effect as allowing for plagiarism, with the efficiencies that brings, without direct copying by the user.

If you want to know more read some Gary Marcus https://garymarcus.substack.com/ or Melanie Mitchell https://melaniemitchell.me/.

2 Likes

No way the leading AI model will be cheaper than a premium Netflix subscription. The rich people will never let it happen.

The current state of the art is not quite suitable for fully automated use cases, which makes it less disruptive. The best uses currently are making a human more efficient.

Hardest hit industry is probably any form of graphic design/illustration/digital art. The best designers will get a million times more efficient and there’s already a big class of marginally viable participants who will be completely devastated.

Systemically, expect high interest rates indefinitely, as R&D, data center construction, semiconductor fabs, and AI integration soak up as much capital as anyone can offer. House prices stay high forever, but little new construction, leading to semi-permanent housing crisis.

Dystopian scenarios are very real. “Look at this person’s social media and decide if they’re a dissident” is a task that current AI is very well suited for. Bad regimes could entrench themselves. Military applications of AI are pretty alarming too.

2 Likes

Some solid discussion here:

The Trickiness of Evaluating LLMs for General Abilities

All of the results I’ve discussed here point to the trickiness of evaluating LLMs for general reasoning abilities, which I wrote about in a recent column for the journal Science. When we test AI systems for humanlike mental abilities, we have to keep in mind possibilities for data contamination (testing on items in or very similar to those seen in the training set) and “shortcuts”—spurious statistical associations or pattern-matching that can produce correct answers without requiring the general underlying abilities the evaluation is supposed to test. The authors of another paper testing LLM reasoning abilities put it this way: “Shortcut learning via pattern-matching may yield fast correct answers when similar compositional patterns are available during training but does not allow for robust generalization to uncommon or complex examples.”

NFTs

1 Like

I think human health could be a really interesting application. Combining large amounts of data from things like gene mapping and fitness monitoring over time seems like it could yield a lot of insights into disease progression/prevention/treatment.

What articles by those authors should I look at?

I read a few.

One interesting one by the first author was exploring how you can combine multiple instances of chatbots with calculators and Wolfram Alpha. (Exactly what I was talking about). While the author was complaining that it didn’t work that well, the approach did seem able to solve a good proportion of challenging maths puzzles.
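The basic shape of that “chatbot plus calculator” approach is pretty simple, even if it didn’t work that well in the author’s tests. A rough sketch, assuming a hypothetical `ask_llm` stand-in for the chat API as in the earlier sketch (a real version would hand the expression to Wolfram Alpha’s API or a proper math library rather than `eval`):

```python
import re

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for your chat API.
    raise NotImplementedError

def solve_with_calculator(question: str) -> str:
    # Ask the model to delegate arithmetic instead of guessing at it.
    reply = ask_llm(
        "Answer the question. If you need arithmetic, output a single line of "
        f"the form CALC: <python expression> and nothing else.\n\n{question}"
    )
    match = re.match(r"\s*CALC:\s*(.+)", reply)
    if match:
        # Evaluate the requested expression with an actual calculator.
        # (eval on untrusted output is unsafe outside a toy sketch.)
        result = eval(match.group(1), {"__builtins__": {}}, {})
        reply = ask_llm(
            f"Question: {question}\nThe calculation gave: {result}\n"
            "Now state the final answer."
        )
    return reply
```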

The second author talks about cross-contamination of input data, including the observation that ChatGPT is worse at coding challenges posted after its training data cutoff.

My reflection on that would be that it’s still capable of solving new coding challenges, just not as well as on ones it has seen before.

Now, when I say it’s solved language, I’m not saying that it’s done everything, but I really think we’ve reached a threshold. It’s now just building on top of this.

This is similar to the observation of many AI researchers and commentators who’ve gone from “this is really difficult and may never be solved” to “yeah, this is going to be solved now, possibly sooner” on general intelligence.

The observation that we think all hard problems require real intelligence, right up until the second after a computer can do it, seems relevant here.

I’ve been thinking something similar. For some of the hugest problems we haven’t been able to solve (aging, cancer, heart disease, etc.), being able to review and cross-reference an untold number of studies would take millions of man-hours. If this work can be done by AI in any kind of accelerated time frame, it could be key to unlocking some of the greatest medical challenges of our time.

The same could be said for things like engineering, space travel, and mathematics.

2 Likes

I’m going to take a run at the “so what”.

Thought one.

I think we are going to see rapid mass destruction of white collar industries, at a completely unprecedented pace.

A lot of the thinking about AI focuses on customer-service-type roles. I think this reflects our challenges understanding black swan events and exponential growth.

I.e. it seems unlikely that AI will spend any meaningful time being just good enough to put a call centre employee out of work but not good enough to trash 90% of other white collar workers, as those jobs are all pretty close together on the intelligence-required scale.

What should we do professionally if we really believed that 90% of our jobs (as in: you, me, everyone on this board) were going to be eradicated in 3 to 5 years?

Thought two.

There’s this idea that one way to protect your job is to be good at talking to AI and understanding its capabilities.

I don’t think this specifically will protect anyone. Mainly because the AI is going to keep getting better at talking to people, so the skill of talking to AI is going to get less and less relevant.

Thought three.

In a period of rapid and mass job destruction, what jobs will there be? Or rather, what means of living?

I see the following broad categories:

  1. Guide and apply the outputs of the AI.

This means things like process redesign, firing people, implementing AI-led projects, etc. These jobs will, however, shrink as the AI gets better at managing itself.

  2. Physical support to the AI. Physically doing what the AI cannot.

  3. Own the AI; it’s the means of production. Owning it is going to make people rich.

  4. Fill the rapidly disappearing gaps. The things AI can’t do yet, whether that’s physical as above or professional.

  5. Designing and coding AI. This is not going to be prompt engineering and all that low-skill stuff. It’s going to be hardcore coding, with the support of AI tools, to further improve base AI and solve specific problems with it.

  6. Owning other physically valuable stuff. Oil, land, housing, factories, etc.

For category 4 specifically (the office and professional gaps): I think there are not going to be many of these, and we will all be fighting for the same shrinking pool.

A bit unstructured, but I think we need to think about this. Because otherwise we’re all gonna be old and broke.

An LLM is nowhere near as capable as people think it is. At a fundamental level it just predicts the next word and doesn’t care if that next word makes sense or is correct; it just needs to be the most likely next word. That is fine for Google searches, summarizing, and writing stories, but even in technical support this is already limiting its impact, as it happily recommends things that can have disastrous outcomes.
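To make the “it just predicts the next word” point concrete, here’s a toy sketch of the loop. The bigram table is made up for illustration and has nothing to do with any real model’s internals; the point is just that each step picks whichever word is most likely to come next, with no check that the result is true.

```python
# Toy autoregressive generation: repeatedly pick the most likely next word.
bigram_probs = {
    "the":    {"cat": 0.5, "answer": 0.5},
    "cat":    {"sat": 0.9, "is": 0.1},
    "sat":    {"down": 1.0},
    "answer": {"is": 1.0},
    "is":     {"wrong": 0.6, "right": 0.4},  # "most likely" is not "correct"
}

def generate(start: str, steps: int = 4) -> list[str]:
    words = [start]
    for _ in range(steps):
        candidates = bigram_probs.get(words[-1])
        if not candidates:
            break
        # Greedy decoding: take the single most probable continuation.
        words.append(max(candidates, key=candidates.get))
    return words

print(" ".join(generate("the")))     # the cat sat down
print(" ".join(generate("answer")))  # answer is wrong
```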

5 Likes

Agree with @Dutch101. From what I’ve seen it’s basically just a Google search that comes back in a human-readable format as opposed to a list of answers. It’s really good at writing essays but not great at actually solving problems.

1 Like

This I 100% agree with. Chris Hayes’s podcast had an hour-long interview with a professor who has been thinking about and studying AI and its implications for over 20 years.

The two main takeaways are:

  1. It’s best to think of an LLM response as “What would an answer to this question sound like?” It’s really just trying to mimic the most likely human response to the question or prompt based on what it has been trained on.

  2. We shouldn’t underestimate the human and technological capital required to run and train these models. Part of the backbone of why they work is tens of thousands of man-hours from people in sub-Saharan Africa going through prompt after prompt to tweak them to sound more like human speech. There is also just an incredible amount of computing power required to ingest and process the information they are trained on, which isn’t easily replicable by tons of new companies.

2 Likes

This is not actually true. Pre-training is done with next-token prediction, but the key to the usability of LLMs is the later training stage, where the model is taught not just to fill in a plausible next word but also to follow instructions, be truthful, etc. This is by no means a perfect process, but it’s simply not the case that the models don’t care about being accurate or making sense. They do; they just aren’t smart enough to do so consistently yet.
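For what it’s worth, the difference between the two stages can be sketched in toy form. This is not any lab’s actual training code; `token_prob` is a hypothetical stand-in for the model’s predicted probability of a token given its context. The point is just that pre-training scores every next token of raw text, while instruction tuning applies the same next-token loss only to the assistant’s response in curated prompt/response pairs (with further preference-based training on top).

```python
import math

def nll(tokens, token_prob, start=0):
    """Average negative log likelihood of tokens[start:], each given the tokens before it."""
    losses = [-math.log(token_prob(tokens[:i], tokens[i]))
              for i in range(max(start, 1), len(tokens))]
    return sum(losses) / len(losses) if losses else 0.0

def pretraining_loss(raw_text_tokens, token_prob):
    # Predict every next token of raw web text.
    return nll(raw_text_tokens, token_prob)

def instruction_tuning_loss(prompt, response, token_prob):
    # Same objective, curated data: loss only on the response tokens.
    return nll(prompt + response, token_prob, start=len(prompt))
```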

For jobs, I think we will increasingly rely upon AGI liaisons, where our specialty or expertise is the driving factor in public trust in the AI’s function or service. I think that, as long as we have any say in the matter, this will be an inescapable market function, even if the AI otherwise seems indistinguishable. Though I imagine the AI will reach a stage where we are unable to discern the difference, which is why laws are being passed that require a kind of watermark on anything produced by AI. It’s just that humans need human connection. We need it so badly that we will accept a poorer decision just because it came from a person.

As for the quality of AI and whether it is intelligent, I think in some sense our questions about AI can be nonsense or just not relevant. Most people are looking for discernible human motivations that drive the connections leading to the AI’s output, but an AI may be thinking and networking in ways beyond our awareness, simply because we can’t relate to what drives that intelligence to think, explain, and make decisions.

The same thing happens when we go out looking for extraterrestrial life. We may not recognize it even if we find it. Or whether we have already found it.

Research shows a lot of what humans perceive as conscious choice happens on a subconscious level. We perceive ourselves as making decisions, but what’s really happening is we become aware of what we “decided” moments after that decision was made. Things like this illuminate how little we understand human consciousness and intelligence, let alone what those look like for a life form that isn’t composed of our networked biological components. How would we know whether there’s a different kind of aware consciousness in the universe, if we take for granted that these many networked processes have the same emergent property: an intelligence that thinks it is causal when really it’s only aware of things that are happening on their own anyway?

The one thing I am looking for is what David Deutsch calls explanatory knowledge, which is a kind of universal computation that we should expect to see from any kind of intelligence, because it is a brute fact he believes is proven by quantum theory (but could be false if quantum theory is false). Memory, processing power, and so on limit an intelligence, but given sufficient resources the process is the same and the desired computation/output may finally be achieved.

He did a recent interview with Sean Carroll for Mindscape. Here are some good quotes.

So I prefer when talking about these deep things, I prefer not to refer to humans specifically, because if there are extraterrestrial civilizations, for example, then they will necessarily have this property too. Because they couldn’t have become civilizations and make flying saucers and so on without explanatory knowledge. And the same will be true once we have artificial general intelligence. They will also have this properties. I prefer to talk about all those kinds of things, kinds of entities, as people. Humans are people, extraterrestrials are people, AGIs will be people. And I argue in my book that there’s nothing beyond that. That there may be AGIs that think many times faster than we do, but there aren’t any that are in principle capable of connecting the universe with champagne bottles any more than we can.

I think human brains have two kinds of universality that are essential to this. One of them is fairly uncontroversial among sort of scientifically-minded people, and the other one is very controversial, but I think just as compelling. So the one that’s uncontroversial is that our brains are Turing complete. That is, we can execute any program that can be executed at all. Now, it might take us more than a lifetime. It might require more memory than we have, but we can augment our memory, we can augment our lifetime, either by living longer or by having a tradition of doing certain things over generations. So those things aren’t essential.

You know, we are accustomed to saying that the computers that we are having this conversation over are Turing complete, even though they have only finite speed and finite memory capacity. But we know that that those are trivial restrictions because they can, however complex the program that we want to execute with them, we could do it if we had a bit more memory and a bit more speed.

So we’re as confident as we can be that when the aliens visit us or when the AGI become our new overlords, that they will not be able to compute non-Turing computable functions. So that’s as, or more known to us than other bits of science or bits of physics. So, that’s the uncontroversial part, although you say many people aren’t, you know, that it’s not so familiar to many people. Yes.

The other part is, I think… And that’s a… By the way, Turing completeness is a property of hardware. It’s a property of the brain. It’s a property of computers. The other kind of universality, explanatory universality, is a property of software, which I say we have, our software has that property, and no other surviving organism on Earth has explanatory universality. Although we know, basically, for sure that there used to be species related to us on Earth that also had explanatory universality and they died out, which should be a warning to us.

…like Neanderthals, and I think going back to Homo erectus. Anything that had campfires necessarily has the thing that we have. There is only, again, there aren’t gradations of it in the same way there aren’t gradations of Turing universality. You either have it or you don’t. It’s possible that you are rather impeded in using it because you don’t have enough memory or whatever, but the basic thing is all or nothing. And I think the same thing is true of explanatory universality, because this, if I can put it in my idiosyncratic way, which I like, the… It’s to do with optimism.

Full transcript and episode:

1 Like

Where I net out on the most important questions:

-The Doomsday Scenarios of AI turning on us are way overblown. So many of the ways that AI could potentially cause actual harm can be stopped with a simple flip of a power switch. They don’t have access to the actual physical infrastructure of our world and only do when humans say they do.

-The thing that concerns me most in my lifetime is a continued acceleration of the late-stage-capitalism bullshit we already live in. A world where the lower-level white collar jobs in which people in their 20s get a chance to build skills, figure out how to work, and figure out what interests them enough to turn into a career are all replaced by AI. Or at least enough of them that the bar to get your foot in the door in tons of careers becomes the same staffing process as Hollywood agency mail rooms, where it’s 90% nepotism and 10% Ivy League gunners.

1 Like

Instructions and training cannot make an LLM truthful. If you have information saying otherwise then I would love to see it. Even the latest internal training I had on our LLM continues to stress that point, because it means deploying an LLM carries a risk to the company: who is responsible if our LLM gives an answer our customer uses and that answer causes issues?

I’m not sure if this is a good movie, but I loved the 2014 flick Transcendence. I thought it was the mirror image of Ex Machina, which is also a great AI movie.

In this clip, Paul Bettany (who plays the sentient AI Jarvis and then Vision in the MCU) argues that even if one could transfer a human consciousness into a computer, the real person will always die. What we get will just be a digital approximation.

But that seems to me to horribly misconstrue just how much of a person needs to be complete in order for that person to still be that person, e.g. Johnny Got His Gun, where all that remains is a torso and a brain.

One thing that has always bothered me about that movie: the ending is shown as somehow the good guys (sort of) winning, but in reality, with all electronics, power, etc. no longer working, you’d have initial deaths in the millions (planes, cars, etc.) and then subsequent deaths in the billions.

1 Like