For jobs, I think we will increasingly rely on AGI liaisons: humans whose specialty or expertise anchors public trust in the AI’s function or service. As long as we have any say in the matter, I think this will be an inescapable market function, even if the AI otherwise seems indistinguishable from a person. Still, I imagine AI will reach a stage where we are unable to discern the difference, which is why laws are being passed that require a kind of watermark on anything produced by AI. It’s just that humans need human connection. We need it so badly that we will accept a poorer decision just because it came from a person.
As for the quality of AI and whether it is intelligent, I think some of our questions about AI may be nonsense, or simply not relevant. Most people look for discernible human motivations behind the AI’s output, but an AI may be thinking and networking in ways beyond our awareness, simply because we can’t relate to whatever drives that intelligence to think, explain, and make decisions.
The same thing happens when we go looking for extraterrestrial life. We may not recognize it even if we find it, or we may have found it already without realizing.
Research shows that a lot of what humans perceive as conscious choice happens on a subconscious level. We perceive ourselves as making decisions, but what’s really happening is that we become aware of what we “decided” moments after the decision was made. Findings like this illuminate how little we understand human consciousness and intelligence, let alone what those look like for a life form that isn’t composed of our networked biological components. If our own intelligence is an emergent property of many networked processes, one that thinks it is causal when it is really just aware of things that are happening on their own anyway, how would we recognize a different kind of aware consciousness elsewhere in the universe?
The one thing I am looking for is what David Deutsch calls explanatory knowledge, which he ties to the universality of computation: a property we should expect of any intelligence, and one he believes follows from quantum theory (so it could be false if quantum theory is false). Memory and processing power limit an intelligence, but given sufficient resources the process is the same, and the desired computation or output can eventually be achieved.
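To make that resource point concrete, here is a minimal sketch of the idea in Python (my own illustration, not anything from Deutsch or the interview; the `run_turing_machine` function and the `increment` rule table are made up for the example). The mechanism is a fixed, tiny rule table; the tape is a dict so it can grow without bound, which means the only limits on what it computes are memory and steps, not the mechanism itself.

```python
# A toy Turing-machine simulator: the rule table is the fixed "hardware";
# the dict-based tape can grow without bound, so the only limits are
# memory (tape) and time (step budget), not the mechanism itself.

def run_turing_machine(rules, tape, state="start", steps=10_000):
    """Run a Turing machine until it halts or exhausts its step budget.

    rules maps (state, symbol) to (new_symbol, move, new_state).
    """
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(steps):
        if state == "halt":
            return "".join(tape[i] for i in sorted(tape))
        symbol = tape.get(head, "0")  # unvisited cells read as blank "0"
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    raise RuntimeError("step budget exhausted: a resource limit, not a limit in principle")

# Example machine: increment a binary number that ends in a "$" marker.
increment = {
    ("start", "0"): ("0", "R", "start"),  # scan right to find the marker
    ("start", "1"): ("1", "R", "start"),
    ("start", "$"): ("$", "L", "carry"),  # turn around at the end of input
    ("carry", "1"): ("0", "L", "carry"),  # 1 plus carry is 0, keep carrying
    ("carry", "0"): ("1", "L", "halt"),   # absorb the carry and stop
}

print(run_turing_machine(increment, "1011$"))  # prints "1100$" (11 + 1 = 12)
```

Give it more tape or a bigger step budget and the same five rules handle arbitrarily large inputs; that, in miniature, is the sense in which an intelligence is limited by resources rather than in principle.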
He did a recent interview with Sean Carroll for Mindscape. Here are some good quotes:
So when talking about these deep things, I prefer not to refer to humans specifically, because if there are extraterrestrial civilizations, for example, then they will necessarily have this property too. Because they couldn’t have become civilizations and built flying saucers and so on without explanatory knowledge. And the same will be true once we have artificial general intelligence. They will also have this property. I prefer to talk about all those kinds of entities as people. Humans are people, extraterrestrials are people, AGIs will be people. And I argue in my book that there’s nothing beyond that. There may be AGIs that think many times faster than we do, but there aren’t any that are in principle capable of connecting the universe with champagne bottles any more than we can.
I think human brains have two kinds of universality that are essential to this. One of them is fairly uncontroversial among scientifically minded people, and the other one is very controversial, but I think just as compelling. So the one that’s uncontroversial is that our brains are Turing complete. That is, we can execute any program that can be executed at all. Now, it might take us more than a lifetime. It might require more memory than we have, but we can augment our memory, we can augment our lifetime, either by living longer or by having a tradition of doing certain things over generations. So those things aren’t essential.
You know, we are accustomed to saying that the computers we are having this conversation over are Turing complete, even though they have only finite speed and finite memory capacity. But we know that those are trivial restrictions because, however complex the program we want to execute with them, we could do it if we had a bit more memory and a bit more speed.
So we’re as confident as we can be that when the aliens visit us, or when the AGIs become our new overlords, they will not be able to compute non-Turing-computable functions. That’s as well known to us as, or better known than, other bits of science or physics. So that’s the uncontroversial part, although it’s not so familiar to many people.
The other part is, I think… By the way, Turing completeness is a property of hardware. It’s a property of the brain. It’s a property of computers. The other kind of universality, explanatory universality, is a property of software, which I say our software has, and no other surviving organism on Earth has. Although we know, basically for sure, that there used to be species related to us on Earth that also had explanatory universality, and they died out, which should be a warning to us.
…like Neanderthals, and I think going back to Homo erectus. Anything that had campfires necessarily has the thing that we have. Again, there aren’t gradations of it, in the same way there aren’t gradations of Turing universality. You either have it or you don’t. It’s possible that you are rather impeded in using it because you don’t have enough memory or whatever, but the basic thing is all or nothing. And I think the same is true of explanatory universality, because, if I can put it in my idiosyncratic way, which I like, it’s to do with optimism.
Full transcript and episode: