I fear you don’t understand my reading of the tweets. I am not up on Kambhampati’s work, but he pushes the right buttons for me, because I want AI intelligence, and I want AI to replace human intelligence in many areas. Intelligence is a useful thing and humans are not good at it.
My hope that this will eventually happen is one reason I stopped doing philosophy. What’s the point of iterating on 500- or 2,500-year-old disputes when we can develop systems that will actually outperform our quite limited cognitive abilities within, say, one lifetime out of the roughly 1,000 lifetimes humans have existed in their current form? If I were starting a career over, I would probably focus 100% on trying to implement real AI intelligence (despite this being a Sisyphean project at the individual level), not more traditional philosophy, and certainly not trying to squeeze current AI methods for whatever commercial purpose one can sell to people. Like Deep Blue, current AI methods are a reductio ad absurdum of their own approach: they use far too much processing power to produce their output, which indicates the methods are crude.
I like Karpathy’s comment, however. Current AI methods do plenty of interesting and useful things, and we are learning to use them, but what they do bears a significant yet limited relation to “intelligence”, which is strongly tied to truth.
As for anthropomorphism: for reasons I’ve mentioned before (Dennett’s intentional stance; Davidson’s radical interpretation; Quine’s indeterminacy of translation), humans are strong anthropomorphisers, and people will pretty much proclaim “intelligent” any system that matches the behavioral abilities of a slime mold, to say nothing of one that can recapitulate, say, Milton in a novel context because it has processed Paradise Lost and other works.
My point is that current AI is doing a lot of interesting things, and it is using a bedrock tool of human thought: distributed representation of processed inputs. But engines alone do not make an airplane; you need wings, guidance systems, structural integrity, etc. There’s a lot going on in human-level intelligence, not just a big-ass engine, but big-ass engines are interesting and cool.
It’s telling to me that something like 90% of human output about AI is people theorizing about the implications of AI (especially Yudkowsky, Bayesians, “rationalists”, and anyone trying to make money), because such idle speculation is relatively easy and untestable. If humans were smarter (which we will be in a few generations, even without general AI, thanks to progress in biology), there would be a lot less jibber-jabber about “AI” and a lot more theory and effort to actualize AI. This is why people like Kambhampati and Gary Marcus are important: they’re doing the real work.