Ha, somewhere no one will ever see it. I gave up trying to get better since I have no real talent, but I’m convinced most people can draw if they have good instruction and practice.
I think so too.
I have some of my old artwork saved from when I started college as an art major. Missing the pieces I wish I’d saved, though.
I think one I’d like to have is a quick caricature of my nephew on a napkin. The brat didn’t appreciate it but his grandmother did.
Well, this is basically what actual ai would be, so your statement is like, “we need to make AI in order to make AI.” Deep learning stuff is just a simplified version of knowledge representation. We need representations that can be employed by other representations across domains.
The lack of latent context to instructions/goals is sometimes referred to as the “frame problem” and is approximately a few days older than the first ignorant attempts to make AI. The Frame Problem (Stanford Encyclopedia of Philosophy)
In fact, all human communication depends largely on the existence of an entire shared world (factual, experiential, social, ethical, goal-oriented) between people. Meaning is radically underdetermined by words or instructions.
I know a few physics professors who would vehemently deny this.
THIS
I replace my LEDs (I haven’t kept track of individual lights, so I don’t know how many exactly) just as often as I replaced any other light bulb. I did, though, have a pair of CFLs that lasted for over a decade in our downstairs bathroom.
Probably not as strongly as many programmers.
Dreyfus made many relevant critiques of AI 60 years ago. Hubert Dreyfus's views on artificial intelligence - Wikipedia He was mainly arguing against symbol manipulation, and there are many problems that extend beyond his concerns. I think we need a robust theory of mental representation that demonstrates how we form categories out of exposure to similarities and how that sort of representation supports inference. One thing is that the entirety of our experience influences our inferences, with some experience more salient than others.
I think this shouldn’t be that complex, as human thought isn’t that fancy, but it’s a stew with a lot of ingredients and it often seems like we realize it has carrots and then try to make a similar stew with 99% carrots.
Is it too much to ask to just get a system that, based on distributed representation, gives you B from A ∨ B and ¬A, and that can also infer that cats are more like dogs than fish or towels?
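Half joking, but here’s a toy sketch of those two abilities side by side. Everything in it is a hand-built stand-in for illustration — the “distributed representations” are invented feature vectors, not learned ones — so it shows the shape of the ask, not a solution to it.

```python
import math

# 1) Symbolic inference: disjunctive syllogism.
#    From (A or B) and not-A, conclude B.
def disjunctive_syllogism(disjunction, negated):
    """Given a disjunction (X, Y) and the negation of one disjunct,
    return the other disjunct."""
    a, b = disjunction
    if negated == a:
        return b
    if negated == b:
        return a
    raise ValueError("negated literal is not in the disjunction")

# 2) Graded similarity over vector representations.
#    Feature dimensions are made up for the example:
#    (furry, four-legged, domesticated, aquatic, absorbent)
vectors = {
    "cat":   [1.0, 1.0, 1.0, 0.0, 0.0],
    "dog":   [1.0, 1.0, 1.0, 0.0, 0.2],
    "fish":  [0.0, 0.0, 0.5, 1.0, 0.0],
    "towel": [0.3, 0.0, 0.0, 0.0, 1.0],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(y * y for y in v))
    return dot / (norm_u * norm_v)

print(disjunctive_syllogism(("A", "B"), "A"))    # B
print(cosine(vectors["cat"], vectors["dog"]))    # near 1: cats ~ dogs
print(cosine(vectors["cat"], vectors["towel"]))  # near 0: cats !~ towels
```

The hard part, of course, is that here the logic and the vectors live in separate boxes; the ask is one representation that supports both.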
I don’t know whether it’s more likely that we develop general AI within 50 years or never but if a physics professor explains a topic, it’s a complete and perfect explanation, according to them.
Well, except for friction.
Physics is close to geometry in that it’s mostly a very well defined system with a few well defined variables that can be manipulated. It’s the opposite of the blooming buzzing confusion of most human experience.
This is a wonderful way to add nuance to the phrase that everything is a nail just because you have a hammer.
I am thinking of the discussions in Ex Machina. I think the success of AI is when it fools people into thinking it’s real. That depends less on an autonomous artificial consciousness than on an algorithm that understands what makes sentience seem valid to the audience, regardless of whether it’s actually happening.
Then again some people say the same thing about people, but I think the whole “consciousness is an emergent illusion” is not really describing anything anymore.
A lot of my skeptical views on AI are well articulated by the guest (Gary Marcus) in this recent Ezra Klein show.
He did Sean Carroll’s podcast about a year ago.
Yeah, I mainly like his stuff. I come out of a similar cognitive science background/approach, but with more of an influence from philosophy.
Just take Hume or Locke. What they were trying to do was describe how thought/reasoning operates. The problem with Hume and Locke is we need a lot more of them, and 1000x fewer Elon Musks, Jordan Petersons, and Jerry Fodors.
Once we get real AI it will be hookers and blackjack for everyone (or human extinction, one or the other).
But this doesn’t exist and isn’t anything we’re remotely close to achieving, afaik. People may be fooled and think that’s what they’re observing, but it’s not. Like that Google engineer who got fired for saying their AI was sentient. His conversations with the bot were eerily like what happens in The Moon is a Harsh Mistress when Mannie first talks to Mike, except Mike was sentient but fictional while the Google bot is real but not sentient.
Humans will treat anything as intelligent: thermostats, Tamagotchis, cats, even Trump. We are very promiscuous with what Dennett calls the intentional stance.
I suspect they’re out there but they don’t stand out because the Musks and Petersons are louder and more seductive. Maybe a hundred years from now it’ll be clearer who we should have listened to.
It sounds like a form of projection. Like if I were a Roomba, what would I want, how would I feel, what would I be trying to communicate with this behavior? It’s no wonder people get attached and want their Roombas repaired and returned, not replaced.
def previously banned member that turned into anything that has the initials pbm