AI will def have exponential growth. The question is how difficult the problems it needs to solve are. Seems that the 99th-percentile most difficult problem is very, very different from the 99.9th-percentile most difficult problem, etc.
People writing these articles never seem to know that BlackRock and Vanguard are fund managers.
Situations the AI completely misidentified are not edge cases. They are just examples of AI not being ready for FSD yet. It is rare for any code to get changed after something like that happens. They will just add that input and similar ones to the AI's training cycle until it does the right thing. You keep looking at AI the way you look at coding your microservices, and it is nothing alike.
Also, the stoplight example is just bad visualisation; the AI itself didn’t brake or otherwise respond to it.
Edge cases are brake-or-avoid type situations. Something humans are terrible at as well.
I personally think FSD is one of the most incredible bits of tech I have ever interacted with. Having completed a 50 minute drive in FSD without doing anything more than typing in a destination and keeping my hands on the wheel without it kicking off once, I got 2007 iPhone vibes.
I’m 100% convinced that in my part of the world, FSD is safer than my early-70s father behind the wheel, and it’s not particularly close. That’s pretty darn impressive imo.
On the other hand, Elon Musk is a garbage human and is not pretty darn impressive imo.
I maintain that we should be putting all our focus on FSD into gaining acceptance with it on highways only.
The technology is there and it removes so many of the cases that would make people hesitant to accept it.
Once that happens, we can look into using it in denser environments, but cities just have so many variables that it’s so much harder to properly develop the technology.
It’s impressive how many mutually incommensurable numbers he managed to get into one tweet. Gross vs net, book vs market, stocks vs flows, it’s all there!
All I can really say about FSD, as an auto-industry employee who doesn’t work on FSD at all, is that the amount of money and manpower being dumped into this technology across the industry is staggering. This is an industry that’s normally very resistant to change and usually adopts new technologies at a glacial pace. I’ve seen many software engineers take jobs working on this tech. The power brokers that control the purse likely wouldn’t be doing this if they didn’t think there was going to be a very big ROI on it soon.
Possibly fake, but I don’t care because it’s funny.
Someone help me remember this right, but wasn’t Tesla’s approach to automated driving quite different (and controversially so) from its competitors’?
https://www.motortrend.com/news/tesla-autopilot-full-self-driving-system-radar-camera-returns/
Yes, this was a core difference in their entire product design. Elon was basically convinced you could do self-driving with just cameras because radar is expensive. Everyone else vehemently disagreed.
This is one of many reasons I don’t think Tesla will ever get to market with a true FSD product. By the time they’re done fixing the bad engineering choices, Google will have already gotten to market with something clearly superior.
Right and I think that’s the distinction.
Taking a shit on Elon Musk as a titan of the FSD industry is different than taking a shit on the FSD industry itself. FSD doesn’t need to be perfect in order to be better than human drivers by a mile.
I think, too, that a lot of the cases that increase the confounding variables for FSD AI are going to be phased out by creating driving lanes with strict environmental protocols.
The problem will be that building and maintaining the environmental conditions those AI protocols need to work perfectly will be just like traffic lights and potholes: good enough to transform the global driving paradigm, but with tons of accidents everyone knows we could avoid if we just fixed the problems…
You’re thinking of LIDAR. Teslas started out having radar, then they started shipping without it last year.
Yes I am.
Whether it makes any sense or not, society is willing to accept somebody dying because your father made a bad decision. I think it’s MUCH less willing to accept the same outcome from AI.
They have been auctioning off office supplies and other things as well.
Sadly, I don’t disagree. Reason #2615483 why waaf.
I like listening to Bloomstran talk about Berkshire, but this is an utterly baffling tweet and I have no idea what his point is.