Douchebag 2.0—an Elon Musk company

AI is probably better than humans at 99% of driving. But that 1% of edge cases is where you need the context of human knowledge and the full range of human perception of the world.

Edge cases suck. Programmers hate them. They’re usually some scenario that the business users barely care about, but that still needs to work. And they take 90% of your programming effort. They’re all work and no glory. And they really gunk up your code.

I see driving as basically infinite edge cases, except maybe someone dies if you don’t handle them right.

I’m actually encouraged by the Cruise model of cars driving around cities at reasonable speeds, and just shutting down if they encounter something they can’t deal with, until a human can remotely steer them out of the jam. This seems reasonable to me. At least for a few decades until you really do get almost all the edge cases somehow codified.

3 Likes

Not how current self driving works. If we ever get to the point where self driving cars are contemplating trolley problems I will happily prefer them driving over any human driver.

Except they absolutely do in Bobman’s scenario of the AI taking over when it was 90% sure an accident was imminent, which started this whole line of conversation.

And you keep missing my point. I’m not saying cars contemplating trolley problems is bad. They have to. It’s inevitable.

I’m simply saying I wouldn’t want to be the one programming the thing.

And I am saying anyone making these arguments is talking about AI way more powerful than what is needed to have self-driving cars, and these arguments are pointless as they involve maybe 1 in a million accidents. Code it as random(x) and you are fine.
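
If you really insist on handling the one-in-a-million “every option is equally terrible” case, it’s a couple of lines. Toy sketch only, the option names and numbers are made up:

```python
import random

def pick_maneuver(options):
    """Pick the lowest-risk maneuver; break ties at random.

    `options` maps a maneuver name to its estimated collision risk.
    Toy example only -- the names and numbers are invented.
    """
    best_risk = min(options.values())
    best = [name for name, risk in options.items() if risk == best_risk]
    return random.choice(best)  # the "random(x)" part: any tie is a coin flip

# e.g. pick_maneuver({"brake": 0.9, "swerve_left": 0.9}) -> either one
```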

At an old job of mine our app generated PDF lab reports on demand for drug company employees who were monitoring clinical trials. Doctors involved in the trials, who actually saw patients, got official dot-matrix printed reports delivered to them. These came from a different system that had been completely vetted many years back.

My company wanted to start making the online reports available to doctors. I knew we hadn’t really hardcore validated them. And the library we were using to generate the PDF reports could do annoying things like chop off a quarter inch on the right or left of the report. Not print it on a new page, just chop it off and throw it away.

This could lead to a scenario, however unlikely, where a doctor reads a patient’s creatinine as, say, 0 when it was actually 0.9. That could lead the doctor to think the patient was in crisis and misdiagnose them. Which, although incredibly unlikely, could kill or seriously injure the patient.

I had to raise holy hell with my bosses to get them to wait until we could fully validate our PDF report. I took another job during this. But I still stayed nights and weekends during my last 2 weeks to finish the validation, because I knew no one else would do it and my bosses didn’t seem to care that much. I literally worked all weekend after my last day and turned in my computer on Monday.

I never would have gotten in trouble or gone to jail. But I couldn’t handle the possibility, however infinitesimal, that my code could kill someone.

This would be the same thing. I won’t implement a random function that has someone’s life as the potential outcome.

tl;dr - I don’t want my code to kill anyone.

3 Likes

Cool story. Self-driving cars are not doing trolley problems on who to kill, or even fully identifying obstacles. It is just not how self-driving works. Your argument might be valid if you are coding AI killer drones, but self-driving cars are making really simple decisions, and so do human drivers. It doesn’t go beyond: see obstacle, avoid yes/no. Nothing like these convoluted theoretical decisions about whether to hit the cat or the kid.

Am programmer, agree with Dutch on this. It’s part of the reason I think self driving is a ways away. All Teslas do is object recognition and avoiding objects that are bad. Trolley problems are miles beyond the capability of current self-driving systems. You’re assuming a recognition of moral dilemmas that they simply do not possess.

And trolley problems are such an insanely small percentage of possible accidents, they are irrelevant.

You still have to tell the car what to do. It’s not like these scenarios never come up.

This one happened just a few blocks from my house. This woman is disabled and drives entirely with hand controls. Supposedly her hand brakes failed, so to avoid the cars stopped at the light she swerved into the oncoming lane, which was clear except for a bunch of pedestrians getting out of church. I guess a self-driving car would never swerve into the oncoming traffic lane, even if it looked clear.

I can’t just ignore an edge case because it only comes up once in a blue moon. Even if the plan is that the system crashes, that’s still a plan. And if it doesn’t kill anyone that can be acceptable.

The argument that we’re decades away is not really relevant since I was specifically responding to Bobman’s idea about the car taking over if it was 90% sure of an impending accident. That presumes the car knows how to avoid an accident.

This entire conversation is about how to program that functionality. But dutch and now you respond like I’m screaming that the trolley problem will doom self-driving cars. That isn’t my point at all. I’m just talking about how I wouldn’t want to have to program what Bobman proposed - having the car take over if it was 90% sure of an accident.

Unless your only solution is just to slam on the brakes in all scenarios. Which is fine I guess, but could still get the driver killed if swerving is a better option. So now that we’ve introduced swerving, we might want to check what we’re swerving into.

From everything I’ve seen about self-driving cars, none of this is that far off in the future. The cars build up a model of everything around them, and try to categorize those things. I’m not saying the car is considering a moral dilemma. I’m saying someone has to program in a concrete set of rules for when to swerve and when to slam on the brakes.
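
To be concrete about what I mean by “a set of rules,” something like this is what somebody ends up writing and owning. A toy sketch; every field name and condition here is invented, not any real system’s code:

```python
from dataclasses import dataclass

@dataclass
class Situation:
    # Invented fields for illustration -- not any real system's world model.
    stopping_distance_m: float      # distance needed to brake to a stop
    distance_to_obstacle_m: float   # distance to the thing in our lane
    adjacent_lane_clear: bool       # does the model think the next lane is empty?
    oncoming_lane_clear: bool       # does the model think the oncoming lane is empty?

def choose_action(s: Situation) -> str:
    """Somebody has to decide this ordering and these conditions."""
    if s.stopping_distance_m < s.distance_to_obstacle_m:
        return "brake"              # we can stop in time: easy case
    if s.adjacent_lane_clear:
        return "swerve_adjacent"    # can't stop, but there's a clear lane
    if s.oncoming_lane_clear:
        return "swerve_oncoming"    # ...and here is where it gets uncomfortable
    return "brake"                  # no good option: scrub off as much speed as possible
```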

The car is going to do whatever it thinks is the highest-probability chance of avoiding an accident, even if all the options are hopeless. These things are driven by neural nets, nobody is there programming if… then statements.
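
The shape of it is closer to this. Toy sketch; the stub function is just a stand-in for whatever net actually scores the options, and the maneuver names are made up:

```python
def predicted_crash_probability(maneuver, sensor_state):
    """Stand-in for the learned model -- returns a dummy value here.

    In the real thing this is a neural net trained on driving data,
    not a hand-written rule table.
    """
    return 0.5

def act(sensor_state, maneuvers=("brake", "swerve_left", "swerve_right", "continue")):
    # Pick whichever option the model scores as least likely to end in a crash,
    # even if every option is bad. No if/then moral reasoning lives in here.
    return min(maneuvers, key=lambda m: predicted_crash_probability(m, sensor_state))
```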

1 Like

Oh the inanity.

The “AI” is programmed by

People

https://twitter.com/Saeko_Cut/status/1596035268452261888?t=DOTGmQjz40dzd8-62WZ9rw&s=19

I’m sure this was fucking horrible to go through but why in the world would you make some shit up about this?

2 Likes

Narcissists lie when they want to elevate themselves to the center of a story or situation. He may have lied so often about this that he actually believes it.

3 Likes

I don’t think we see real mainstream autonomous driving for at least a couple of decades here, even if the tech were perfect today, because the regulatory and liability issues are massive headaches for a semi-functionally governed country such as the US. We will be well behind the curve on rolling this out here in the US.

You can optimize it as much as you want, but the basic kernel of the idea is that the computer should take over if it’s sure you’re about to crash and it’s sure it can avoid the crash safely. This would work so well because most actual crashes are driven by inattention. If the AI doesn’t know what to do, it can just leave it to the driver.
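
The kernel of it fits in a few lines. A toy sketch; the 0.9 thresholds and the names are placeholders for whatever you’d actually tune, not a claim about any real system:

```python
CRASH_CONFIDENCE = 0.9   # placeholder threshold: how sure we are a crash is coming
AVOID_CONFIDENCE = 0.9   # placeholder threshold: how sure we are our maneuver is safe

def maybe_take_over(p_crash, p_safe_avoidance, avoidance_maneuver):
    """Intervene only when the system is confident on both counts.

    If it isn't sure a crash is imminent, or isn't sure it can avoid it
    safely, it does nothing and leaves control with the driver.
    """
    if p_crash >= CRASH_CONFIDENCE and p_safe_avoidance >= AVOID_CONFIDENCE:
        return avoidance_maneuver   # e.g. "emergency_brake"
    return None                     # stay hands-off; the human keeps driving
```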

You just have a fundamental misunderstanding of how current AI for self-driving cars works. Nobody knows what the AI will do in these extremely rare edge cases, but its reaction speed and control over the car will on average produce better outcomes than a human who probably just gambled and rationalised their great outcome after the fact anyway.
If it happens often enough to show up as a recognisable pattern, then the AI will make the better decision more often than a human.

1 Like

Memory is basically confabulation around a handful of known details at the best of times. Memories of profoundly traumatic events are totally unreliable. There are a million completely plausible explanations here that are consistent with both parents telling the truth as they remember it. My oldest daughter has been in the hospital a few times, for relatively minor issues as these things go, and I remember (or think I remember) a handful of random moments with the rest being a complete blur.

This tweet is vile. What sort of sick fuck goes around fact-checking the exact way somebody’s kid died for a Twitter dunk? Just have some modicum of decency.

5 Likes

You don’t even need to get to the level of AI to see this. Modern cars will apply corrective steering if they detect that you’re drifting out of your lane. The computer doesn’t check to see if there’s a nun carrying a baby standing up ahead before it acts, it just triggers when it meets the lane departure criteria. Most of the time, that’s the right thing to do. Some tiny fraction of the time, it will cause an accident.

All the trolley problem philosophizing is completely pointless. You can train an AI to do the thing that it predicts will have the best consequences without having a perfectly tuned objective function. If you don’t tell the AI that it’s worse to hit a nun carrying a baby than a layperson carrying a baby, it might make mistakes when that choice comes up, but it doesn’t matter that much, because very close to 100% of the time, the choice will be “crash” vs “don’t crash” or “crash at a high speed” vs “crash at a low speed” and it will make very good decisions.
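
Schematically the lane-keep part is just a trigger condition, something like this. Invented names and numbers, obviously not any manufacturer’s real firmware:

```python
LANE_DEPARTURE_THRESHOLD_M = 0.3   # invented number: how far over the line before reacting

def lane_keep_assist(distance_past_lane_line_m, turn_signal_on):
    """Apply corrective steering based on a simple geometric criterion.

    Note what is NOT in here: no check of who or what is standing ahead.
    It fires whenever the departure criterion is met, which is usually
    the right call and occasionally isn't.
    """
    if turn_signal_on:
        return 0.0   # driver signalled, so assume the drift is intentional
    if distance_past_lane_line_m > LANE_DEPARTURE_THRESHOLD_M:
        return -0.05 * distance_past_lane_line_m   # small correction back toward the lane
    return 0.0
```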

4 Likes

I hate the corrective steering when I’m driving. Oh, there’s debris in my lane and no one next to me, or there’s a stopped car on the shoulder so let me give them space… beep beep, vibration, and I have to steer harder to override.

1 Like

My wife thinks every safety device ever created for cars was just an accommodation for Bad Drivers and they should all just get off Her Road.

9 Likes