Douchebag 2.0—an Elon Musk company

Driving on known, predictable routes allows Gatik’s autonomous vehicle system to be hyper-tuned to the characteristics of a given operating environment, its nuances and local risks, while remaining scalable across a diverse array of operating domains. Gatik’s vehicles are also monitored at all times by a remote supervisor, as well as a person in the passenger seat, to ensure smooth transitions between stages of driverless operations.

I will say that known routes are a great use case for driverless.

:vince1:

that actually looks like it works perfectly as intended

i think dedicated lanes for autonomous driving are going to happen, similarly to HOV and bus lanes. if i had to make up a regulation, i would also say some sort of light on a car that indicates it’s under autopilot. at least until everyone learns to trust it and not fuck with it.

What if, and hear me out here they were on rails?

5 Likes

Wow, then you could carry more than 4 or 5 people. Much efficiency gains! Of course you’d need to implement some kind of system whereby they stop at regular intervals.

Brilliant I never would have thought of that.

things a cryptonazi would do for $100

6 Likes

The trolley problem argument against self-driving cars is utter nonsense. It’s not like humans make informed decisions when avoiding a crash, as evidenced by the number of people who cause bad crashes avoiding cats, dogs, and soccer balls.

1 Like

I think you’re missing the point.

Assuming we’re going to let AI take over if it determines that a crash is 90% certain, then someone actually has to come up with an algorithm to tell the car exactly what to do. You can’t just tell a car “take over and prevent a crash”. Computers don’t work that way.

So let’s say an accident is imminent and the car takes over (or is already self-driving). It has two possible courses of action:

  1. Slam on the brakes and run into the back of a semi-truck. The car could slide under the frame of the truck, which could cause serious injury to the driver, or kill them. No one else would be harmed.
  2. Swerve to the right into the bicycle lane, where there is a bicyclist who could be killed by the impact, but the driver would be unharmed.

Which do you tell it to do? Or do you say “don’t take over if you run into a trolley problem” – which is still a thing that has to be programmed? And now you have a dead driver.

That was my only point. A lot of weight comes down on whoever has to design that algorithm. I wouldn’t want to do it.
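To make the point concrete, here’s a minimal sketch of what “tell the car exactly what to do” bottoms out as: an explicit, human-written branch. All names and risk numbers here are invented for illustration; they don’t come from any real autonomy stack.

```python
# Hypothetical sketch: a collision-response rule has to be written down
# explicitly by someone. Every name and number below is made up.

from dataclasses import dataclass

@dataclass
class Hazard:
    kind: str           # e.g. "truck_rear", "cyclist", "guardrail"
    injury_risk: float  # estimated 0..1 risk of serious injury on impact

def choose_maneuver(straight: Hazard, swerve: Hazard) -> str:
    """Pick between braking straight and swerving.

    Whatever the policy is (minimize total risk, never leave the lane,
    protect the occupant), it must be stated explicitly. This example
    policy: never swerve into a person, otherwise swerve only if it is
    strictly lower-risk than braking straight.
    """
    if swerve.kind == "cyclist":
        return "brake_straight"  # refuse to trade a bystander for the occupant
    if swerve.injury_risk < straight.injury_risk:
        return "swerve"
    return "brake_straight"

# The scenario from the post: semi ahead, cyclist in the lane to the right.
print(choose_maneuver(Hazard("truck_rear", 0.8), Hazard("cyclist", 0.9)))
# This policy keeps the car in its lane, and that choice was a human's.
```

Swap the first `if` for its opposite and the car kills the cyclist instead of the driver; either way, a programmer picked.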

https://twitter.com/elonmusk/status/1595869526469533701

Had a colleague visit Waymo in Phoenix (I work in the industry but not on autonomous cars). They’re fully autonomous under ideal conditions, but use a human driver if there’s rain.

1 Like

Why not just have them in the driver’s seat?

Re: driving, I actually think that people are collectively surprisingly good at driving, especially when you compare it to how terrible we are at everything else. People still do stupid things behind the wheel, but given the number of vehicles in close proximity to each other at any one time, I think it’s somewhat miraculous that our roads don’t look like war zones at all times. I would think that eventually we’d get AI drivers that are better than humans, but I honestly have no idea where the technology is or how we reach that goal, so I’m not going to weigh in on it.

2 Likes

I’ve always thought this too, esp on 55mph 2-lane highways. You’re counting on a ton of people not veering over the line and killing you!

re: self driving cars being safer than human drivers, there’s probably a median vs average issue to contend with. The x% of drivers at any given point who are drunk, high, texting, excessively speeding, racing, or otherwise distracted probably account for a lot more than x% of deaths. Simply by being much safer than these people, self-driving can be safer “than average,” but that doesn’t mean self-driving is an improvement over a good or even median driver.
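A toy calculation makes the median-vs-average point concrete. The rates below are entirely made up, chosen only to show the shape of the effect:

```python
# Made-up numbers: a small impaired fraction can dominate the average
# crash rate while the median driver stays much safer than that average.

impaired_share = 0.05   # 5% of drivers drunk/texting/etc. at any moment
impaired_rate = 50.0    # crashes per million miles (hypothetical)
sober_rate = 2.0        # crashes per million miles (hypothetical)

avg = impaired_share * impaired_rate + (1 - impaired_share) * sober_rate
print(round(avg, 2))    # ~4.4, more than double the typical driver's 2.0

# A hypothetical AV crashing at 3.0 per million miles would beat the
# *average* human (4.4) while still being worse than the *median* (2.0).
```

So “safer than the average driver” can be true even when most individual drivers would be worse off handing over the wheel.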

2 Likes

I already had this convo above. A human driver is making the exact same decisions that we’d be asking an AI to make. So what’s the difference? In one instance the human is deciding in real time; in the other, the human is deciding before the fact (programming the AI).

Is one more problematic than the other?

Nobody except philosophy majors cares about that, because it’s not a decision matrix actual AI works against. AI is not going to make decisions like that, just like human drivers don’t. They will react to whatever happened first and avoid that, possibly causing a worse outcome, the same way drivers avoiding cats end up killing people.

Most of it is that people are just freaked out about ceding control over saving their own asses. People want the freedom to save themselves / their families in that one-in-a-billion scenario where the car would kill them to save others, even if it means being far, far less safe on the road if AI runs everything.

2 Likes

I feel like non-programmers aren’t understanding this. You don’t just give the computer a set of basic life rules and let it make its own decisions (like a human would). You have to tell the computer very specific things about whose life to favor in specific scenarios.

I’m not making any judgements about which is more problematic or which is safer. I’m simply saying I wouldn’t want to be the one to program into the computer “the driver’s life is worth more than a bicyclist”.

This is wrong. Computers can process stuff millions of times faster than humans. Humans just let their lizard brain react in the moment and sort it out later.

For a computer - an accident scenario is basically like a human sitting down for a few days to analyze the problem and come up with solutions.

You don’t just train the AI in accident scenarios and set it off and running. Any sort of accident avoidance has to be coded into the software with very specific rules.

For one thing, a completely driverless car with no passengers would have very different rules. It would want to sacrifice itself before hurting anyone. A driverless car with passengers only in the back might have different rules as well. All this stuff has to be coded by a human. And I wouldn’t want to be that human. That’s all I’m trying to say.
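A minimal sketch of that point, with invented names and a deliberately oversimplified rule: even the empty-car vs. occupied-car distinction is an explicit branch someone had to write.

```python
# Hypothetical, oversimplified: whether a severe self-damaging maneuver
# (e.g. steering into a barrier) is permitted depends on occupancy, and
# each case is a rule a human author committed to in advance.

def may_sacrifice_vehicle(occupants: int) -> bool:
    if occupants == 0:
        return True   # empty delivery pod: wreck it before hurting anyone
    return False      # occupied: the harder occupant-vs-bystander rules apply

print(may_sacrifice_vehicle(0))  # True
print(may_sacrifice_vehicle(2))  # False
```

A real system would need far more cases (rear-seat-only, child seats, etc.), and every one of them is another value judgment baked in at programming time.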

2 Likes