Self-driving cars are often marketed as safer than human drivers, but new data suggests that may not always be the case.
Citing data from the National Highway Traffic Safety Administration (NHTSA), Electrek reports that Tesla disclosed five new crashes involving its robotaxi fleet in Austin. The new data raises concerns about how safe Tesla’s systems really are compared to the average driver.
The incidents included a collision with a fixed object at 17 miles per hour, a crash with a bus while the Tesla vehicle was stopped, a crash with a truck at four miles per hour, and two cases where Tesla vehicles backed into fixed objects at low speeds.
Are we surprised?
Got this saved for the next time someone tells me that a robot can drive better than a human. They almost had me there, but data doesn’t lie.
This is more specific to Tesla than self-driving in general, as Musk decided that additional sensors (like the LiDAR and RADAR on other self-driving vehicles) are a problem. Publicly he’s said that it’s because of sensor contention - that if the RADAR and cameras disagree, then the car gets confused.
Of course that raises the opposite problem: when the camera or image recognition is wrong, there’s nothing to tell the car otherwise - see the Tesla drivers decapitated by trailers that the car didn’t see. Additionally, I assume Teslas have accelerometers, so either the self-driving model is ignoring potential collisions or it’s still doing sensor fusion.
Not to mention we humans have multiple senses that we use when driving; this is one reason why steering wheels still mostly use mechanical linkages - we can “feel” the road, we can detect when the wheels lose traction, we can feel inertia as we go around a corner too fast. On a related tangent, the Tesla Cybertruck uses steer-by-wire instead of a mechanical linkage.
This is why many (including myself) believe Tesla has a much worse safety record than Waymo. I’ve seen enough drunk and distracted drivers to believe that a robot will eventually drive better than a human. Don’t get me wrong, I still have concerns about the technology, but Musk and Tesla have a history of ignoring safety concerns - see the number of deaths related to his desire to have non-mechanical door handles and hide the mechanical backup.
A robot can theoretically drive better than a human because emotions and boredom don’t have to be involved. But we aren’t there yet, and Tesla is trying to solve the hard mode of pure vision without range finding.
Also, I suspect that the ones we have are set up purely as neural networks where everything is determined by the training. That likely means there’s some random-ass behaviour for rare edge cases where it “thinks” slamming on the accelerator is as good an option as anything else - but since it’s a black box no one really understands, there’s no way to tell until someone ends up in that position.
The tech still belongs in universities, not on public roads as a commercial product/service. Certainly not by the type of people who would at any point say, “fuck it, good enough, ship it like that”, which seems to be most of the tech industry these days.
Reddit loved that idea.
Tbf this is just where ex-redditors go, so don’t think we are immune here.
Musk = POS Nazi.
POS Nazi *pedophile

Use lidar you ketamine saturated motherfucker
Only 4x? Wow, they’re way better than I expected then.
It’s Austin. The traffic is so shitty you can’t go fast enough to get in a wreck most of the time.
I live in the area, and can confirm anecdotally that the Teslas are bad drivers and the Waymos generally are excellent.
a crash with a bus while the Tesla vehicle was stopped
Okay, idk why we would blame this one on the self driving car…
a collision with a heavy truck at 4 mph, and two separate incidents where the Tesla backed into objects, one into a pole or tree at 1 mph and another into a fixed object at 2 mph.
The difference is that a lot of these are never reported when it’s done by a human driver. I seriously doubt the real rate is 4x higher than humans’. I’m not saying the self-driving cars are good. I’m just saying human drivers are really bad.
It’s important to draw the line between what Tesla is trying to do and what Waymo is actually doing. Tesla’s rate is reportedly 4x higher than human drivers’, while Waymo’s is lower.
Not just lower, a tiny fraction of the human rate of accidents:
https://waymo.com/safety/impact/
Also, AFAIK this includes cases where the Waymo car isn’t even slightly at fault. There have been 2 deaths involving a Waymo car. In one case, a motorcyclist hit the car from behind, flipped over it, then was hit by another car and killed. In the other case, ironically, the car actually at fault was a Tesla being driven by a human who claims he experienced “sudden unintended acceleration”. It was doing 98 miles per hour in downtown SF when it hit a bunch of stopped cars at a red light, then spun into oncoming traffic and killed a man and his dog who were in another car.
Whether or not self-driving cars are a good thing is up for debate. But, it must suck to work at Waymo and to be making safety a major focus, only to have Tesla ruin the market by making people associate self-driving cars with major safety issues.
Not just lower, a tiny fraction of the human rate of accidents:
https://www.iihs.org/research-areas/fatality-statistics/detail/state-by-state
Well, no. Let’s talk fatality rate. According to the linked data, human drivers are at
1.26 deaths per 100 million miles traveled
Vs Waymo 2 deaths per 127 million miles :)
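Putting both figures on the same per-100-million-mile basis (using only the numbers quoted in this thread; the “2 deaths per 127 million miles” framing for Waymo is the commenter’s own):

```python
# Rough comparison of the two figures quoted above. The 1.26 human
# figure is from the linked IIHS data; counting both Waymo-adjacent
# deaths against Waymo's miles is the commenter's framing.

human_rate = 1.26  # deaths per 100 million miles (IIHS, all drivers)

waymo_deaths = 2            # fatalities in collisions involving a Waymo
waymo_miles_millions = 127  # miles driven, in millions
waymo_rate = waymo_deaths / (waymo_miles_millions / 100)

print(f"human: {human_rate:.2f} per 100M miles")   # 1.26
print(f"waymo: {waymo_rate:.2f} per 100M miles")   # 1.57
```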
Well, Waymo’s really at 0 deaths per 127 million miles.
The 2 deaths are deaths that happened near Waymo cars, in collisions involving the Waymo car. Not only did the Waymo not cause the accidents, it wasn’t even involved in the fatal part of either event. In one case a motorcyclist was hit by another car, and in the other one a Tesla crashed into a second car after it had hit the Waymo (and a bunch of other cars).
The IIHS number takes the total number of deaths in a year, and divides it by the total distance driven in that year. It includes all vehicles, and all deaths. If you wanted the denominator to be “total distance driven by brand X in the year”, you wouldn’t keep the numerator as “all deaths” because that wouldn’t make sense, and “all deaths that happened in a collision where brand X was involved as part of the collision” would be of limited usefulness. If you’re after the safety of the passenger compartment you’d want “all deaths for occupants / drivers of a brand X vehicle” and if you were after the safety of the car to all road users you’d want something like “all deaths where the driver of a brand X vehicle was determined to be at fault”.
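The numerator/denominator mismatch described above can be shown with made-up numbers (all values here are hypothetical, purely to illustrate why the two must be paired):

```python
# Hypothetical figures, just to show the mismatch. Suppose brand X
# drove 1% of all miles; dividing ALL deaths by only brand X's miles
# inflates its apparent rate 100-fold versus a matched numerator.
total_deaths = 40_000
total_miles = 3_200_000_000_000          # ~3.2 trillion vehicle miles
brand_x_miles = total_miles * 0.01       # brand X's share of miles
brand_x_deaths = 400                     # deaths in crashes involving brand X

wrong_rate = total_deaths / (brand_x_miles / 1e8)    # mismatched: 125.0
right_rate = brand_x_deaths / (brand_x_miles / 1e8)  # matched: 1.25

print(wrong_rate, right_rate)
```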
The IIHS does have statistics for driver death rates by make and model, but they use “per million registered vehicle years”, so you can’t directly compare with Waymo:
https://www.iihs.org/ratings/driver-death-rates-by-make-and-model
Also, in a Waymo it would never be the driver who died; it would be other vehicle occupants, so I don’t know if that data is tracked for other vehicle models.
I seem to recall a homeless woman that got killed like right away when they released these monstrosities on the road, because why pay people to do jobs when machines can do them for you? I’m sure that will work out for everyone, with investment income.
You seem to recall wrongly.
Unless you found the video I will trust my memory.
The video of the thing that didn’t happen?
Optical recognition is inferior and this is not surprising.
Yeah that’s well known by now. However, safety through additional radar sensors costs money and they can’t have that.
I don’t think it’s necessarily about cost. They were removing sensors both before costs rose and supply became more limited with things like the tariffs.
Too many sensors also cause issues; adding more is not an easy fix. Sensor fusion is a notoriously difficult part of robotics. It can help with edge cases and verification, but it can also exacerbate issues. Sensors will report different things at some point. Which one gets priority? Is a sensor failing or reporting inaccurate data? How do you determine what is inaccurate if the data is still within normal tolerances?
More on topic though… My question is why is the robotaxi accident rate different from the regular FSD rate? Ostensibly they should be nearly identical.
Which one gets priority?
The one that says there’s a danger.
Alright, so the radar is detecting a large object in front of the vehicle while travelling at highway speeds. The vision system can see the road is clear.
So with your assumption of listening to whatever says there’s an issue, it slams on the brakes to stop the car. But it’s actually an overpass, or overhead sign that the radar is reflecting back from while the road is clear. Now you have phantom braking.
Now extend that to a sensor or connection failure. The radar or a wiring harness is failing and sporadically reporting back close contacts that don’t exist. More phantom braking, and this time with no obvious cause.
Now you have phantom braking.
Phantom braking is better than Wile E. Coyote-ing into a wall.
and this time with no obvious cause.
Again, better than not braking because another sensor says there’s nothing ahead. I would hope that flaky sensors is something that would cause the vehicle to show a “needs service” light or something. But, even without that, if your car is doing phantom braking, I’d hope you’d take it in.
But, consider your scenario without radar and with only a camera sensor. The vision system “can see the road is clear”, and there’s no radar sensor to tell it otherwise. Turns out the vision system is buggy, or the lens is broken, or the camera got knocked out of alignment, or whatever. Now it’s claiming the road ahead is clear when in fact there’s a train currently in the crossing directly ahead. Boom, now you hit the train. I’d much prefer phantom braking and having multiple sensors each trying to detect dangers ahead.
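The “brake if any sensor reports danger” policy being argued for here can be sketched as a minimal fusion rule (the class, field names, and health check are made up for illustration):

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    name: str
    obstacle_detected: bool
    healthy: bool  # hypothetical self-test / plausibility check

def should_brake(readings: list[SensorReading]) -> bool:
    """Conservative fusion: brake if ANY healthy sensor reports an
    obstacle. This is the 'phantom braking over hitting a wall'
    trade-off; a failing sensor should additionally be surfaced as
    a service fault rather than silently outvoted."""
    return any(r.obstacle_detected for r in readings if r.healthy)

# Radar mistakes an overpass for an obstacle; camera sees clear road.
readings = [
    SensorReading("radar", obstacle_detected=True, healthy=True),
    SensorReading("camera", obstacle_detected=False, healthy=True),
]
print(should_brake(readings))  # True - the conservative policy brakes anyway
```

If the radar fails its health check, it no longer gets a vote - which is why flaky hardware should trigger a “needs service” state instead of being quietly ignored.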
FYI, the fake wall was not reproducible on the latest hardware; that test was done on an older HW3 car, not the HW4 cars operating as robotaxis.
The new hardware existed at the time, but he chose to use outdated software and hardware for the test.
Hardware that was still on the road, or something that had been recalled?
Nah, that one’s on Elon just being a stubborn bitch and thinking he knows better than everybody else (as usual).

He’s right in that if current AI models were genuinely intelligent in the way humans are, then cameras would be enough to achieve at least human-level driving skills. The problem, of course, is that AI models are not nearly at that level yet.
I am a human, and there have been occasions where I couldn’t tell if something was an obstacle on the road or a weird shadow…