Self-driving cars are often marketed as safer than human drivers, but new data suggests that may not always be the case.
Citing data from the National Highway Traffic Safety Administration (NHTSA), Electrek reports that Tesla disclosed five new crashes involving its robotaxi fleet in Austin. The new data raises concerns about how safe Tesla’s systems really are compared to the average driver.
The incidents included a collision with a fixed object at 17 miles per hour, a crash with a bus while the Tesla vehicle was stopped, a crash with a truck at four miles per hour, and two cases where Tesla vehicles backed into fixed objects at low speeds.


Optical recognition is inferior, and this is not surprising.
Yeah that’s well known by now. However, safety through additional radar sensors costs money and they can’t have that.
Nah, that one’s on Elon just being a stubborn bitch and thinking he knows better than everybody else (as usual).
He’s right in that if current AI models were genuinely intelligent in the way humans are, then cameras would be enough to achieve at least human-level driving skills. The problem of course is that AI models are not nearly at that level yet.
I am a human and there were occasions where I couldn’t tell whether something was an obstacle on the road or a weird shadow…
Yes. In theory cameras should be enough to get you up to human level driving competence but even that is a low bar.
Even if they were, would it not be better to give the car better senses?
Humans don’t have LIDAR because we can’t just hook something into a human’s brain and have it work. If you can do that with a self-driving car, why cut it down to human senses?
Exactly, with this logic why have motors or wheels?
You don’t have wheels so you shouldn’t use cars
I agree it would be better. I’m just saying that in theory cameras are all that would be required to achieve human-level performance, so long as the AI was capable enough.
Except humans have self-cleaning lenses. Cars don’t.
“So long as the AI has the same intelligence as a human brain” is a pretty big assumption. That assumption is in sci-fi territory.
Yeah, that’s my point.
Cameras are inferior to human vision in many ways. Especially the ones used on Teslas.
Lower dynamic range for one.
Genuinely asking how so?
Are Tesla cameras even binocular?
just one more AI model, please, that’ll do it, just one more, just you wait, have you seen how fast things are improving? Just one more. C’mon, just one more…
I NEED ONE MORE FACKIN’ AI MODEL!!
I don’t think it’s necessarily about cost. They were removing sensors both before costs rose and supply became more limited with things like the tariffs.
Too many sensors also causes issues, adding more is not an easy fix. Sensor Fusion is a notoriously difficult part of robotics. It can help with edge cases and verification, but it can also exacerbate issues. Sensors will report different things at some point. Which one gets priority? Is a sensor failing or reporting inaccurate data? How do you determine what is inaccurate if the data is still within normal tolerances?
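To make the arbitration problem concrete, here is a toy sketch (nothing like a production stack; every name, variance, and threshold here is an invented illustration) of fusing a camera range estimate with a radar range estimate, plus a simple consistency check that flags the "which sensor do I trust?" case:

```python
import math

def fuse_range(camera_m, radar_m, camera_var=4.0, radar_var=0.25,
               disagreement_sigma=3.0):
    """Inverse-variance weighted fusion of two range estimates (meters).

    Returns (fused_range, agree). `agree` is False when the two sensors
    disagree by more than `disagreement_sigma` combined standard
    deviations -- the hard case the comment above describes: the math
    can't tell you which sensor is lying.
    """
    # Flag disagreement relative to the combined measurement noise.
    combined_sd = math.sqrt(camera_var + radar_var)
    agree = abs(camera_m - radar_m) <= disagreement_sigma * combined_sd

    # Weight each sensor by the inverse of its (assumed) noise variance,
    # so the less noisy sensor dominates the fused estimate.
    w_cam = 1.0 / camera_var
    w_rad = 1.0 / radar_var
    fused = (w_cam * camera_m + w_rad * radar_m) / (w_cam + w_rad)
    return fused, agree
```

When `agree` comes back False, the fused number is meaningless and the stack has to pick a policy: fall back to the lower-variance sensor, widen tolerances, or brake conservatively. That policy choice, not the weighted average, is the notoriously hard part.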
More on topic though… My question is why is the robotaxi accident rate different from the regular FSD rate? Ostensibly they should be nearly identical.
Regular FSD rate has the driver (you) monitoring the car, so there will be fewer accidents IF you properly stay attentive as you’re supposed to be.
The FSD rides with a safety monitor (passenger seat) had a button to stop the ride.
The driverless and no monitor cars have nothing.
So you get more accidents as you remove that supervision.
Edit: this would be on the same software versions… it will obviously get better to some extent, so comparing old versions to new versions really only tells us it’s getting better or worse in relation to past rates, but in all 3 scenarios there should still be different rates of accidents on the same software.
The unsupervised cars are very unlikely to be involved in these crashes yet because, according to the Robotaxi tracker, there was only a single one of those operational, and only for the final week of January.
As you suggest, there’s a difference in how much the monitor can really do about FSD misbehaving compared to a driver in the driver’s seat. On the other hand, they’re still forced to have the monitor behind the wheel in California, so you wouldn’t expect a difference in accident rate based on that there; would be interesting to compare.
I’m not too sure it’s about cost. It seems to be about Elon not wanting to admit he was wrong, as he made a big point of lidar being useless.