Self-driving cars are often marketed as safer than human drivers, but new data suggests that may not always be the case.

Citing data from the National Highway Traffic Safety Administration (NHTSA), Electrek reports that Tesla disclosed five new crashes involving its robotaxi fleet in Austin. The new data raises concerns about how safe Tesla’s systems really are compared to the average driver.

The incidents included a collision with a fixed object at 17 miles per hour, a crash with a bus while the Tesla vehicle was stopped, a crash with a truck at four miles per hour, and two cases where Tesla vehicles backed into fixed objects at low speeds.

  • NotMyOldRedditName@lemmy.world · 8 hours ago

    FYI, the fake-wall test was not reproducible on the latest hardware; that test was done on an older HW3 car, not the HW4 cars operating as robotaxis.

    The new hardware existed at the time, but he chose to use outdated software and hardware for the test.

      • NotMyOldRedditName@lemmy.world · edited · 6 hours ago

        As a consumer product, you are responsible for paying attention at all times and being ready to take over.

        It is completely acceptable that it does not function perfectly in every scenario, and that something like a fake wall put on the road causes issues; that is why you need to pay attention.

        There is nothing to recall about this situation.

        If the car is failing on things it shouldn’t be, like both Tesla and Waymo failing to properly stop for school buses while in autonomous mode, that does require an update. Although I’ve seen zero reports of an autonomous Tesla doing this yet, only supervised ones.

        A Tesla not stopping for a school bus in supervised mode is acceptable though, because the driver is responsible for stopping.

        Edit: and note, a problem like the school buses is a visual processing and understanding problem. Lidar won’t help with that kind of problem.

        Edit: and sorry, to be clear, that hardware is still on the road, but I’m saying it’s acceptable that hardware does it because it’s not autonomous. If the newer hardware running without supervisors was doing it, that’s another story.

          • NotMyOldRedditName@lemmy.world · 5 hours ago

            Ya, hardware that is on the road that won’t ever be autonomous without getting upgraded hardware and software, because it’s insufficient for autonomy, but this has been shown to not be a problem on the latest autonomous versions.